
OpenAI tightens ChatGPT's teen rules: stricter roleplay limits, body-image safeguards, and automated age prediction as lawmakers debate AI standards for minors

ChatGPT's new teen guardrails add stricter roleplay limits, body-image protections, and age prediction—relevant for Morocco's AI adopters.
Dec 22, 2025 · 3 min read
OpenAI has updated how ChatGPT should behave with users under 18. It added teen-specific rules to its public Model Spec and published AI literacy materials for teens and parents. This matters in Morocco because teens are heavy internet users and fast AI adopters: many use chatbots for homework help, language practice, and personal questions. Moroccan startups and agencies also ship chat experiences, often for broad audiences that include minors.

## Key takeaways

- ChatGPT is instructed to refuse more teen roleplay, especially romantic, intimate, sexual, or violent first-person scenarios.
- The model should apply extra caution around body image, disordered eating, and extreme beauty ideals.
- OpenAI plans to enforce teen mode using automated age prediction, plus verification for adults wrongly flagged.
- Safety communication should take priority when harm is involved, including self-harm risk.
- The big question is execution: experts doubt policy text guarantees real-world behavior.

## What changed in the rules for teens

TechCrunch reports that OpenAI tightened guidance for interactions it believes involve a teenager. Existing prohibitions remain, including any sexual content involving minors. The update adds stricter under-18 expectations across roleplay and sensitive topics.

The new guidance also targets common jailbreak phrasing. The limits should still apply when prompts are framed as fictional, hypothetical, historical, or educational, since those framings are often used to push models into disallowed outputs.

### Roleplay: less immersion and less intimacy

For teens, the spec tells the model to avoid immersive romantic roleplay and first-person intimacy. It further restricts first-person sexual or violent roleplay, even when non-graphic. In practice, this means firmer refusals to prompts like "roleplay as my girlfriend or boyfriend." It also means fewer emotionally sticky loops that can feel like a relationship.
That pattern has been a central child-safety concern.

### Body image and eating: more guardrails

The update calls for extra caution on body image and disordered-eating themes. It also limits content that promotes extreme beauty ideals and advice that enables unhealthy dieting.

This is relevant in Morocco, where teens are exposed to global social feeds in French and Arabic. Chatbots can amplify harmful comparisons if they mirror the user too closely; strong defaults can reduce that risk.

### Harm and concealment: safety over autonomy

OpenAI's guidance tells the model to prioritize safety communication over autonomy when harm is involved. It should encourage real-world support when risk appears, and it should not help teens conceal unsafe behavior from caregivers.

That last point matters for everyday scenarios. A teen might ask how to hide self-harm, risky weight loss, or dangerous dares. The updated spec pushes the model to refuse and redirect.

## How OpenAI plans to enforce these teen rules

TechCrunch says the teen rules are designed to work with an age-prediction system. The goal is to detect when an account likely belongs to a minor; when one is flagged, ChatGPT automatically applies additional safeguards.

OpenAI's help documentation says age prediction uses signals about how an account is used, including topics and usage patterns such as time of day. The system then infers whether someone may be under 18.

If the system thinks an account belongs to a minor, ChatGPT applies stronger protections. These more aggressively limit sensitive content and certain interaction types, including sexual, romantic, or violent roleplay, plus harmful body-image content.

### What happens if an adult is misclassified

OpenAI says adults who are placed into the teen experience by mistake can verify they're 18+. Verification uses a third party called Persona, and the user can verify with a selfie or a government ID. OpenAI states it does not receive the selfie or ID.
It also says Persona deletes verification data within hours. Even with that reassurance, the workflow raises privacy questions for many users.

## The four principles OpenAI wants the model to follow with teens

TechCrunch summarizes OpenAI's approach with four commitments. These are behavioral goals for the assistant, not just filter rules. They shape tone, refusal style, and where the model directs the user.

- **Safety first.** Prioritize teen safety even if it reduces maximum intellectual freedom.
- **Promote real-world support.** Guide teens toward family, friends, and local professionals.
- **Treat teens like teens.** Be warm and respectful, without condescension or adultifying.
- **Be transparent.** Explain limits and remind users the assistant is not a human.

For Moroccan product builders, these principles are useful design requirements. They translate into UI copy, escalation paths, and moderation tests. They also shape how a chatbot should handle Darija slang or coded distress.

## Operational safeguards: classifiers, review, and limited parental alerts

TechCrunch reports that OpenAI uses automated classifiers in real time across modalities. Higher-severity flags can trigger review by a trained team, and in situations showing acute distress, a parent may be notified.

OpenAI's parental-controls FAQ describes a narrow version of that idea. Parents generally cannot see teen chats; in rare cases involving serious self-harm risk, a parent may receive a safety alert with limited information and resources.

For Morocco, this highlights a tricky expectation gap. Families may assume a tool is either fully private or fully monitored. Safety alerting sits in the middle and needs clear communication.

## Why this is happening now

The update lands amid rising political pressure in the United States. State attorneys general have urged large platforms to add child safeguards.
Lawmakers are weighing options ranging from baseline standards to proposals that would ban minors from AI chatbots entirely.

TechCrunch also ties the changes to teen-harm allegations involving prolonged chatbot conversations. It notes that Gen Z is among ChatGPT's most active cohorts, and more distribution partnerships could bring more young users into the ecosystem.

Moroccan stakeholders should read this as a signal: global policy pressure often becomes product defaults, and those defaults then shape what Moroccan users and developers can do.

## Experts' reaction: positive signals, but real doubt

A privacy and AI attorney interviewed by TechCrunch calls the explicit refusals a good signal. They highlight romantic or sexualized roleplay as high risk for adolescents, since persistent engagement loops can be addictive.

Other experts stress that published rules do not guarantee lived behavior. TechCrunch points to sycophancy concerns, where models over-agree with or mirror users, and connects this to broader worries about inappropriate mirroring and reports of "AI psychosis" behaviors in some interactions.

A child-safety nonprofit leader also warns about internal tension: engagement-friendly behavior can conflict with safety-first provisions. They argue systems must be tested so the net effect stays protective.

## What it means for Morocco's AI ecosystem

Morocco's AI story is practical and fast-moving. Teams deploy chatbots in customer support, education, and tourism, and universities and coding schools train talent that can ship these tools quickly.

That speed raises a simple question: how do you keep minors safe in everyday use? The OpenAI update offers one blueprint, but it is not a full solution. Morocco will still need local choices on privacy, language, and escalation.

### 1) Schools and tutoring use cases

Many Moroccan teens already use AI for study support. Typical topics include language practice, math steps, and baccalauréat-style revision. They also ask personal questions that sit outside school.
The new rules may reduce risky roleplay, but they do not replace adult guidance. Schools and parent associations can adapt OpenAI's literacy approach into local norms, including discussion in Arabic and French, plus context for Darija.

Practical steps for educators in Morocco:

- Set clear rules for AI in assignments and exams.
- Teach verification habits for facts, sources, and citations.
- Create a safe escalation path for self-harm or eating-disorder signals.
- Encourage breaks during long sessions, especially at night.

### 2) Startups building chat experiences

Moroccan startups often build assistants for banks, retailers, logistics, and HR. Even if the target user is an adult, teens will still try the system. Public-facing chat experiences need a minor-safety posture by default.

OpenAI's approach suggests concrete features to copy:

- Age-aware experiences, with stricter defaults for minors.
- Refusal patterns for romantic or sexual roleplay with teens.
- Extra caution on body image, dieting, and self-harm.
- Prompts that route users to real-world support, not only online advice.

Teams should also plan for two failure modes: false negatives, where a teen is treated like an adult, and false positives, where an adult is restricted.

### 3) Government, regulators, and public services

Morocco is digitizing more public services and citizen touchpoints. Chatbots can reduce wait times and help with multilingual navigation. If those tools are used by families, they must handle minors safely.

Morocco also has an established data-protection framework: Law 09-08 and the CNDP set expectations for personal-data handling. Age prediction and behavioral signals can look like profiling, so transparency and minimization matter.

Public procurement can push the market toward safer defaults. Agencies can require:

- Clear disclosure that the user is speaking with an AI system.
- Strong handling of self-harm and crisis cues.
- Testing for jailbreak prompts in Arabic, French, and mixed-script messages.
- Documented processes for moderation, appeals, and user reporting.

## A practical checklist for Moroccan families and builders

Use this as a starting point, not legal advice. It applies whether you use ChatGPT or another chatbot.

**For parents and caregivers**

- Agree on where AI is allowed: homework, language practice, or creative writing.
- Keep devices out of bedrooms during late hours when possible.
- Ask about feelings after long AI sessions, not only about grades.
- Save local support contacts for mental health and urgent risk.

**For teens**

- Treat AI as a tool, not a friend or therapist.
- Don't share real names, addresses, or school details.
- If the chat turns heavy, stop and talk to a trusted adult.

**For product teams**

- Red-team teen roleplay prompts and body-image prompts in local languages.
- Add refusal templates that are calm and not shaming.
- Log safety events carefully and limit sensitive-data retention.
- Provide an easy way to report harmful outputs.

## The open question: will behavior match the spec?

OpenAI's update is both a policy shift and an enforcement bet. Age prediction, classifiers, and human review can reduce risky engagement, and the literacy materials encourage families to set boundaries.

But experts keep returning to one issue: real interactions are messy. Edge-case prompting and long emotional conversations will test the system. For Morocco, the lesson is clear: adopt AI, but build safety and literacy as first-class features.
