
OpenAI has updated how ChatGPT should behave with users under 18. It added teen-specific rules to its public Model Spec. It also published AI literacy materials for teens and parents.
This matters in Morocco because teens are heavy internet users and fast AI adopters. Many use chatbots for homework help, language practice, and personal questions. Moroccan startups and agencies also ship chat experiences, often for broad audiences that include minors.
TechCrunch reports that OpenAI tightened guidance for interactions it believes involve a teenager. Existing prohibitions remain, including any sexual content involving minors. The update adds stricter U18 expectations across roleplay and sensitive topics.
The new guidance also targets common jailbreak phrasing. The limits should still apply when prompts are framed as fictional, hypothetical, historical, or educational. Those framings are often used to push models into disallowed outputs.
For teen users, the spec tells the model to avoid immersive romantic roleplay and first-person intimacy. It also restricts first-person sexual or violent roleplay, even when non-graphic.
In practice, this means firmer refusals of prompts like "roleplay as my girlfriend or boyfriend." It also means fewer emotionally sticky loops that can feel like a relationship, a pattern that has been a central child-safety concern.
The update calls for extra caution on body image and disordered eating themes. It also limits content that promotes extreme beauty ideals and bars advice that enables unhealthy dieting.
This is relevant in Morocco, where teens are exposed to global social feeds in French and Arabic. Chatbots can amplify harmful comparisons if they mirror the user too closely. Strong defaults can reduce that risk.
OpenAI's guidance tells the model to prioritize safety communication over autonomy when harm is involved. It should encourage real-world support when risk appears. It also should not help teens conceal unsafe behavior from caregivers.
That last point matters for everyday scenarios. A teen might ask how to hide self-harm, risky weight loss, or dangerous dares. The updated spec pushes the model to refuse and redirect.
TechCrunch says the teen rules are designed to work with an age-prediction system. The goal is to detect when an account likely belongs to a minor. When flagged, ChatGPT automatically applies additional safeguards.
OpenAI's help documentation says age prediction uses signals about how an account is used. These can include topics and usage patterns, such as time of day. The system then infers whether someone may be under 18.
If the system thinks an account is under 18, ChatGPT applies stronger protections. These more aggressively limit sensitive content and certain interaction types. That includes sexual, romantic, or violent roleplay, plus harmful body-image content.
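The flow described above can be sketched as a simple policy table. Everything here is an assumption for illustration: the `SafeguardProfile` fields, the two profiles, and the override rule are hypothetical, chosen to match the article's description that a verified adult escapes the teen defaults while a predicted minor gets stricter limits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeguardProfile:
    """Illustrative content limits; the categories and defaults are assumptions."""
    allow_romantic_roleplay: bool
    allow_violent_roleplay: bool
    allow_body_image_advice: bool

ADULT_PROFILE = SafeguardProfile(True, True, True)
TEEN_PROFILE = SafeguardProfile(False, False, False)

def select_profile(predicted_minor: bool, age_verified_adult: bool) -> SafeguardProfile:
    # A verified adult overrides the age-prediction flag; otherwise a
    # predicted-minor account gets the stricter teen defaults.
    if age_verified_adult:
        return ADULT_PROFILE
    return TEEN_PROFILE if predicted_minor else ADULT_PROFILE
```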
OpenAI says adults who are placed into the teen experience by mistake can verify they're 18+. Verification uses a third party called Persona. The user can verify using a selfie or a government ID.
OpenAI states it does not receive the selfie or ID. It also says Persona deletes verification data within hours. Even with that reassurance, the workflow raises privacy questions for many users.
TechCrunch summarizes OpenAI's approach with four commitments. These are behavioral goals for the assistant, not just filter rules. They shape tone, refusal style, and where the model directs the user.
For Moroccan product builders, these principles are useful design requirements. They translate into UI copy, escalation paths, and moderation tests. They also shape how a chatbot should handle Darija slang or coded distress.
TechCrunch reports that OpenAI uses automated classifiers in real time across modalities. Higher-severity flags can trigger review by a trained team. In situations showing acute distress, a parent may be notified.
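The escalation ladder described here can be sketched as severity-based routing. The severity levels, thresholds, and action names below are assumptions for illustration, not OpenAI's actual pipeline; the point is the shape: most flags are logged, higher-severity ones reach trained humans, and only acute cases can reach a parent.

```python
from enum import IntEnum

class Severity(IntEnum):
    NONE = 0
    SENSITIVE = 1       # log only
    HIGH = 2            # queue for trained human review
    ACUTE_DISTRESS = 3  # may also trigger a limited parent safety alert

def route_flag(severity: Severity) -> str:
    """Map a classifier flag to an action tier (thresholds are illustrative)."""
    if severity >= Severity.ACUTE_DISTRESS:
        return "human_review_and_possible_parent_alert"
    if severity >= Severity.HIGH:
        return "human_review"
    if severity >= Severity.SENSITIVE:
        return "log_only"
    return "no_action"
```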
OpenAI's parental controls FAQ describes a narrow version of that idea. Parents generally cannot see teen chats. In rare cases involving serious self-harm risk, a parent may receive a safety alert with limited information and resources.
For Morocco, this highlights a tricky expectation gap. Families may assume a tool is either fully private or fully monitored. Safety alerting sits in the middle and needs clear communication.
The update lands amid rising political pressure in the United States. State attorneys general have urged large platforms to add child safeguards. Lawmakers are weighing options from baseline standards to proposals that would ban minors from AI chatbots.
TechCrunch also ties the changes to teen harm allegations involving prolonged chatbot conversations. It notes Gen Z is among ChatGPT's most active cohorts. More distribution partnerships could also bring more young users into the ecosystem.
Moroccan stakeholders should read this as a signal. Global policy pressure often becomes product defaults. Those defaults then shape what Moroccan users and developers can do.
A privacy and AI attorney interviewed by TechCrunch calls the explicit refusals a good sign, highlighting romantic or sexualized roleplay as high-risk for adolescents because persistent engagement loops can be addictive.
Other experts stress that published rules do not guarantee lived behavior. TechCrunch points to sycophancy concerns, where models over-agree or mirror users. It connects this to broader worries about inappropriate mirroring and reports of 'AI psychosis' behaviors in some interactions.
A child-safety nonprofit leader also warns about internal tension. Engagement-friendly behavior can conflict with safety-first provisions. They argue systems must be tested so the net effect stays protective.
Morocco's AI story is practical and fast-moving. Teams deploy chatbots in customer support, education, and tourism. Universities and coding schools train talent that can ship these tools quickly.
That speed raises a simple question: how do you keep minors safe in everyday use? The OpenAI update offers one blueprint, but it is not a full solution. Morocco will still need local choices on privacy, language, and escalation.
Many Moroccan teens already use AI for study support. Typical topics include language practice, math steps, and baccalauréat-style revision. They also ask personal questions that sit outside school.
The new rules may reduce risky roleplay, but they do not replace adult guidance. Schools and parent associations can adapt OpenAI's literacy approach into local norms. That includes discussion in Arabic and French, plus context for Darija.
Practical steps for educators in Morocco:
Moroccan startups often build assistants for banks, retailers, logistics, and HR. Even if the target user is an adult, teens will still try the system. Public-facing chat experiences need a minor-safety posture by default.
OpenAI's approach suggests concrete features to copy:
Teams should also plan for two failure modes. One is false negatives, where a teen is treated like an adult. The other is false positives, where an adult is restricted.
Morocco is digitizing more public services and citizen touchpoints. Chatbots can reduce wait times and help with multilingual navigation. If those tools are used by families, they must handle minors safely.
Morocco also has an established data-protection framework. Law 09-08 and the CNDP set expectations for personal data handling. Age prediction and behavioral signals can look like profiling, so transparency and minimization matter.
Public procurement can push the market toward safer defaults. Agencies can require:
Use this as a starting point, not legal advice. It works whether you use ChatGPT or another chatbot.
*For parents and caregivers*

*For teens*

*For product teams*
OpenAI's update is both a policy shift and an enforcement bet. Age prediction, classifiers, and human review can reduce risky engagement. The literacy materials also encourage families to set boundaries.
But experts keep returning to one issue: real interactions are messy. Edge-case prompting and long emotional conversations will test the system. For Morocco, the lesson is clear: adopt AI, but build safety and literacy as first-class features.
Whether you're looking to implement AI solutions, need consultation, or want to explore how artificial intelligence can transform your business, I'm here to help.
Let's discuss your AI project and explore the possibilities together.