OpenAI has officially updated its Model Spec, the comprehensive set of guidelines that governs how its artificial intelligence systems behave, with a particular focus on protecting users under the age of 18. The latest update introduces stricter boundaries for interactions with minors, reflecting increasing scrutiny from policymakers, child safety advocates, and digital ethics experts who have long raised concerns about the risks AI platforms can pose to younger audiences. These changes are designed not only to prevent exposure to harmful or inappropriate content but also to encourage positive, age-appropriate guidance that promotes healthy digital habits. The update signals the company’s commitment to prioritizing safety while preserving the utility of AI for educational and general informational purposes.
A core aspect of the new framework is the explicit prohibition of immersive romantic roleplay or first-person intimacy involving teenagers. This means the AI can no longer act as a romantic partner, girlfriend, or boyfriend for users under 18, nor can it describe physical closeness or intimacy, even when the interaction is framed as part of a fictional story. This restriction addresses longstanding concerns about the psychological and emotional safety of minors, preventing situations where the AI could inadvertently contribute to unsafe or confusing experiences.

Beyond romance, OpenAI has also reinforced rules around body image guidance. Teens asking for advice on how to achieve a “manly” look or a “comic book” physique will now be redirected away from risky shortcuts, including steroid use, extreme bulking techniques, or overexertion in workouts. Instead, the AI encourages healthier, real-world approaches such as balanced nutrition, consistent sleep, regular exercise under supervision, and consulting trained professionals such as doctors, dietitians, or certified fitness coaches. This approach reflects a broader emphasis on promoting the physical and mental well-being of teen users.
OpenAI’s teen safety framework is built on four key principles intended to prioritize well-being without cutting off access to useful information. First, safety takes precedence over intellectual freedom: the AI will refuse requests that could compromise a minor’s safety, even when the user asks it to bypass the rules. Second, the AI actively encourages minors to seek guidance from trusted adults, friends, or professionals whenever a question involves personal or health-related topics. Third, an age-appropriate tone ensures the AI communicates respectfully, clearly, and supportively, avoiding language that is overly adult or condescending. Finally, transparency remains a cornerstone, with the AI clearly indicating what it cannot do and reminding users that it is not a human. Together, these principles aim to create a safer, more responsible digital environment for teens while maintaining the AI’s ability to provide educational or supportive information.
To enforce these guidelines, OpenAI is developing an age-prediction system that identifies accounts likely belonging to minors based on conversation cues. In addition, the platform now uses real-time classifiers to flag signs of acute distress, which are reviewed by a dedicated human moderation team. In cases where serious risks are detected, parents may be notified so they can provide additional support. These measures also close earlier loopholes that let teens circumvent safety rules by framing requests as hypothetical or educational scenarios, ensuring that protective protocols apply consistently across contexts. With these updates, OpenAI aims to make AI interactions safer for younger users, combining support, education, and ethical safeguards while continuing to provide informative and engaging experiences appropriate for a teenage audience.
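OpenAI has not published technical details of these systems, but the flow described above, an age-prediction score, a real-time distress classifier, escalation to human moderators, and parental notification in serious cases, can be illustrated with a short, hypothetical sketch. Everything below (the field names, thresholds, and routing function) is an assumption made for clarity, not OpenAI’s actual implementation.

```python
# Illustrative sketch only: OpenAI has not disclosed how its age-prediction or
# distress classifiers work. All scores, thresholds, and names here are
# hypothetical, shown to clarify the kind of routing the article describes.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()              # normal response, adult policies apply
    APPLY_TEEN_POLICY = auto()  # stricter content rules for likely minors
    HUMAN_REVIEW = auto()       # escalate to the human moderation team
    NOTIFY_GUARDIAN = auto()    # serious risk detected: parents may be contacted


@dataclass
class Signals:
    # Hypothetical model outputs in [0, 1]; a real system would derive these
    # from conversation cues and dedicated classifiers.
    minor_likelihood: float     # age-prediction score
    distress_score: float       # real-time acute-distress score


def route(signals: Signals) -> list[Action]:
    """Decide how to handle a conversation turn based on safety signals."""
    actions: list[Action] = []
    likely_minor = signals.minor_likelihood >= 0.5  # threshold is illustrative

    # Teen policies (no romantic roleplay, redirecting risky body-image advice)
    # apply whenever the account is judged likely to belong to a minor,
    # regardless of fictional or "educational" framing.
    actions.append(Action.APPLY_TEEN_POLICY if likely_minor else Action.ALLOW)

    # Acute distress is flagged for human moderators; the most serious cases
    # involving likely minors may additionally trigger a parental notification.
    if signals.distress_score >= 0.8:
        actions.append(Action.HUMAN_REVIEW)
        if likely_minor and signals.distress_score >= 0.95:
            actions.append(Action.NOTIFY_GUARDIAN)
    return actions


if __name__ == "__main__":
    print(route(Signals(minor_likelihood=0.9, distress_score=0.97)))
    # -> [APPLY_TEEN_POLICY, HUMAN_REVIEW, NOTIFY_GUARDIAN]
```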