OpenAI rolls out age prediction for ChatGPT users

OpenAI is moving away from the standard sign-up flow in which a user simply enters a date of birth. Instead, the company is launching an age-prediction system in ChatGPT: the model analyzes account behavior and estimates whether the user might be under 18.
If the system rates that risk as high, ChatGPT automatically switches the profile into a stricter “teen” experience – even if the account claims the user is an adult.
The signals used to make that call aren’t based on a single message but on the overall usage picture: how long the account has existed, what hours it’s active, which request patterns repeat, and how topics shift over time. When the system is uncertain, OpenAI applies extra limits on what can be discussed and what kinds of responses are allowed.
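OpenAI has not published how its scoring actually works, so the following is a purely hypothetical sketch of the idea described above: several behavioral signals (account age, activity hours, topic patterns) are folded into a single risk score, with a high band that triggers the teen experience and a middle "uncertain" band that gets extra limits. All field names, weights, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Toy stand-ins for the kinds of signals the article describes."""
    account_age_days: int      # how long the account has existed
    school_hours_ratio: float  # fraction of activity during school hours (0..1)
    teen_topic_ratio: float    # fraction of requests on teen-coded topics (0..1)

def minor_risk_score(s: AccountSignals) -> float:
    """Combine signals into a risk score in [0, 1]; higher = more likely under 18."""
    score = 0.0
    if s.account_age_days < 30:          # very new accounts add some risk
        score += 0.3
    score += 0.4 * s.school_hours_ratio  # activity clustered around school hours
    score += 0.3 * s.teen_topic_ratio    # homework-style or teen-coded requests
    return min(score, 1.0)

def choose_mode(score: float) -> str:
    """Map the score to an experience, with a stricter band for uncertainty."""
    if score >= 0.7:
        return "teen"        # high risk: switch to the teen experience
    if score >= 0.4:
        return "restricted"  # uncertain: apply extra limits
    return "standard"
```

The design point the article implies is the middle band: rather than a single binary cutoff, an uncertain score still triggers tighter limits, which is also what makes false positives for adults possible.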
In the under-18 experience, sensitive topics are handled more cautiously. That can include tighter limits on images depicting violence, fewer responses that amplify dangerous viral challenges, restrictions on sexual or romantic role-play, and guardrails around content that could push extreme appearance changes, unhealthy dieting, or body shaming.
If an adult is mistakenly categorized as “under 18,” they can restore full access by verifying their age through a third-party provider, Persona. Typically that means a “live” selfie, and in some countries it may also require a photo of a passport or driver’s license. After verification, the account is moved out of the restricted mode.
Still, the approach has an obvious weakness: people’s behavior often doesn’t match their actual age. Digital rights experts warn that errors and bias are inevitable. For example, teens are often early adopters of new services, and “school” versus “adult” use cases can look identical (a teacher and a student might ask the exact same question about math or history).
Across the industry, safeguards are failing more often – and the consequences can be severe. A recent Grok incident, in which gaps in the guardrails let the bot generate and even post sexualized images of minors on X before the company rushed to delete the posts and promise fixes, is a reminder of how costly such mistakes can be.
Against that backdrop, age prediction looks like an attempt to reduce risk proactively – but whether it works will depend on transparent rules, model quality, and clear, user-friendly ways to correct wrong decisions.







