OpenAI, Anthropic to spot underage users; ChatGPT controls, age checks

OpenAI and Anthropic have announced a joint effort to flag users who appear to be minors before they interact with the companies' large language models. The move follows months of pressure on AI firms to tighten safeguards for young audiences, especially after several high-profile incidents in which chatbots offered advice on self-harm. By embedding predictive signals into the front end, the companies hope to intervene earlier, either by prompting a verification step or by limiting certain topics.

Critics have warned that such systems could misclassify users, while regulators argue they are a minimum requirement for responsible deployment. The rollout is slated to begin this quarter, with both firms pledging to refine their algorithms as data accumulates. The shift sits squarely within a broader push for online regulation that also includes mandatory age verification for a number of services.

OpenAI later rolled out parental controls and said ChatGPT will no longer discuss suicide with teens. The company says the update to ChatGPT's Model Spec should result in "stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher-risk territory," and that ChatGPT will urge teens to contact emergency services or crisis resources if there are signs of "imminent risk." Alongside these changes, OpenAI says it's in the "early stages" of launching an age prediction model that will attempt to estimate a user's age; if the model detects that someone may be under 18, teen safeguards will be applied automatically.
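
Neither company has published how the prediction feeds into the product, but the flow described above (estimate age, then gate the session) can be sketched in a few lines. Everything in this sketch, from the `AgeEstimate` class to the 0.8 confidence threshold, is an assumption for illustration, not OpenAI's or Anthropic's actual design:

```python
from dataclasses import dataclass

# Hypothetical sketch only: the names and threshold below are
# illustrative assumptions, not either company's real implementation.

@dataclass
class AgeEstimate:
    predicted_age: int   # point estimate from an age prediction model
    confidence: float    # model confidence in [0.0, 1.0]

def route_session(estimate: AgeEstimate) -> str:
    """Gate a new session based on a predicted age.

    Mirrors the flow described above: a confident under-18 prediction
    applies teen safeguards automatically, while an uncertain one falls
    back to an explicit verification step.
    """
    if estimate.predicted_age < 18:
        if estimate.confidence >= 0.8:  # assumed confidence threshold
            return "apply_teen_safeguards"
        return "prompt_age_verification"
    return "standard_session"
```

The "prompt_age_verification" branch is where the mandatory age checks mentioned in the regulatory push would presumably slot in.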

Related Topics: #OpenAI #Anthropic #ChatGPT #large-language-models #age-verification #teen-safeguards #parental-controls #predictive-signals #self-harm

Will the new age-prediction tools actually keep younger users safer? OpenAI says its updated Model Spec will produce "stronger" safeguards, and Anthropic is testing a way to flag possibly underage accounts. The changes come alongside OpenAI's recently added parental controls and a rule that ChatGPT will no longer discuss suicide with teens.

Yet the mechanics of predicting age remain opaque; it is unclear whether the algorithms can reliably distinguish a 13-year-old from an adult without invasive data collection. Moreover, the broader regulatory push for mandatory age verification across services raises questions about privacy and enforcement. If the predictions prove inaccurate, users could face unnecessary restrictions, while genuinely underage users might still slip through.

The effort marks a clear shift toward tighter oversight, but its practical impact is still uncertain. As the companies roll out these features, observers will be watching for any measurable change in teen‑focused interactions and whether the promised safety gains materialize.

Common Questions Answered

How are OpenAI and Anthropic planning to flag underage users before they interact with large language models?

Both companies will embed predictive signals into the front‑end of their services to identify users who appear to be minors. When a potential underage user is detected, the system can prompt a verification step or restrict access to certain high‑risk topics.

What specific changes has OpenAI made to ChatGPT's Model Spec to improve safety for teens?

OpenAI updated the Model Spec to include stronger guardrails that prevent the chatbot from discussing suicide with teenage users. The update also encourages users to seek trusted offline support when conversations enter higher‑risk territory.
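
The spec text isn't public beyond the quoted excerpt, so the following is only a conceptual sketch of how such a guardrail could sit in front of the model's reply. The function, topic labels, and message strings are all hypothetical:

```python
# Hypothetical teen-mode guardrail; the names, topic labels, and
# messages are illustrative, not OpenAI's actual implementation.

HIGH_RISK_TOPICS = {"suicide", "self_harm"}

CRISIS_REDIRECT = (
    "I can't help with that, but you don't have to handle it alone. "
    "Please contact your local emergency services or a crisis line now."
)

OFFLINE_SUPPORT = (
    "I'm not able to discuss this, but talking with a trusted adult "
    "or counselor offline can really help."
)

def guard_teen_reply(topic: str, imminent_risk: bool, draft_reply: str) -> str:
    """Check a drafted reply against teen safeguards before sending it."""
    if topic in HIGH_RISK_TOPICS:
        # Signs of imminent risk escalate to emergency/crisis resources,
        # per the quoted Model Spec language.
        return CRISIS_REDIRECT if imminent_risk else OFFLINE_SUPPORT
    return draft_reply  # topic is allowed; send the reply unchanged
```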

What role do parental controls play in OpenAI's recent safety rollout?

Parental controls allow guardians to set limits on the types of content their children can access through ChatGPT. These controls work alongside the new age‑prediction tools to provide an additional layer of protection for younger audiences.
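
The article doesn't specify which limits guardians can actually set, so the settings object below is purely illustrative of how such controls could feed into the same gating logic; every field name is an assumption:

```python
from dataclasses import dataclass

# Illustrative assumptions only; not OpenAI's real parental-control options.

@dataclass
class ParentalControls:
    restrict_sensitive_topics: bool = True  # block higher-risk subject matter
    notify_on_risk_signals: bool = True     # surface risk flags to the guardian

def is_topic_allowed(controls: ParentalControls, topic: str) -> bool:
    """Check a conversation topic against guardian-set limits."""
    blocked = {"suicide", "self_harm"} if controls.restrict_sensitive_topics else set()
    return topic not in blocked
```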

Why have critics expressed concern about the opacity of the age‑prediction algorithms?

Critics argue that the mechanics of predicting a user's age remain unclear, making it difficult to assess whether the algorithms can reliably differentiate a 13‑year‑old from an adult without invasive data collection. This lack of transparency raises questions about both accuracy and privacy.