OpenAI says 0.07% of ChatGPT users show possible mania or psychosis signs weekly
OpenAI has started examining its own chatbot conversations to spot when users might be in real distress. Its internal tools flag messages that could hint at a crisis, and the latest numbers - small, but not negligible - raise tough questions about what large-language-model companies should do. By pulling together weekly interaction data, the team wanted to see how often the service gets used during moments of serious psychological strain.
The results suggest a noticeable slice of users type language that could be read as warning signs of mental-health trouble. It’s unclear exactly how many people this affects in absolute terms, but the pattern seems real enough to matter, not just for safety measures but also for the bigger conversation about AI and vulnerable users.
Below are the percentages OpenAI reported for each category of concern.
In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show "possible signs of mental health emergencies related to psychosis or mania" and 0.15 percent "have conversations that include explicit indicators of potential suicidal planning or intent." OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot "at the expense of real-world relationships, their well-being, or obligations." It found that about 0.15 percent of active users exhibit behavior that indicates potential "heightened levels" of emotional attachment to ChatGPT weekly.
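To put those percentages in rough absolute terms, here is a minimal back-of-the-envelope sketch in Python. The 800 million weekly-active-user base is an assumption (a figure OpenAI has cited publicly), not something stated in this report, so treat the outputs as order-of-magnitude estimates only.

```python
# Rough conversion of OpenAI's reported percentages into absolute weekly counts.
# WEEKLY_ACTIVE_USERS is an assumed base, not a number from this report.

WEEKLY_ACTIVE_USERS = 800_000_000  # assumed weekly active user base

reported_rates = {
    "possible psychosis or mania signs": 0.0007,      # 0.07%
    "explicit suicidal planning or intent": 0.0015,   # 0.15%
    "heightened emotional attachment": 0.0015,        # 0.15%
}

for label, rate in reported_rates.items():
    print(f"{label}: ~{int(WEEKLY_ACTIVE_USERS * rate):,} users per week")
```

Under that assumption, 0.07 percent works out to roughly 560,000 people a week and 0.15 percent to about 1.2 million, which is why a small percentage is not the same thing as a small number of people.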
OpenAI says about 0.07 percent of active users show weekly signs that could indicate mania or psychosis, and roughly 0.15 percent have conversations with explicit indicators of suicidal planning. The company says it worked with mental-health experts worldwide to adjust the model so it can spot distress and point people to outside help. How it arrived at those numbers, however, is still murky; the report doesn’t explain how “possible signs” were defined or validated.
It also mentions a rise in hospitalizations, divorces, or even deaths after long, intense chats, but offers no data to back that up. It reads more like a warning bell than evidence, and I wonder whether a chatbot can really tell the difference between a momentary upset and a true crisis.
The safety tweaks may be measurable in the model’s behavior, yet it’s hard to say whether they will actually reduce harm. We’ll need clear outcome metrics before we can trust the impact; audits that match flagged chats against real clinical outcomes might finally show how big the issue really is.
Further Reading
- Strengthening ChatGPT's responses in sensitive conversations - OpenAI
- The Emerging Problem of "AI Psychosis" - Psychology Today
- AI psychosis: What mental health professionals are seeing in clinics - STAT News
- Several users reportedly complain to FTC that ChatGPT is causing psychological harm - TechCrunch
- What is AI Psychosis? Symptoms, Risks & Prevention in 2025 - FAS Psych
Common Questions Answered
What percentage of active ChatGPT users did OpenAI estimate show possible signs of mania or psychosis in a given week?
OpenAI estimated that around 0.07 percent of active ChatGPT users exhibit possible signs of mental health emergencies related to mania or psychosis each week. This figure comes from the company's internal monitoring tools that flag distress signals in user conversations.
How does OpenAI define the subset of users who display explicit indicators of suicidal planning or intent?
OpenAI reported that about 0.15 percent of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent. The figure relies on internal detection tools that flag language patterns suggesting self-harm or suicidal ideation.
What did OpenAI find regarding users who are overly emotionally reliant on ChatGPT at the expense of real‑world relationships?
OpenAI found that about 0.15 percent of active users show signs of heightened emotional attachment to ChatGPT in a given week, potentially at the expense of their real-world relationships, well-being, or obligations. The company highlighted this as a concern alongside the mental-health emergency metrics.
What steps has OpenAI taken after identifying these mental‑health signals in ChatGPT interactions?
OpenAI consulted global mental-health experts to adjust the model so it can better flag distress and direct users toward external help resources. However, the methodology for defining and validating the "possible signs" remains opaque in the public report.