
OpenAI research lead on ChatGPT mental‑health work departs amid policy push


The departure of the researcher behind most of ChatGPT's mental-health tweaks has drawn attention to a larger effort inside OpenAI. Over the past year the company has faced growing pressure over how its flagship model behaves when users are in crisis. That pressure appears to have turned into a focused push to make the chatbot sound kinder, get its facts straight, and stay safe when someone is clearly distressed.

Inside the company, a model policy team has taken charge: it has been gathering data, consulting outside experts, and drafting rules meant to cut down on harmful replies. That work surfaced in an October report charting what has been done and noting that more than 170 mental-health professionals have been consulted so far. The leader's exit underscores how high the stakes are while the plan is still taking shape.

What comes next? Figuring out exactly how those policy tweaks will change ChatGPT’s replies to people who need help.


Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and improve the chatbot's responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company's progress and consultations with more than 170 mental health experts. In the report, OpenAI said hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than a million people "have conversations that include explicit indicators of potential suicidal planning or intent." Through an update to GPT-5, OpenAI said in the report it was able to reduce undesirable responses in these conversations by 65 to 80 percent.


Andrea Vallone's departure marks a clear shift for OpenAI's mental-health work. The model policy team now has to figure out how to move forward without its lead. The safety research group, which has been shaping ChatGPT's responses to users in distress, will keep reporting directly to senior management while a replacement is sought.

Kayla Wood, the company's spokesperson, confirmed the exit and said the search for a successor is already under way. Back in October the team published a report highlighting its progress and noting that it had consulted more than 170 mental-health professionals. That effort still sits at the heart of OpenAI's response plan, but it is hard to say how much the loss of its head will change things.

The temporary reporting line suggests some stability, yet whether the October momentum will hold remains uncertain. OpenAI has been under pressure to improve how the chatbot handles crisis situations, and the policy work appears to be a key part of that push. As the year ends, the company will need to fill the role and maintain the focus the report laid out.

Common Questions Answered

Why did OpenAI's research lead on ChatGPT mental‑health work, Andrea Vallone, depart from the company?

Andrea Vallone left OpenAI amid a broader internal push to tighten policy around how ChatGPT handles distressed users. Her exit comes as the company faces continued scrutiny and leaves the model-policy team in need of a new leader to guide its mental-health initiatives.

What role does the model‑policy team play in improving ChatGPT's responses to users in crisis?

The model‑policy team leads efforts to refine ChatGPT’s tone, accuracy, and safety for distressed users, compiling data, consulting experts, and drafting guidelines. Their October report highlighted progress and detailed consultations with over 170 mental‑health professionals.

How many ChatGPT users are estimated to show signs of a manic or psychotic crisis each week, according to OpenAI's report?

OpenAI’s October report estimated that hundreds of thousands of ChatGPT users may exhibit signs of a manic or psychotic crisis every week. This figure underscores the scale of the mental‑health challenge the company is addressing.

What steps has OpenAI taken to ensure ChatGPT’s safety for distressed users after increased scrutiny?

OpenAI has launched a dedicated model‑policy group to evaluate and improve the chatbot’s handling of crisis situations, consulted more than 170 mental‑health experts, and released a detailed progress report. The safety research group continues to report directly to senior management while a replacement for Vallone is sought.

Who confirmed Andrea Vallone’s departure and what did they say about the search for her replacement?

OpenAI spokesperson Kayla Wood confirmed Vallone's departure and said the company is actively searching for a new leader to head the model-policy team. The statement signals that OpenAI intends to keep its mental-health efforts on track despite the leadership change.