
OpenAI research lead on ChatGPT mental‑health work departs amid policy push


The exit of the researcher who helped shape ChatGPT's mental‑health features is drawing attention to a broader internal push at OpenAI. Over the past year, the company has faced mounting scrutiny over how its flagship model interacts with users in crisis. That scrutiny has translated into a concerted effort to refine the chatbot's tone, accuracy and safety when someone is clearly distressed.

Internally, a dedicated model‑policy group has taken the lead, compiling data, consulting experts and drafting guidelines aimed at reducing harmful outcomes. Their work culminated in an October report that maps out progress and lists more than 170 mental‑health professionals consulted so far. As the lead researcher's departure underscores, the stakes are high and the roadmap is still evolving.

The next step? Understanding exactly how those policy moves will shape ChatGPT’s responses to users who need help.

Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and improve the chatbot's responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company's progress and consultations with more than 170 mental health experts. In the report, OpenAI said hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than a million people "have conversations that include explicit indicators of potential suicidal planning or intent." Through an update to GPT-5, OpenAI said in the report it was able to reduce undesirable responses in these conversations by 65 to 80 percent.

Related Topics: #OpenAI #ChatGPT #mental health #model policy #GPT-5 #crisis #suicidal planning #distressed users

Andrea Vallone's exit marks a notable shift for OpenAI's mental‑health efforts. How will the model policy team adapt without its leader? The safety research group, which has guided ChatGPT's handling of distressed users, will continue reporting directly to senior management while OpenAI searches for a replacement.

OpenAI spokesperson Kayla Wood confirmed the departure and said a search is underway. In October, the model policy team released a report outlining progress and noting consultations with more than 170 mental‑health professionals. That work remains central to the company's response strategy, yet the impact of losing its head is uncertain.

The interim reporting structure suggests continuity, but whether the momentum of the October findings will be sustained is unclear. OpenAI has faced pressure to refine how the chatbot supports users in crisis, and the ongoing policy work appears to be a core component of that response. As the year closes, the organization will need to fill the vacancy and maintain the focus that the report emphasized.

Common Questions Answered

Why did OpenAI's research lead on ChatGPT mental‑health work, Andrea Vallone, depart from the company?

Andrea Vallone left OpenAI amid a broader internal push to tighten policy around how ChatGPT handles distressed users. Her exit comes as the company faces ongoing scrutiny over those interactions and leaves the model‑policy team's mental‑health initiatives in need of a new leader.

What role does the model‑policy team play in improving ChatGPT's responses to users in crisis?

The model‑policy team leads efforts to refine ChatGPT’s tone, accuracy, and safety for distressed users, compiling data, consulting experts, and drafting guidelines. Their October report highlighted progress and detailed consultations with over 170 mental‑health professionals.

How many ChatGPT users are estimated to show signs of a manic or psychotic crisis each week, according to OpenAI's report?

OpenAI’s October report estimated that hundreds of thousands of ChatGPT users may exhibit signs of a manic or psychotic crisis every week. This figure underscores the scale of the mental‑health challenge the company is addressing.

What steps has OpenAI taken to ensure ChatGPT’s safety for distressed users after increased scrutiny?

OpenAI has launched a dedicated model‑policy group to evaluate and improve the chatbot’s handling of crisis situations, consulted more than 170 mental‑health experts, and released a detailed progress report. The safety research group continues to report directly to senior management while a replacement for Vallone is sought.

Who confirmed Andrea Vallone’s departure and what did they say about the search for her replacement?

OpenAI spokesperson Kayla Wood confirmed Vallone's departure and said the company is actively searching for a new leader to head the model‑policy team. The statement signals OpenAI's intent to maintain its mental‑health efforts despite the leadership change.