
OpenAI forms wellness council without suicide prevention expert


OpenAI just announced a new safety advisory group, and the lineup is already sparking debate. The company says the group, called a "wellness" council, will steer its safety strategy, especially for younger users. Among the members is David Bickham, research director at Boston Children's Hospital, which sounds promising.

Yet the roster doesn't include anyone whose sole focus is suicide prevention, and that gap feels odd given how delicate AI-driven conversations can be. Teens, it seems, turn to ChatGPT for advice more often than adults do, a trend that likely brings different risks. I'm not sure how much influence a single researcher can have, but the omission of a suicide-prevention specialist is hard to ignore.

One priority was finding "several council members with backgrounds in understanding how to build technology that supports healthy youth development," OpenAI said, "because teens use ChatGPT differently than adults." That effort includes David Bickham, a research director at Boston Children's Hospital who has closely monitored how social media affects kids' mental health, and Mathilde Cerioli, chief science officer at the nonprofit Everyone.AI. Cerioli studies the opportunities and risks of children using AI, with a particular focus on "how AI intersects with child cognitive and emotional development." These experts could help OpenAI understand how safeguards fail kids during extended conversations, and whether kids are especially vulnerable to so-called "AI psychosis," a phenomenon in which longer chats appear to trigger mental health issues. In January, Bickham noted in an American Psychological Association article on AI in education that "little kids learn from characters" already, as when they watch Sesame Street, and form "parasocial relationships" with those characters.


OpenAI's new wellness council feels like a step toward more accountability, but I can't help noticing that there isn't a suicide-prevention specialist on board. Assembling people who know youth development does answer part of the lawsuit's claims, yet it only scratches the surface of what's really at stake. Making AI safe for people in crisis goes beyond filtering age-appropriate content; it means anticipating how the system might be used in someone's most fragile moments, and that kind of expertise seems indispensable.

The council's launch does suggest OpenAI is taking safety more seriously, but its real test will be whether it plugs that obvious gap. As the group gets started, expect the tech world and ordinary users alike to watch closely for signs that OpenAI is moving from quick fixes to a broader, forward-looking safety plan without major blind spots.

Common Questions Answered

Why is the absence of a suicide prevention expert on OpenAI's wellness council significant?

The omission is significant because the wellness council is tasked with guiding safety for sensitive AI interactions, particularly for younger users. Given that teens use ChatGPT differently than adults, the lack of dedicated expertise in suicide prevention raises questions about whether the council can fully address the mental health risks associated with AI.

What specific expertise does David Bickham bring to the wellness council?

David Bickham brings expertise as a research director at Boston Children's Hospital, where he has closely monitored how social media impacts kids' mental health. His background in understanding technology that supports healthy youth development is directly relevant to the council's priority of addressing how teens use ChatGPT.

How does the composition of the wellness council respond to the lawsuit's allegations mentioned in the article?

Assembling a team with expertise in youth development, including members like David Bickham, is a direct response to the lawsuit's allegations about AI safety for younger users. However, this addresses only one facet of the complex issue, as the absence of a suicide prevention expert indicates the response may be incomplete.

What is the stated priority for the wellness council regarding teen users of ChatGPT?

OpenAI stated that a priority was finding council members with backgrounds in building technology that supports healthy youth development, because teens use ChatGPT differently than adults. This effort includes experts who study the impact of technology on young people's mental health and development.