AI Psychosis Highlights Mental Health Pros' Lack of ChatGPT Access

A handful of case notes this month mention “AI psychosis”: patients who start treating chatbots like ChatGPT as if they have thoughts or motives, even weaving them into delusional stories. Clinicians are still catching up; many therapists haven’t used these tools themselves, so the language and quirks of generative AI feel foreign. Without that hands-on experience, they lack a concrete way to frame what a client describes when an interaction with an AI feels odd or unsettling. The result is a professional blind spot: no clear steps for assessment, no shared vocabulary for response, and a growing worry that we might misread what’s really going on in the room.

I’ve chatted with a few counselors who admit they’ve barely scratched the surface of ChatGPT. When a patient brings up a strange AI conversation, they’re left guessing, because it’s not something they’ve seen in training or practice. That uncertainty feels risky, especially when the line between imagination and technology gets blurry.

The scary thing, I think, is that mental health professionals are flying blind. I've talked to a number of them who don't use ChatGPT much themselves, so they don't know how to handle a patient who is talking about these things, because it's unfamiliar and this is all so new. But if we had open research that was robust and peer-reviewed and could say, "Okay, we know what this looks like and we can create protocols to ensure that people remain safe," that would be a really good step toward figuring this out.

It is continually surprising to me how even people with a lot of literacy about how these technologies work slip into anthropomorphizing chatbots or assigning them more intelligence than they actually have. For the average person who isn't deep in the science of large language models, it's really easy to be completely wowed by what they can do and to start to lose a grip on what you're actually interacting with. We are all socialized now to take a lot of meaning from text, right?

For a lot of us, the primary way we communicate with our loved ones, especially if we don't live together, is texting. So you have a similar interface with a chatbot. It doesn't matter that you don't necessarily hear the chatbot's voice (although you can now communicate with ChatGPT by voice), because we're already trained to take a lot of meaning from text and to believe that there's a person on the other end of it.

And there's a lot of evidence that shows we're not socializing as much as we once did.

The phrase “AI psychosis” feels like hype, but a handful of people have actually filed complaints with the FTC saying ChatGPT sparked unsettling mental states. That’s enough to get regulators paying attention. Most therapists I’ve spoken to admit they hardly ever use the chatbot themselves, so they don’t really know its odd quirks.

Without that hands-on familiarity, many clinicians say they’re left guessing when patients mention anxiety tied to AI, a point that came up in several recent interviews. As a result, the advice some patients receive can seem out of sync with the technology that’s worrying them. The reports do raise a question about how large language models might affect vulnerable users, yet the data supporting a separate “AI psychosis” label remains thin.

It’s still unclear whether current mental health frameworks can stretch to cover this without a broader grasp of generative AI. The discussion even drifted to unrelated topics, from new SEO tricks to frogs being used as protest symbols, showing just how wide AI’s cultural reach has become. For now, regulators and clinicians are wading into unknown waters, and we really don’t know what will come of it.

Common Questions Answered

What does the term “AI psychosis” refer to in the article?

The article defines “AI psychosis” as a phenomenon where patients attribute human‑like consciousness or develop delusional narratives about generative AI systems such as ChatGPT. It highlights emerging reports of patients experiencing distressing mental states linked to interactions with these language models.

Why are many mental‑health professionals described as “flying blind” when dealing with ChatGPT‑related concerns?

According to the article, many clinicians rarely use ChatGPT themselves, so they lack direct exposure to its conversational quirks. This unfamiliarity leaves them unprepared to recognize or discuss AI‑related delusions that patients may present in therapy.

What involvement does the FTC have regarding complaints about ChatGPT triggering distressing mental states?

The article notes that the Federal Trade Commission has received formal complaints from individuals who claim ChatGPT caused distressing mental states, which has been enough to draw regulatory attention. This indicates growing governmental interest in the potential mental‑health impacts of AI tools.

What solutions does the article propose to help clinicians safely address AI‑related patient narratives?

The piece suggests that open, robust, peer‑reviewed research could identify the hallmarks of AI‑induced psychosis and enable the creation of clinical protocols. Such guidelines would give therapists concrete tools to ensure patient safety when AI topics arise in therapy.