
Former OpenAI staffer details 2021 AI‑generated erotica crisis


Why does this matter? Because the story behind OpenAI’s early brush with AI‑generated erotic content has rarely been told from the inside. A former employee, who took charge of product safety shortly after joining the lab, says the issue surfaced before the public ever heard of ChatGPT.

At the time, the team was wrestling with a flood of user-generated prompts that pushed the model into creating explicit material: something the safety protocols weren't built to handle. WIRED's coverage hinted at a "crisis," but the details remained vague. The former staffer agreed to walk us through what the team actually saw, how they identified the problem, and what steps were taken in those early days.

It’s a rare glimpse into the practical challenges of policing a language model when the line between harmless curiosity and harmful output blurs. The following exchange pulls directly from that conversation, starting with the question that sparked the recollection.

In that piece, you write that in the spring of 2021, your team discovered a crisis related to AI-generated erotic content. Can you tell us a little bit about that finding?

So in the spring of 2021, I had recently become responsible for product safety at OpenAI.

As WIRED reported at the time, when a new monitoring system came online we discovered a large undercurrent of traffic that we felt compelled to do something about. One of our prominent customers was essentially a choose-your-own-adventure text game.


Did OpenAI fully address the 2021 erotic-content crisis? The answer is unclear. In spring 2021, Steven Adler, then OpenAI's product-safety lead, flagged the problem when the company's models began producing erotic conversations without adequate safeguards.

His account, later echoed by WIRED, described a tension between user freedom and protective measures. Adler later authored a New York Times opinion piece titled "I Led Product Safety at OpenAI. Don't Trust Its Claims About 'Erotica.'" The article laid out the internal challenges and warned readers against taking OpenAI's assurances at face value.

While the piece brought public attention to the issue, OpenAI’s subsequent actions remain opaque. No detailed follow‑up has been published, leaving observers to wonder whether the identified gaps have been closed or merely rebranded. Adler’s experience underscores the difficulty of policing AI‑generated intimate content, a domain where safety protocols are still evolving.

Whether OpenAI’s current safeguards meet the standards Adler once advocated for is uncertain, and further independent verification would be needed to assess progress.


Common Questions Answered

What role did the former employee have when the 2021 AI‑generated erotica crisis was discovered?

The former employee had recently taken charge of product safety at OpenAI. In that capacity, he was responsible for monitoring emerging risks, which led him to identify the surge of erotic content generated by the company's models in spring 2021.

How did OpenAI first become aware of the large undercurrent of erotic traffic in spring 2021?

OpenAI's new monitoring system, rolled out in early 2021, flagged an unexpected volume of user prompts that pushed the model into generating explicit material. This detection prompted the product-safety team to investigate and to consider safeguards.

What was the public’s first exposure to the 2021 erotic‑content issue at OpenAI?

The issue entered public view through a WIRED article that reported on the internal crisis. The piece quoted the product‑safety lead’s observations and highlighted the tension between user freedom and safety controls.

Did OpenAI fully resolve the erotic‑content crisis after it was flagged in 2021?

The article suggests the answer remains unclear; while the crisis was flagged and discussed internally, OpenAI's subsequent actions are not definitively documented. The former safety lead later published a New York Times op-ed questioning the adequacy of OpenAI's claims about handling erotica.