[Image: Former OpenAI researcher sits behind a glass desk, gesturing toward a laptop showing blurred AI-generated adult artwork.]

Former OpenAI staffer details 2021 AI‑generated erotica crisis

It might sound odd, but the story of OpenAI’s early run-in with AI-generated erotic content has barely been told from the inside. A former employee, who took over product safety just weeks after joining the lab, says the problem showed up long before anyone heard about ChatGPT. Back then the team was swamped with user prompts that nudged the model into spitting out explicit material, something the safety rules weren’t really built for.

WIRED reported a “crisis,” yet the article left most of the details fuzzy. Now that staffer has agreed to walk us through what the engineers actually saw, how they spotted the issue, and what they tried to fix in those first weeks. It offers a rare look at the messy reality of trying to police a language model when the line between innocent curiosity and harmful output gets blurry.

Below is the exchange we pulled straight from that conversation, beginning with the question that got the memories rolling.

In that piece, you write that in the spring of 2021, your team discovered a crisis related to erotic content using AI. Can you tell us a little bit about that finding?

So in the spring of 2021, I had recently become responsible for product safety at OpenAI.

As WIRED reported at the time, when we had a new monitoring system come online, we discovered that there was a large undercurrent of traffic that we felt compelled to do something about. One of our prominent customers was essentially a choose-your-own-adventure text game.

Did OpenAI finally fix the 2021 erotic-content mess? It’s hard to say. Back in spring 2021, Steven Adler, who was then the product-safety lead, raised the alarm when the bots started chatting about sex without any real guardrails.

WIRED picked up his story and highlighted the tug-of-war between letting users speak freely and keeping things safe. Adler later penned a New York Times op-ed, “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it he laid out the internal headaches and warned readers not to take OpenAI’s promises at face value.

The article got a lot of eyes, but what happened next is still pretty murky. OpenAI hasn’t published a clear follow-up, so we’re left guessing whether the gaps were patched or just renamed. Adler’s run-in shows just how tricky it is to police AI-generated intimate content; the rules are still being hammered out.

Whether today’s safeguards line up with the standards he pushed for isn’t clear yet; it would take an outside audit to really know.

Common Questions Answered

What role did the former employee have when the 2021 AI‑generated erotica crisis was discovered?

The former employee had recently taken charge of product safety at OpenAI. In that capacity, he was responsible for monitoring emerging risks, which led him to identify the surge of erotic content generated by the model in spring 2021.

How did OpenAI first become aware of the large undercurrent of erotic traffic in spring 2021?

OpenAI’s new monitoring system, brought online in spring 2021, flagged an unexpected volume of user prompts that pushed the model into explicit material. This detection prompted the product‑safety team to investigate and consider safeguards.
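For readers curious what that kind of monitoring can look like mechanically, here is a minimal, self-contained Python sketch of a batch-level prompt monitor. Everything in it, from the marker list to the alert threshold and function names, is an illustrative assumption; nothing public describes OpenAI’s actual pipeline, which almost certainly relied on trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of traffic monitoring like the kind described above:
# flag individual prompts, then track what share of a traffic sample is
# flagged. All names, terms, and thresholds here are illustrative, not
# OpenAI's real system.

EXPLICIT_MARKERS = {"nsfw", "explicit", "erotic"}  # stand-in term list

def is_flagged(prompt: str) -> bool:
    """Naive per-prompt check: does the prompt contain any marker term?"""
    words = set(prompt.lower().split())
    return not words.isdisjoint(EXPLICIT_MARKERS)

def flagged_share(prompts: list[str]) -> float:
    """Fraction of prompts in this sample that trip the filter."""
    if not prompts:
        return 0.0
    return sum(is_flagged(p) for p in prompts) / len(prompts)

if __name__ == "__main__":
    sample = [
        "write a bedtime story about a dragon",
        "continue this explicit scene",
        "summarize this news article",
    ]
    share = flagged_share(sample)
    # An operator might alert once flagged traffic crosses some threshold.
    if share > 0.25:  # illustrative alert threshold
        print(f"ALERT: {share:.0%} of sampled prompts flagged as explicit")
```

The point of the sketch is the shape of such a system, per-prompt flagging feeding an aggregate traffic statistic that can trip an alert, which matches the “large undercurrent of traffic” framing in the interview.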

What was the public’s first exposure to the 2021 erotic‑content issue at OpenAI?

The issue entered public view through a WIRED article that reported on the internal crisis. The piece quoted the product‑safety lead’s observations and highlighted the tension between user freedom and safety controls.

Did OpenAI fully resolve the erotic‑content crisis after it was flagged in 2021?

The article suggests the answer remains unclear; while the crisis was flagged and discussed internally, OpenAI’s subsequent actions are not definitively documented. The former safety lead later published a New York Times op‑ed questioning the adequacy of OpenAI’s claims about handling erotica.