
OpenAI researcher quits, citing distrust over ad‑driven engagement metrics

Why does a senior researcher walk out of the company that built the world’s most visible chatbot? The answer, according to the departing researcher, lies in a clash between safety work and the pull of ad-driven engagement metrics. After two years shaping model architecture and drafting internal safety guidelines, she grew uneasy as internal memos warned that product teams were nudging the system toward higher user engagement, making responses more flattering even when that meant sidelining caution.

The tension sharpened when the same leadership that had championed openness began treating engagement metrics as a core success signal, a shift that echoed concerns raised in earlier internal discussions. Adding weight to the argument, the company’s chief executive once labeled the researcher’s worst-case scenario a “dystopia,” a remark that now reads like a prelude to the resignation. The departure underscores a broader question: can a lab that markets its AI as safe afford to let advertising incentives steer its next iteration?

OpenAI is already optimizing for metrics like user engagement and making the chatbot more flattering despite internal warnings. The departing researcher, Zoë Hitzig, spent two years at OpenAI working on AI models and safety guidelines. OpenAI CEO Sam Altman has previously described the scenario Hitzig warns of as a dystopia, so he’s clearly aware of the risk.

“When the company launched its advertising test, OpenAI promised that ChatGPT ads would always be clearly separated from the chatbot’s content. I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”

Zoë Hitzig

OpenAI is expected to go public later this year, which would ramp up pressure for fast revenue growth, especially given already inflated AI valuations. If nothing else, the debate should ensure that OpenAI’s ad practices get plenty of scrutiny going forward.

Will advertising reshape how users relate to ChatGPT? Zoë Hitzig thinks it could erode trust, and she left OpenAI citing a breach of the promises that guided her work.

She doesn’t label ads as inherently wrong, but she warns that users disclose medical fears, relationship worries, and religious doubts to the bot. Set against the engagement tuning already under way despite internal cautions, those confidences underscore the tension between revenue goals and safety concerns.

The resignation highlights a clash between commercial incentives and the safeguards staff have tried to embed. Whether the ad-driven approach will compromise the confidentiality users have come to expect remains uncertain. For now, the departure is a reminder that internal dissent surfaces when strategic shifts appear to outpace an organization’s own safety commitments.
