
AI Safety Crisis: Top Researchers Quit Anthropic, OpenAI


The AI community is buzzing with a string of oddball moves that feel more like a circus than a research lab. Over the past month, a handful of senior scientists have walked out of their positions, citing everything from ethical doubts to a growing sense that corporate messaging is outpacing technical rigor. At the same time, a new wave of “bot‑hiring” services has emerged, promising to match language‑model developers with human operators for tasks that were once fully automated.

Anthropic, one of the better‑known startups, has drawn particular fire after a recent advertisement seemed to poke at a competitor’s weaknesses, prompting a sharply worded open letter in The New York Times. Adding to the swirl, a high‑profile party hosted by Evie Magazine turned into a networking hub where the industry’s next‑generation ambitions were aired—chief among them OpenAI’s looming rollout of AI companions for adult users. All these threads converge on a single point: the line between research, product hype, and public perception is getting thinner, and the stakes are rising fast.

I think Anthropic is kind of hitting it where it hurts with that ad, and so, frankly, is this researcher who penned the letter in The New York Times. Brian Barrett: And we're still just a few weeks or so away from OpenAI going to adult companionship with AI models, right? Speaking of lofty goals in AI, I do want to ask Zoë and Leah, have you all heard of the website, RentAHuman?

Leah Feiger: Yeah, and I didn't want to know that our AI overlords had already figured out how to rent us. I mean, explain this further, please, Brian, but that's the gist of it. They figured out how to have us do their tasks for them.

Brian Barrett: Yeah, I mean it's right there in the name, it's RentAHuman, and it's a site where AI agents can hire human beings to do all of those things in the real world that they can't do because they are AI. The tasks range from the ridiculous to the more ridiculous. Someone was offering … by someone, I mean, some AI agent, was offering 30 bucks an hour to count pigeons in Washington, DC; another was paying $75 an hour to deliver CBD gummies.

Resignations at leading AI labs underscore a growing unease that cannot be ignored. While researchers publicly cite safety concerns, the specific triggers behind their departures remain opaque, and whether their warnings will translate into policy shifts is still unclear. Meanwhile, Rent‑A‑Human illustrates a new, contentious model where AI agents outsource work to people, sparking debate over accountability and labor ethics.

The platform’s visibility has grown, but its long‑term impact on the AI‑human relationship is uncertain. At Evie’s recent gathering, attendees sensed a cultural undercurrent that some believe could shape upcoming elections, though that influence is not yet measurable. Anthropic’s latest advertisement appears to take direct aim at a competitor, as the podcast exchange above suggests, and the researcher’s New York Times letter adds a personal dimension to the broader safety discourse.

Finally, OpenAI’s timeline for adult‑companionship AI models—just a few weeks away—raises questions about readiness and regulatory oversight. All these threads converge, but whether they will coalesce into meaningful change is still uncertain.

Common Questions Answered

Why did OpenAI researcher Zoë Hitzig resign from the company?

Hitzig resigned due to deep reservations about OpenAI's emerging advertising strategy for ChatGPT. She warned that the platform contains an unprecedented 'archive of human candor' in which users share deeply personal information, and that introducing ads could open the door to manipulating users in ways they cannot fully understand.

What specific concerns did Hitzig raise about ChatGPT's potential advertising model?

Hitzig argued that ChatGPT users have shared extremely personal information believing they were talking to something without an ulterior agenda, including medical fears, relationship problems, and spiritual beliefs. She fears that building an advertising business on top of this data could create incentives to subtly shape user behavior in potentially dangerous and unpredictable ways.

What parallels did Hitzig draw between OpenAI and social media companies like Facebook?

Hitzig compared OpenAI's current trajectory to Facebook's early days, when the company initially promised user data control but eventually prioritized engagement and profit. She warned that OpenAI may be creating an 'economic engine' with strong incentives to override its own privacy principles, potentially repeating the mistakes of previous tech platforms.