
OpenAI Safety Team Exodus Sparks Defense Contract Debate

OpenAI safety staff exit as Altman dismisses Pentagon contract concerns


OpenAI’s safety team has been thinning out at a pace that surprised insiders. Over a dozen engineers and researchers have left in the past month, many citing discomfort with the company’s expanding role in defense contracts. The departures follow a series of internal meetings where staff pressed leadership for more transparency about the implications of working with the Pentagon.

According to a profile built on more than 100 interviews and internal documents, the unease wasn’t just about technical risk—it touched on the moral weight of the projects themselves. When the concerns reached Sam Altman, his reply was unambiguous, cutting straight to the heart of the matter and reminding employees that policy judgments aren’t theirs to make.

When employees raised concerns after OpenAI's recent entry into Pentagon contracts, Altman was blunt: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."

Overall, the fun-to-read profile paints Altman as deeply polarizing: eager to please yet, according to one former board member, indifferent to the consequences of potential deceptions. Altman's own take: "I think what some people want is a leader who is going to be absolutely sure of what they think and stick with it, and it's not going to change."

Did OpenAI’s safety team really leave because of a cultural mismatch? The New Yorker profile suggests Altman’s personal “vibes” clash with traditional AI‑safety approaches, and that disconnect appears to be driving the wave of departures.

Anthropic, a direct competitor, was launched by former OpenAI safety staff who cited the same concerns. Altman’s blunt dismissal of employee objections highlights a tension between internal dissent and executive authority. The profile paints a picture of a firm where safety priorities may be subordinated to broader strategic goals, yet it stops short of confirming how this will affect future research directions. It remains unclear whether the exodus will slow OpenAI’s safety work or simply relocate it elsewhere.

For now, the staff turnover remains the most concrete indicator of the friction described.


Common Questions Answered

Why are OpenAI safety team members leaving the company?

Over a dozen engineers and researchers have departed OpenAI in the past month, primarily due to discomfort with the company's expanding defense contracts. Internal meetings revealed staff were pressing leadership for more transparency about the implications of working with the Pentagon, which created significant tension within the organization.

How did Sam Altman respond to employee concerns about Pentagon contracts?

Altman was notably dismissive of employee concerns, stating bluntly, "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that." His response suggests a top-down approach that minimizes staff input on ethical considerations of defense partnerships.

What alternative did former OpenAI safety staff pursue after leaving the company?

Many former OpenAI safety team members chose to join Anthropic, a competing AI company that was specifically launched by ex-OpenAI staff who shared similar concerns about the organization's direction and ethical standards. This exodus represents a significant brain drain for OpenAI and highlights the ongoing tensions in AI safety and ethics.