

OpenAI and Google staff back Anthropic's Pentagon lawsuit after Trump label


Employees at OpenAI and Google have signed on to Anthropic’s legal challenge against the Department of Defense, signaling a rare alignment among rival AI firms. The backing comes after the Trump administration slapped Anthropic with a “supply chain risk” tag—a label the government usually reserves for foreign entities it views as potential national‑security threats. While the designation itself is unusual, the timing raises questions about how policy decisions intersect with corporate stances on AI governance.

Anthropic’s refusal to compromise on two key issues appears to have triggered the label, prompting the company to sue the Pentagon over what it calls an unjustified restriction. Industry insiders note that the collective support from OpenAI and Google staff adds weight to Anthropic’s claim, suggesting broader concern over the precedent such a designation could set for domestic AI developers.

The news follows a dramatic few weeks for Anthropic, in which the company stood firm on two red lines regarding acceptable military use of its technology: domestic mass surveillance and fully autonomous weapons (AI systems with the power to kill with no human involvement). Negotiations broke down, followed by public insults and by rival AI companies stepping in to sign contracts permitting "any lawful use" of their technology. The supply chain risk designation not only bars Anthropic from military contracts; it also blacklists other companies that used Anthropic products in their work for the Pentagon, forcing them to rip out Claude if they wished to keep their lucrative contracts. As the first model cleared for classified intelligence, however, Anthropic's tools are already deeply integrated into the Pentagon's operations, so much so that the fallout began just hours after Defense Secretary Pete Hegseth announced the designation.

What does this collective push mean for the broader AI community? The amicus brief, signed by roughly 40 engineers and scientists from OpenAI and Google—including Jeff Dean, Google’s chief scientist and Gemini lead—adds a notable industry voice to Anthropic’s challenge against the Pentagon’s supply‑chain risk label. Their filing cites concerns about the Trump administration’s decision and the potential implications for AI technology, yet the brief stops short of outlining concrete policy remedies.

The lawsuit itself arose after Anthropic was singled out as a risk, a status traditionally applied to foreign firms, prompting questions about the criteria used for domestic AI companies. Whether the brief will sway the court or prompt a reassessment of the DoD’s labeling process remains uncertain. At present, the legal battle underscores a tension between national‑security safeguards and the evolving landscape of AI development, leaving both regulators and developers watching the outcome with cautious interest.


Common Questions Answered

Why did the Trump administration label Anthropic as a 'supply chain risk'?

The Trump administration applied the 'supply chain risk' label to Anthropic, a designation typically reserved for foreign companies perceived as potential national security threats. This unusual move came after Anthropic maintained firm boundaries about acceptable military technology use, specifically rejecting domestic mass surveillance and fully autonomous weapons systems.

How are OpenAI and Google staff supporting Anthropic's legal challenge?

Approximately 40 engineers and scientists from OpenAI and Google, including Jeff Dean (Google's chief scientist and Gemini lead), have signed an amicus brief supporting Anthropic's challenge against the Pentagon's supply chain risk label. Their collective filing highlights concerns about the Trump administration's decision and its potential broader implications for AI technology development.

What specific military technology use cases did Anthropic refuse to engage with?

Anthropic explicitly established two red lines regarding military technology use: rejecting domestic mass surveillance and refusing to develop fully autonomous weapons systems that could kill without human intervention. These principled stances appear to have influenced the Trump administration's unusual supply chain risk designation.