
Google widens Pentagon AI access after Anthropic labeled a supply‑chain risk


Google has broadened the Department of Defense’s access to its AI suite, which was previously limited to a handful of models. The move follows a standoff with Anthropic, a rival AI developer that declined to support certain military applications. In response, the Pentagon hit Anthropic with a “supply‑chain risk” label, a designation typically reserved for entities tied to foreign adversaries.

That designation sparked a legal battle, culminating in a federal judge’s temporary order that blocks the DoD from treating the startup as a risk. While Google’s expanded access appears to smooth the path for the services the DoD wants, the underlying dispute raises questions about how the government classifies and restricts AI providers. The clash also highlights the tension between commercial AI firms and national‑security priorities, a dynamic that could shape procurement policies for years to come.


Google is the third AI company to try to turn Anthropic's loss into its own gain.

OpenAI signed a deal with the DoD almost immediately after the designation, as did xAI. Google's agreement includes language stating that it does not intend its AI to be used for domestic mass surveillance or in autonomous weapons, The Wall Street Journal reports, similar to the contract language in OpenAI's deal.

Google has now opened its AI platforms to the Pentagon’s classified networks, permitting all lawful uses, officials say. The backdrop, though, is a dispute that still simmers: Anthropic declined to offer the same breadth of access, insisting on guardrails against domestic surveillance and autonomous weapons, and that refusal is what triggered the disputed designation. How the two sides will reconcile their positions remains unclear.

Meanwhile, Google’s move expands the military’s toolbox without the binding restrictions Anthropic demanded; the intent language in Google’s contract stops short of hard limits. Whether the broader access will yield new capabilities or simply duplicate existing tools remains to be seen.

The situation illustrates the tension between national‑security objectives and corporate responsibility, a balance that has yet to be fully resolved.
