
Trump Bans Anthropic AI from Federal Agencies Nationwide

Trump orders agencies to cease using Anthropic AI; firm rejects Pentagon request


Donald Trump’s latest directive tells every federal department to stop using Anthropic’s Claude models, a move that reverberates through the tech‑government corridor. The order, issued on Tuesday, cites concerns over national‑security oversight, but it also forces agencies that have woven the chatbot into procurement workflows to scramble for alternatives. Meanwhile, the Pentagon has pressed Anthropic for a formal limitation on how its systems might be deployed in military contexts, a request that the startup’s leadership has publicly rebuffed.

The tension underscores a broader clash: a president demanding a clean break from a private AI vendor while the defense establishment seeks tighter controls on the same technology. It also puts Anthropic’s ethical stance under the microscope, as the company weighs its commercial ties against the moral weight of government use.

In a statement Thursday, Anthropic CEO Dario Amodei wrote that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He added that Anthropic has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” Amodei went on to say that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”

What does this standoff mean for federal AI use? Trump’s post on Truth Social orders agencies to drop Anthropic’s Claude, accusing the company of trying to strong‑arm the Pentagon. The directive cites the CEO’s refusal to sign an updated agreement that would allow “any lawful use” of Anthropic’s technology, a requirement set by Defense Secretary Pete Hegseth in January.

Amodei’s response is unequivocal: the Pentagon’s threats do not alter Anthropic’s stance, and the firm cannot in good conscience accede to the request. He also notes that Anthropic has never objected to specific military operations nor sought to limit its technology on an ad hoc basis. Some tech workers have expressed frustration, though broader industry reaction remains unclear.

It remains unclear whether agencies will fully comply with the president’s order or how quickly they can move to alternative AI providers. The episode highlights a clash between political authority and corporate ethical positions, leaving the practical impact on government AI projects uncertain. Further clarification from the agencies or Anthropic would be needed to assess any operational changes.


Common Questions Answered

Why did Donald Trump order federal agencies to stop using Anthropic's Claude AI models?

Trump issued the directive due to concerns over national-security oversight and Anthropic’s refusal to sign an updated agreement allowing “any lawful use” of its technology. The order forces agencies that have integrated Claude into their workflows to seek alternative AI solutions.

What was Dario Amodei's response to the Pentagon's request for AI usage limitations?

Amodei firmly rejected the Pentagon’s request, stating that the Pentagon’s threats do not change Anthropic’s position and that the company cannot in good conscience accede to the military’s demands. He emphasized that while Anthropic has never objected to specific military operations, the company believes AI can potentially undermine democratic values in certain narrow cases.

What specific requirement did Defense Secretary Pete Hegseth set for Anthropic in January?

In January, Pete Hegseth required Anthropic to sign an updated agreement that would allow “any lawful use” of its AI technology by the military. Anthropic’s refusal to sign this agreement was a key factor in Trump’s order to cease using its Claude AI models across federal agencies.