
Judge grants Anthropic temporary block on Pentagon ban in federal court


A federal judge has issued a temporary injunction that halts the Pentagon’s recent prohibition on using Anthropic’s artificial‑intelligence tools. The order, filed in a Washington‑based court, stems from a dispute over whether the ban applies to contractors who supply information‑technology services to the Department of War, even when those services fall outside direct defense‑related projects. During the hearing, the judge pressed a Department of War spokesperson for clarification on the scope of the restriction, asking whether an employee could be dismissed for simply employing Anthropic software in a non‑military context.

The official’s response hinted at a narrow interpretation, but the judge followed up with a more pointed question about contractors whose work, while not classified as “national security,” still supports the department’s broader IT infrastructure. The exchange underscores the uncertainty facing tech workers and vendors navigating the new policy, and it sets the stage for the quoted dialogue that follows.

"I'm not going to be terminated for using Anthropic -- is that accurate?" the judge asked. The representative for the Department of War responded, "For non-DoW work, that is my understanding." But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative did not give a concrete answer.

During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term "attempted corporate murder." "I don't know if it's 'murder,'" she said, "but it looks like an attempt to cripple Anthropic." "We are continuing to be irreparably injured by this directive," a lawyer for Anthropic said during the hearing, citing Hegseth's nine-paragraph X post.

In a recent court filing, the Department of Defense alleged that Anthropic could "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if it felt the military was crossing its red lines -- a theoretical scenario that the Pentagon deemed an "unacceptable risk to national security." The judge's pre-released questions appear to challenge that claim, or at least to request more information on it: "What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?"

The court’s preliminary injunction halts the Pentagon’s blacklist for now, leaving Anthropic free to continue its operations while the case proceeds. Judge Lin’s characterization of the ban as “classic illegal First Amendment retaliation” underscores the legal framing of the dispute, yet the ruling does not resolve the underlying policy clash between the Department of War and the AI firm. The department’s representative indicated that employees should not be terminated for using Anthropic’s tools in non‑DoW work, but the judge’s questions about contractors performing non‑national‑security IT services point to lingering ambiguity.

Whether the injunction will stand after full adjudication remains uncertain; the parties have yet to outline how compliance will be monitored or what standards will govern future AI usage in defense contexts. In short, the decision offers a temporary reprieve for Anthropic, but it leaves open several procedural and substantive questions that the courts will need to address before a lasting resolution emerges.
