Defense Firm Builds Explosive AI Agents; US Moves to Limit AI Chip Sales
A small defense contractor in Virginia has quietly demonstrated that AI can do more than crunch data—it can trigger explosives in a controlled test. The engineers programmed autonomous agents to locate a target, assess a safe distance and then detonate a charge, all without human input. While the experiment was framed as a proof‑of‑concept for future battlefield logistics, it raised eyebrows across Capitol Hill.
Lawmakers have been wrestling with how far to let machine learning touch lethal force, especially as rival powers pour resources into similar projects. At the same time, Washington has been tightening export controls on the silicon that powers these systems, aiming to keep the most advanced chips out of adversary hands. Yet the policy line isn't uniform: the Trump administration recently eased some of those restrictions.
The tension between innovation and security is palpable, and it sets the stage for a broader debate about AI’s role in warfare.
Many policymakers believe that harnessing AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls. "It's good for defense tech startups to push the envelope with AI integration," says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities.
"That's exactly what they should be doing if the US is going to lead in military adoption of AI." Horowitz also notes, though, that harnessing the latest AI advances can prove particularly difficult in practice. Large language models are inherently unpredictable and AI agents--like the ones that control the popular AI assistant OpenClaw--can misbehave when given even relatively benign tasks like ordering goods online. Horowitz says it may be especially hard to demonstrate that such systems are robust from a cybersecurity standpoint--something that would be required for widespread military use.
Scout AI's recent demo involved several steps where AI had free rein over combat systems. At the outset of the mission, a command was fed into a Scout AI system known as Fury Orchestrator. A relatively large AI model with more than 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped computer on site, interprets the initial command; Scout AI uses an undisclosed open source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicles and drones involved in the exercise.
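To make that division of labor concrete, here is a minimal sketch of the hierarchical setup described above, assuming a generic Python interface: a large orchestrator model decomposes a mission command into per-platform instructions, and smaller on-board models carry them out. The class and method names (`decompose`, `act`) are hypothetical illustrations, not Scout AI's actual software.

```python
# Minimal sketch of the orchestrator pattern described above. The interfaces
# (decompose, act) are hypothetical; Scout AI has not published its APIs.
from dataclasses import dataclass


@dataclass
class SubCommand:
    platform_id: str   # e.g. "ugv-1" or "drone-2"
    instruction: str   # natural-language task for the on-board model


class Orchestrator:
    """Large (~100B-parameter) model, hosted in a secure cloud or on an
    air-gapped machine, that splits a mission command into per-platform tasks."""

    def __init__(self, large_model):
        self.model = large_model

    def plan(self, mission_command: str, platform_ids: list[str]) -> list[SubCommand]:
        # In practice this would be a constrained generation call to the model.
        instructions = self.model.decompose(mission_command, platform_ids)
        return [SubCommand(pid, text) for pid, text in zip(platform_ids, instructions)]


class EdgePlatform:
    """Smaller (~10B-parameter) model embedded on a ground vehicle or drone."""

    def __init__(self, platform_id: str, small_model):
        self.platform_id = platform_id
        self.model = small_model

    def execute(self, cmd: SubCommand) -> str:
        # The on-board model turns the instruction into local actions
        # (navigation, sensing, engagement) and returns a status report.
        return self.model.act(cmd.instruction)


def run_mission(orchestrator: Orchestrator, platforms: dict, mission_command: str) -> dict:
    """Dispatch sub-commands and collect status reports from each platform."""
    reports = {}
    for cmd in orchestrator.plan(mission_command, list(platforms)):
        reports[cmd.platform_id] = platforms[cmd.platform_id].execute(cmd)
    return reports
```

The design point this sketch illustrates is that only the small models sit on the vehicles; the heavyweight reasoning stays with the orchestrator, which matters when on-board compute and connectivity are limited.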
The latest demo shows those AI agents directing a self-driving off-road vehicle and two lethal drones to seek and destroy physical targets. The test took place at an undisclosed military base in central California, and the drones were equipped with explosives.
The technology's ability to automate kinetic actions raises questions about command responsibility and fail-safe mechanisms. It's unclear whether existing legal frameworks can adequately address autonomous weapons that act on AI-generated directives. Meanwhile, the demonstration underscores a growing willingness to embed AI in combat systems, even as the broader implications for international security remain uncertain. Further transparency about testing protocols and oversight would help clarify the risks involved.
Common Questions Answered
What are the Pentagon's current efforts to deploy AI tools on classified networks?
[Reuters](https://news.bensbites.com) reports that the Pentagon is pushing top AI companies like OpenAI and Anthropic to make their AI tools available on classified networks without standard user restrictions. Pentagon Chief Technology Officer Emil Michael has discussed making AI models available across both unclassified and classified domains during a recent White House event.
Why is the military warning EOD technicians about using generative AI systems?
[DefenseScoop](https://scoopmedia.co/4aybQuy) revealed that the EOD Technology Center warned bomb technicians against uploading restricted technical material into generative AI systems. The warning specifically centers on the Automated Explosive Ordnance Disposal Publication System (AEODPS), which contains highly sensitive material that could potentially be leaked to adversaries if uploaded to AI platforms.
What challenges do AI model outputs present for export control agencies?
[Just Security](https://www.justsecurity.org/126643/ai-model-outputs-export-control/) highlights that AI model outputs represent a distinct national security challenge beyond traditional export controls. Foreign adversaries could potentially exploit publicly deployed AI models to generate controlled technical information like missile guidance system code or advanced radar component schematics, even without possessing the model's original weights.