Anthropic has spent the last year manoeuvring itself into an awkward position: close enough to the U.S. national security apparatus to be treated as a frontier supplier, but far enough away to avoid getting burned. In one demonstration of this reputational yoga, it was revealed that the U.S. operation to abduct Venezuelan President Nicolas Maduro on January 3 used Anthropic’s AI model Claude.
More recently, Anthropic clashed with the Pentagon over safeguards meant to prevent fully autonomous weapons targeting and U.S. domestic surveillance. Anthropic contends that these are non-negotiable limits; the Pentagon has held that commercial AI should be available for “all lawful purposes”. The Pentagon has since also been considering designating Anthropic a “supply chain risk”, a tag that could pressure contractors to certify that they’re not using Claude.
Why has Anthropic found itself in this bind?
In 2025, the company publicly embraced a larger defence footprint by announcing an agreement with the U.S. Department of Defence worth $200 million. It was a sign that Anthropic wanted to be the lab that says ‘yes’ to national security while still operating within limits, and in the process keep up a public reputation as a company that isn’t simply another part of the military machinery. Anthropic has also been trying to present itself as an enterprise productivity company rather than only as a lab with a chatbot. Its partnership with Infosys, for instance, will pair its models with a firm that already sells compliance and governance services to heavily regulated industries.
Two ambitions
Anthropic’s bind arises because a company that can claim to operate safely in government contexts with stringent security expectations can also plausibly sell itself to banks, manufacturers, and telecom companies. To governments, Anthropic says, “We will help democratic states maintain a technological advantage, but we won’t accept deployments like autonomous targeting or expansive domestic surveillance”. In turn, it gets to say to enterprises, “We can operationalise frontier AI inside environments with strict compliance requirements”.
Unfortunately, these two ambitions have since collided. Anthropic appears to believe that conceding on autonomous targeting and domestic surveillance would destroy the line it has tried to draw with other frontier labs and entrants that are also courting defence customers.
The Pentagon, however, seems to be signalling that its vendors’ moral compunctions are beside the point, especially once the vendors are inside the defence supply chain.
The enterprise automation layer, i.e. the coding and agentic systems that allow Claude to be embedded directly within workflows rather than kept as a chatbot that enterprises use in an ad hoc way, remains one of Anthropic’s main focus areas. The company has also been pitching its models’ safety features as an advantage: its logic seems to be that regulators and enterprises will prefer these models even when competitors develop more powerful alternatives. But this also means that if Anthropic yields to the Pentagon’s demand, it could lose its signature differentiation, whereas if it refuses, the Pentagon could make an example of it.
The fact is that while Anthropic can try to control how Claude is used, its control weakens once Claude leaves the building. Anthropic can say “you may not use Claude for x” in its terms of service or train the model to refuse certain requests, but large customers rarely use an AI model as a standalone chatbot. Instead, they access it through cloud platforms, embed it inside software tools for data analysis and automation, and adapt it for specific missions. In other words, customers can work around the terms of service, and the question of Anthropic’s complicity still lingers.
In this sense, Anthropic’s recent decisions are probably coherent. It has bet that a market nervous about AI, whether governments worried about adversaries or enterprises worried about liability, will pay a premium for a developer that can both deploy frontier models and restrain them. The dispute with the Pentagon is the first major demonstration of what it will cost Anthropic to do both at once.
Published – February 22, 2026 01:53 am IST