Anthropic CEO Dario Amodei on February 26, 2026, published a formal statement refusing to remove two specific safeguards from the company's contracts with the US Department of War - the name given to the Department of Defense under an executive order signed by President Donald Trump. The dispute, which has been building for months according to sources cited by the BBC, escalated into a direct confrontation when Defense Secretary Pete Hegseth threatened to designate Anthropic a supply chain risk, a label that has, to Anthropic's knowledge, never before been applied to an American company.

The conflict is narrow in its technical scope but broad in its implications. Anthropic objects to just two use cases; Amodei said in a CBS News interview that the company already supports approximately 98 to 99 percent of potential applications. The two exceptions are mass domestic surveillance and fully autonomous weapons - systems that engage targets without any human in the decision loop. Everything else, from intelligence analysis and cyber operations to modeling, simulation, and operational planning, remains available to the Department under existing contracts.

"These threats do not change our position: we cannot in good conscience accede to their request," Amodei wrote in the February 26 statement published on Anthropic's website.

What the Pentagon demanded

The Department of War told Anthropic it would only contract with AI companies that accept "any lawful use" of their tools. According to the BBC's reporting, Hegseth demanded a meeting with Amodei, which took place on a Tuesday. Two days later - the day the public statement was released - an Anthropic spokeswoman told the BBC that contract language sent by the Pentagon on Wednesday night represented "virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." She added that "new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."

In the CBS News interview, Amodei described the negotiation process as driven by a three-day ultimatum. "They gave us an ultimatum to agree to their terms in 3 days or be designated a supply chain risk or defense production act," he said. At one point during that window, the Pentagon sent contract language that appeared on the surface to concede Anthropic's terms, but Amodei said it contained carve-outs such as "if the Pentagon deems it appropriate," which he characterized as offering no meaningful concession.

Pentagon spokesman Sean Parnell, the day before Anthropic's statement was published, reiterated the department's position on social media: the department would contract only on "any lawful use" terms. Amodei read that as confirmation that no real agreement had been reached.

The two red lines explained

Anthropic's objections are technical as much as ethical, and Amodei was careful to draw that distinction. On domestic mass surveillance, the company's February 26 statement noted that current law already permits the government to purchase detailed records of Americans' movements, web browsing, and associations from commercial data brokers - without a warrant. The Intelligence Community has itself acknowledged this raises privacy concerns. What changes with AI, Anthropic argues, is scale and automation. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life - automatically and at massive scale," Amodei wrote.

In other words, the legal framework predates the capability. The Fourth Amendment's judicial interpretation, Amodei said in the CBS interview, has simply not caught up with what AI now makes possible. He cited this as a reason Congress should act, while acknowledging Congress rarely moves fast enough to keep pace with the technology.

The second objection - fully autonomous weapons - rests on a reliability argument. Partially autonomous systems, such as those currently deployed in Ukraine, are not what Anthropic is refusing to support. The company's objection is specifically to weapons that "take humans out of the loop entirely and automate selecting and engaging targets," as described in the February 26 statement. Amodei told CBS News that current AI models have "a basic unpredictability to them that in a purely technical way we have not solved." He raised the prospect of an army of 10 million drones coordinated by a small number of people - or even one person - and questioned whether adequate accountability structures exist for that scenario.

Anthropic said it offered to work with the Department on research and development to improve the reliability of autonomous systems in a sandbox environment. According to the February 26 statement, the Department did not accept that offer.

The supply chain designation

Hegseth threatened two distinct coercive measures. The first is invoking the Defense Production Act, which gives the US president authority to deem a company's product so essential to national security that the government can compel it to meet defense needs. The second is the supply chain risk designation itself, which would restrict military contractors from using Anthropic's technology in any work touching military contracts.

Amodei pointed out in the CBS interview that the two measures are at odds: one labels Anthropic a security risk, while the other treats Claude as essential to national security. "These latter two threats are inherently contradictory," the February 26 statement reads.

As of the time of the interview, Amodei said Anthropic had received no formal legal action - only tweets from the president and from Secretary Hegseth. "When we receive some kind of formal action, we will look at it. We will understand it and we will challenge it in court," he said. Hegseth's public post, according to Amodei, claimed that any company with a military contract could not do business with Anthropic at all - a characterization Amodei disputed as inaccurate. He said the law's actual scope is narrower: it prohibits using Anthropic as part of military contracts specifically, not as a general prohibition on commercial relationships.

A former Department of Defense official who spoke to the BBC anonymously described Hegseth's grounds for either measure as "extremely flimsy."

Emil Michael, the US Undersecretary for Defense, personally attacked Amodei on social media on Thursday night, writing that the executive "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." In a CBS News interview, Michael said the uses Anthropic fears are already barred by law and Pentagon policies.

Anthropic's history with the US government

The dispute sits in sharp relief against Anthropic's broader record of engagement with the national security apparatus. In the February 26 statement, Amodei described the company as "the first frontier AI company to deploy our models in the US government's classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers." Claude, he wrote, is extensively deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, and cyber operations.

Beyond deployment, Anthropic made deliberate commercial sacrifices in the name of national security. According to the February 26 statement, the company chose to forgo several hundred million dollars in revenue by cutting off use of Claude by firms linked to the Chinese Communist Party - some of which have been designated by the Department of War as Chinese Military Companies. It also shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and advocated for strong export controls on chips to maintain a democratic technological advantage.

"We believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," Amodei wrote. That framing makes the confrontation more complicated than a simple commercial dispute. Amodei is not arguing that the US military should have less AI capability. He is arguing that the 1 percent of use cases he objects to do not actually advance national security, while the 99 percent he supports would continue unimpeded.

In the CBS interview, Amodei noted that no one on the ground, to Anthropic's knowledge, had actually run into the limits of either exception. "These are 1% of use cases and ones that we have seen no evidence on the ground have been done," he said. Uniformed military officers, he added, had told him that losing access to Claude would set military operations back "six months, 12 months, maybe longer."

Anthropic's commercial position

Amodei was direct about the business implications. "Not only survive it, we're going to be fine," he said when asked whether Anthropic could weather the supply chain designation. He attributed much of the market disruption to the wording of Hegseth's public post, which he said was "designed to create fear, uncertainty, and doubt" by overstating the legal scope of the designation.

The company completed a $13 billion Series F funding round at a $183 billion valuation in September 2025, with Claude revenue growing from $1 billion to over $5 billion in eight months. That financial position gives the company considerable runway to withstand revenue disruption from government contracts while it contests any formal action. There are also competitor AI companies that would presumably accept the Pentagon's "any lawful use" terms, and Amodei acknowledged as much - suggesting the government has other options if it chooses to proceed.

Should the Department choose to move ahead with offboarding, Anthropic committed to enabling a smooth transition. "Our models will be available on the expansive terms we have proposed for as long as required," the February 26 statement reads. The company said it would help any replacement provider get up to speed without disrupting ongoing military operations.

The question of authority

Perhaps the sharpest challenge Amodei faced in the CBS interview was the most direct: why should a private company have more say over AI use in the military than the Pentagon itself? His answer had two parts. First, he argued that Anthropic, as a private company, retains the right to choose what it sells and to whom - and that the normal commercial response to disagreement would have been for the Pentagon to simply choose a different contractor. Instead, the government extended the threat beyond the Department of War to other government agencies and attempted to revoke contracts punitively across a wider sphere.

Second, he distinguished the novelty of AI from established military technologies like aircraft. A general, Amodei argued, has decades of experience understanding what aircraft can and cannot do. No one has that institutional knowledge yet for AI systems that are doubling in capability every four months. "We are the ones who see this technology on the front line," he said. That gives AI companies a form of expertise about their own products' reliability and failure modes that the Pentagon currently lacks.

The DOJ, separately, is already navigating its own questions about where Claude fits into legal frameworks. Federal prosecutors argued in February 2026 that conversations with Claude do not qualify for attorney-client privilege, a case that highlights how existing legal structures have yet to adapt to the AI era - precisely the argument Amodei made in defense of his company's position on domestic surveillance.

Why this matters for the AI industry

The standoff represents an early test of whether AI companies can maintain use-case restrictions on their models once those models become embedded in critical infrastructure. The answer has implications well beyond national security. Anthropic has proposed a broader transparency framework for frontier AI that would require major developers to publicly disclose their safety practices - a framework premised on the idea that companies, not governments alone, can meaningfully shape how powerful AI is deployed.

The company also committed in July 2025 to comply with the EU's General-Purpose AI Code of Practice, which requires documentation of risk assessment processes for the most advanced AI models. That alignment with European regulatory frameworks stands in visible contrast to the collision with the US executive branch now unfolding publicly.

For the marketing technology community, the dispute carries indirect but real significance. The question of whether AI companies can hold firm on use-case restrictions - in the face of a government customer wielding legal and financial pressure - will shape how trust in AI systems develops across all sectors. If the restrictions can be overridden through threat, then every company deploying Claude-based tools, from ad tech platforms to brand safety services, faces greater uncertainty about the consistency of the model's behavior across deployment contexts.

Amodei put the longer-term institutional solution plainly in the CBS interview: Congress needs to act to establish guardrails that let the US defeat its adversaries without undermining the values those adversaries are being defeated to protect. Until that happens, he said, private companies that develop frontier AI will remain in the position of drawing lines they cannot enforce legally - and defending them publicly when those lines are challenged.

Timeline

  • Several months before February 2026 - Tensions between Anthropic and the Pentagon begin, before it was publicly known that Claude was used in a US operation to seize Venezuelan President Nicolás Maduro, according to a source cited by the BBC
  • September 2025 - Anthropic completes a $13 billion Series F funding round at a $183 billion valuation, with Claude revenue having grown from $1 billion to over $5 billion in eight months
  • Tuesday, approximately February 24, 2026 - Defense Secretary Pete Hegseth demands and holds a meeting with Dario Amodei
  • Wednesday night, February 25, 2026 - Pentagon sends updated contract language to Anthropic; company says it represents "virtually no progress" on the two safeguards at issue
  • Wednesday, February 25, 2026 - Pentagon spokesman Sean Parnell reiterates on social media that the department will only accept "any lawful use" terms
  • February 26, 2026 - Dario Amodei publishes a formal statement on Anthropic's website refusing to remove safeguards; the statement confirms Anthropic was the first frontier AI company to deploy models in US government classified networks, at the National Laboratories, and to provide custom models for national security customers
  • February 26, 2026 - Secretary Hegseth posts on social media designating Anthropic a "supply chain risk" and making claims about its scope that Amodei publicly disputes as inaccurate
  • February 26, 2026 - Emil Michael, US Undersecretary for Defense, attacks Amodei personally on social media
  • February 26-28, 2026 - Amodei gives exclusive interview to CBS News, confirming no formal legal action has yet been received by Anthropic - only tweets
  • February 28, 2026 - DOJ case on Claude attorney-client privilege remains active in the Southern District of New York, adding a second legal front involving federal government and Anthropic

Summary

Who: Anthropic CEO Dario Amodei and the US Department of War (Department of Defense), led by Defense Secretary Pete Hegseth. Also involved: Emil Michael (US Undersecretary for Defense), Pentagon spokesman Sean Parnell, and unnamed uniformed military officers who spoke with Amodei about operational dependence on Claude.

What: Anthropic refused to remove two safeguards from its contracts with the US military - prohibitions on using Claude for mass domestic surveillance and for fully autonomous weapons. The Pentagon responded by threatening to designate Anthropic a supply chain risk, invoke the Defense Production Act, and revoke contracts across the broader government. As of February 28, 2026, no formal legal action had been received by Anthropic, only public posts from the president and the secretary.

When: The formal confrontation became public on February 26, 2026, when Amodei published a statement on Anthropic's website. The underlying negotiations had been ongoing for several months.

Where: Washington, DC, and San Francisco. The dispute involves Anthropic's AI model Claude, which is deployed across classified government networks, the National Laboratories, and multiple national security agencies. Formal escalation occurred through social media posts and a statement on Anthropic's website.

Why: Anthropic argues that current AI systems are not reliable enough to power fully autonomous weapons, and that AI-enabled mass domestic surveillance has outpaced existing legal protections - specifically the Fourth Amendment's judicial interpretation and congressional legislation. The company says these two use cases, representing roughly 1 percent of military applications, undermine democratic values rather than defend them. The Pentagon argues that all lawful uses should be available, and that existing law and Pentagon policies already prohibit unlawful surveillance.
