Anthropic today filed a 48-page federal complaint in the Northern District of California, suing more than a dozen US government agencies and their heads after the Trump administration ordered every federal agency to cease using its Claude AI technology. The lawsuit, Case 3:26-cv-01996, seeks declaratory and injunctive relief on five constitutional and statutory grounds. It is one of the most direct legal challenges an AI company has ever mounted against the executive branch.

The complaint lands just ten days after President Donald Trump posted a directive on Truth Social on February 27, 2026, ordering "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." Secretary of War Pete Hegseth followed within hours, posting what he described as a "final" decision on X designating Anthropic a Supply-Chain Risk to National Security under 10 U.S.C. § 3252 - a designation that, according to the complaint, has never before been applied to a domestic company.

The dispute centers on two specific restrictions that Anthropic has always maintained in its Usage Policy: Claude may not be used for lethal autonomous warfare without human oversight, and it may not be used for mass surveillance of Americans. Those two restrictions, out of an otherwise expansive policy covering classified document handling, intelligence analysis, and offensive cyber operations, became the flashpoint for a confrontation that unfolded over several months before breaking publicly at the end of February.

What led to the lawsuit

The background stretches well before the February ultimatum. According to the complaint, Anthropic first began building government infrastructure in 2023, joining the AI Safety Institute Consortium and investing in FedRAMP authorization - the US government's cloud security framework. By the time the dispute escalated publicly, Claude was reportedly the Department of War's most widely deployed frontier AI model and the only one operating on its classified systems.

In the fall of 2025, Anthropic entered negotiations for an additional agreement to place Claude on the Department's "GenAI.mil" platform. The Department asked Anthropic to remove its Usage Policy and allow "all lawful uses" of the technology. Anthropic largely agreed, the complaint states, but held firm on the two restrictions. What followed was a pattern of escalating demands. In early January 2026, Secretary Hegseth issued an internal memorandum directing the Department's procurement office to "incorporate standard 'any lawful use' language into any DoW contract" for AI services within 180 days.

On February 24, 2026, Secretary Hegseth met with Anthropic CEO Dario Amodei and presented a four-day ultimatum: comply or face either invocation of the Defense Production Act or designation as a supply chain risk. According to the complaint, Pentagon officials confirmed to the press that the meeting "was not intended to drive resolution, but rather to intimidate Anthropic." A senior Pentagon official told one news outlet that Anthropic had "until 5:01pm [Eastern] Friday to get on board."

On February 26, Dr. Amodei published a public statement. He wrote that Anthropic "cannot in good conscience accede to" the Department's request, explaining that the two restrictions address uses that are "simply outside the bounds of what today's technology can safely and reliably do." He also stated that "[o]ur strong preference is to continue to serve the Department and our warfighters - with our two requested safeguards in place."

The next day - before the 5:01 p.m. deadline had even passed - President Trump posted his directive on Truth Social. The post described Anthropic as a "RADICAL LEFT, WOKE COMPANY" and threatened "major civil and criminal consequences." Secretary Hegseth's post on X followed at 2:14 p.m. Pacific time, stating that "Anthropic's stance is fundamentally incompatible with American principles" and that "Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered."

The cascade of agency actions

The fallout moved quickly across the federal government. The General Services Administration announced on February 27, the same day as the presidential directive, that it was removing Anthropic from USAi.gov and the Multiple Award Schedule contracts - effectively closing off procurement opportunities at the federal, state, and local levels. The GSA also terminated Anthropic's "OneGov" contract, which had made Claude available across all three branches of the federal government.

On March 2, 2026, Treasury Secretary Scott Bessent announced via X that the Treasury was terminating all use of Anthropic products. The same day, the Federal Housing Finance Agency - along with Fannie Mae and Freddie Mac - announced the same. The State Department said it was switching the model powering its in-house chatbot from Claude to OpenAI. The Departments of Health and Human Services and Veterans Affairs circulated internal memoranda directing employees to stop using Anthropic services. The complaint names 17 federal agencies and 18 individual officials as defendants, including the Secretary of State, the Secretary of Energy, the Chairman of the Federal Reserve Board of Governors, the NASA Administrator, and the Chairman of the Securities and Exchange Commission.

Then, at 8:48 p.m. Eastern on March 4, the Secretary of War sent Anthropic a two-page letter - dated March 3 - formally notifying the company of the supply chain risk designation. The letter arrived almost a week after Hegseth had already announced the designation on social media. It stated that Anthropic's products present a supply chain risk, that using Section 3252 authority is "necessary to protect national security," and that "less intrusive measures are not reasonably available." The letter gave no further explanation.

Anthropic notes a striking internal contradiction. The same Secretary who described Anthropic's technology as having "exquisite capabilities" - and whose Department reportedly used Claude in a major air operation in Iran on February 28, hours after the ban was announced - simultaneously designated that same technology a national security risk requiring immediate exclusion. The complaint argues this unexplained inconsistency is itself evidence of arbitrariness under the Administrative Procedure Act.

The technical picture of Claude in government

Understanding what is at stake requires understanding what Claude actually does in military and government contexts. According to the complaint, Anthropic developed specialized Claude Gov models tailored for national security use. These models handle classified information, have enhanced proficiency in critical languages, and perform sophisticated cybersecurity analysis. Unlike the civilian version of Claude, the Gov models are less prone to refuse requests related to handling classified documents, military operations, or threat analysis.

Claude is also capable of operating in agentic configurations - completing complex, multi-step tasks with minimal ongoing human input. This includes autonomously writing and executing code, navigating online resources, and interacting with external services. The complaint describes these agentic uses as presenting "heightened risks compared to traditional, prompt-response chatbot interactions," which is precisely why the two contested restrictions exist.

Importantly, Anthropic's government-specific addendum to the Usage Policy already permitted things not available to civilian users - including analysis of lawfully collected foreign intelligence information. The line Anthropic refused to cross was not about limiting legitimate intelligence work. It was about two specific use cases: deploying weapons systems that might kill without human oversight, and collecting and analyzing information about Americans at a scale and speed that existing surveillance law was never designed to address.

The complaint makes a technical argument on this point. Large language models, it explains, generate outputs "by sampling from a probability distribution rather than by selecting outputs pursuant to predefined rules." The same model, given the same input twice, may produce different results. That inherently probabilistic behavior, Anthropic argues, makes lethal autonomous deployment not merely inadvisable but genuinely unsafe under current capabilities.
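The sampling behavior the complaint describes can be illustrated with a toy example. This is a minimal sketch with a made-up token distribution - the token names and probabilities are invented for illustration and have nothing to do with any real model's decoding code - but it shows why identical inputs can yield different outputs:

```python
import random

# Toy next-token distribution for one fixed prompt. An LLM's decoder
# assigns probabilities to candidate tokens and then samples from them;
# these tokens and weights are purely illustrative.
NEXT_TOKEN_PROBS = {"alpha": 0.40, "beta": 0.35, "gamma": 0.25}

def sample_token(probs, rng):
    """Draw one token from a probability distribution (sampling-based decoding)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the tail

# Identical "prompt", two independent sampling runs: the sequences differ.
rng_a = random.Random(1)
rng_b = random.Random(2)
run_a = [sample_token(NEXT_TOKEN_PROBS, rng_a) for _ in range(5)]
run_b = [sample_token(NEXT_TOKEN_PROBS, rng_b) for _ in range(5)]
```

Only fixing the random seed makes the output reproducible; across independent runs, the same input maps to different output sequences - the nondeterminism the complaint says is incompatible with lethal autonomous use.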

The FedRAMP authorization that Claude holds represents the highest level of cloud security certification available for unclassified and controlled unclassified information. Security clearances for Anthropic personnel remained in place throughout the dispute. The complaint points out that the FedRAMP authorization and the facility clearances "could not have been issued had any such determination been made" that Anthropic posed a supply chain risk. The supply chain designation is thus, in Anthropic's framing, contradicted by the very authorizations the government itself granted and never revoked.

The five legal claims

The complaint advances five distinct legal claims.

The first targets the Secretarial Order and Letter under the Administrative Procedure Act, arguing that the supply chain designation exceeds the authority granted by 10 U.S.C. § 3252. That statute, Anthropic argues, covers risks of sabotage or subversion by foreign adversaries - not a vendor who declines to modify its usage terms. The government defined "adversary" in an earlier executive order to mean China, Russia, Iran, North Korea, Cuba, and Venezuela. Anthropic is a US-incorporated Delaware public benefit corporation with no ties to any of those countries.

The second claim is a First Amendment retaliation argument. Anthropic had publicly advocated for AI safety policy, backed multiple bipartisan safety bills, and expressed - both publicly and in negotiations - its views about what Claude can and cannot do safely. The complaint argues the government retaliated against those protected expressions. It cites the Supreme Court's 2024 ruling in National Rifle Association v. Vullo, 602 U.S. 175 (2024), for the proposition that the government "may not employ the power of the State to punish or suppress disfavored expression."

The third claim is that the Presidential Directive is ultra vires - outside any statutory authority Congress has granted the executive. The comprehensive federal procurement framework does not include a mechanism for the president to personally order the mass cancellation of all contracts with a specific private company via a social media post.

The fourth claim is a Fifth Amendment due process violation. Anthropic argues it has protected property interests in existing contracts and liberty interests in its reputation and ability to operate. The government stripped those interests without any prior notice, written findings, or meaningful opportunity to be heard. The complaint quotes a recent district court: if the government must provide due process before terminating its own contractor, "surely it must do the same before blacklisting an entity from all its contractors' Rolodexes."

The fifth claim is an additional APA count focused on the actions of non-Department agencies. The APA prohibits sanctions or orders outside "jurisdiction delegated to the agency and as authorized by law." None of the statutes governing the GSA, the Treasury, HHS, or the State Department, the complaint argues, authorize them to issue sweeping sanctions against a private company in response to a presidential social media post.

What Anthropic is asking for

The relief sought is substantial. The complaint asks the court to declare each challenged action unlawful and to permanently enjoin all defendants from implementing, applying, or enforcing them. It also asks the court to require defendants to rescind all implementing guidance and to issue new guidance directing their officers and contractors to disregard the challenged actions. Attorneys' fees are also requested.

The complaint was filed by WilmerHale attorneys including Kelly Dunbar, Joshua Geltzer, Susan Hennessey, and Michael Mongan, among others.

Why this matters for the marketing and tech community

For businesses and marketing professionals using Claude-based tools, the case raises immediate practical questions. According to the complaint, dozens of private companies contacted Anthropic after the Challenged Actions seeking clarity on their own obligations - including customers, cloud providers, and investors. One federal contractor indicated it might suspend custom Claude applications or remove Claude from existing deployments.

Anthropic has been growing rapidly - the company reached a $183 billion valuation in September 2025 with over $5 billion in annualized revenue and more than 300,000 business customers. The government's supply chain designation carries weight well beyond the federal sector. As the complaint notes, that label "will follow Anthropic into every future procurement relationship across the federal government and with federal contractors, not to mention relationships with states and local governments and customers in other sectors."

The case also bears directly on how AI companies can maintain usage policies when their technology becomes embedded in critical infrastructure. If usage restrictions can be overridden through political pressure, the consistency of behavior that enterprise customers rely on becomes unpredictable. The Department of Justice has separately been testing where Claude fits into existing legal frameworks, with federal prosecutors arguing in February 2026 that conversations with Claude do not qualify for attorney-client privilege - a position with wide implications for professionals using AI tools.

The timing also intersects with Anthropic's active copyright litigation. The company reached a $1.5 billion settlement in September 2025 over pirated books used in training data, and faces a separate $3 billion lawsuit from music publishers, filed in January 2026. A company simultaneously navigating multiple billion-dollar legal exposures while fighting executive branch retaliation represents an extraordinary degree of legal risk concentration.


Timeline

  • March 2023 - Anthropic publicly launches Claude; the company begins building government infrastructure and pursuing FedRAMP authorization
  • November 7, 2024 - Anthropic and Palantir announce partnership to bring Claude to AWS for US government intelligence and defense operations
  • 2024 - Anthropic becomes the first frontier AI lab to evaluate a model in a Top Secret classified environment with the Department of Energy; Claude is deployed on the Department of War's classified systems
  • July 14, 2025 - CDAO awards agreements to Anthropic, Google, OpenAI, and xAI, each with a $200 million ceiling value, as part of its commercial-first AI adoption approach
  • September 2, 2025 - Anthropic completes a $13 billion Series F funding round at a $183 billion valuation; annual revenue exceeds $5 billion
  • Fall 2025 - Anthropic and the Department of War begin negotiations over deploying Claude on the "GenAI.mil" platform; the Department requests removal of the Usage Policy in exchange for an "all lawful use" framework
  • Early January 2026 - Secretary Hegseth issues a memorandum directing the Department's procurement office to incorporate "any lawful use" language into all AI contracts within 180 days
  • January 12, 2026 - Anthropic launches Cowork, extending Claude Code capabilities to non-developers
  • January 28, 2026 - Music publishers file a $3 billion copyright lawsuit against Anthropic over alleged BitTorrent piracy
  • February 5, 2026 - Anthropic announces Claude will remain ad-free; launches Super Bowl campaign targeting OpenAI
  • February 6, 2026 - DOJ argues Claude conversations lack attorney-client privilege in securities fraud case
  • February 16, 2026 - Axios reports that the Pentagon is threatening to designate Anthropic a supply chain risk; a Pentagon source says Anthropic will "pay a price"
  • February 19, 2026 - Pentagon CTO publicly describes Anthropic's usage restrictions as "not democratic"
  • February 24, 2026 - Secretary Hegseth meets with Dario Amodei; presents a four-day ultimatum to remove safeguards or face Defense Production Act invocation or supply chain designation
  • February 26, 2026 - Dario Amodei publishes a public statement refusing to remove the two AI safeguards, saying Anthropic "cannot in good conscience accede" to the demands
  • February 27, 2026, 12:47 p.m. PT - President Trump posts a directive on Truth Social ordering every federal agency to immediately cease using Anthropic's technology
  • February 27, 2026, 2:14 p.m. PT - Secretary Hegseth posts a "final" decision designating Anthropic a Supply-Chain Risk to National Security; bars military contractors from commercial activity with Anthropic; grants a six-month phase-out period for the Department of War
  • February 27, 2026 - GSA removes Anthropic from USAi.gov, the Multiple Award Schedule, and terminates the OneGov contract; HHS internally disables enterprise Claude
  • February 28, 2026 - Reports emerge that the Department used Anthropic tools in a major air operation in Iran, hours after the ban announcement
  • March 2, 2026 - Treasury Secretary Bessent and the Federal Housing Finance Agency announce termination of all Anthropic use; State Department switches its chatbot from Claude to OpenAI
  • March 3, 2026 - The Secretary of War signs a formal supply chain risk letter to Anthropic; Defense One reports that Pentagon officials confirm the designation was "ideologically driven" with "no evidence of supply-chain risk"
  • March 4, 2026, 8:48 p.m. ET - The supply chain risk letter is received by Anthropic, nearly a week after the social media announcement
  • March 9, 2026 - Anthropic files Case 3:26-cv-01996 in the Northern District of California, seeking declaratory and injunctive relief on five counts

Summary

Who: Anthropic PBC, a Delaware public benefit corporation headquartered in San Francisco, filed the lawsuit. The defendants are 17 US federal agencies and 18 individual officials including Secretary of War Pete Hegseth, Treasury Secretary Scott Bessent, Secretary of State Marco Rubio, and the heads of the GSA, HHS, NASA, the SEC, and the Federal Reserve, among others.

What: Anthropic filed a 48-page complaint for declaratory and injunctive relief challenging five actions: a presidential directive ordering all federal agencies to cease using Claude; a Secretarial Order designating Anthropic a Supply-Chain Risk to National Security; a formal supply chain risk letter dated March 3, 2026; individual agency actions implementing the directive; and the GSA's termination of the OneGov contract. Anthropic alleges violations of the APA, the First Amendment, the Fifth Amendment's due process clause, and ultra vires executive action.

When: The complaint was filed on March 9, 2026. The underlying confrontation became public on February 27, 2026, when the Presidential Directive and Secretarial Order were posted on social media, though negotiations had been ongoing since the fall of 2025.

Where: The case was filed in the US District Court for the Northern District of California. The dispute involved Anthropic's Claude AI models, which were deployed across classified networks, National Laboratories, and multiple national security agencies at the time of the ban.

Why: Anthropic refused to remove two restrictions from its Usage Policy that prohibit Claude from being used for lethal autonomous warfare without human oversight and for mass surveillance of Americans. According to the complaint, the company maintains these restrictions because Claude has not been trained or tested for those uses and lacks the reliability required to perform them safely. Anthropic argues the government's response - a government-wide ban, a supply chain designation, and contract terminations affecting hundreds of millions of dollars in revenue - constitutes unconstitutional retaliation against protected speech and petitioning activity, and exceeds the legal authority granted by Congress.
