A federal judge today granted a preliminary injunction blocking the US government from blacklisting Anthropic, ruling that the Department of War lacked legal authority to designate the AI company a national security supply chain risk after a dispute over how its Claude model could be used by the military.
US District Judge Rita F. Lin of the Northern District of California signed the order on March 26, 2026, in case 3:26-cv-01996, issuing a three-page preliminary injunction that bars all defendant agencies - and their agents, officers, employees, and representatives - from implementing, applying, or enforcing the February 27, 2026 presidential directive that ordered every federal agency to stop using Anthropic's technology. The order simultaneously halts the supply chain risk designation issued by Secretary of War Pete Hegseth on that same date. The injunction is stayed for seven days from its issuance, giving the government a brief window to seek emergency relief from an appeals court.
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote in her opinion accompanying the order.
The dispute: two restrictions, one confrontation
The legal fight traces back to a partnership that had, until recently, been productive. According to the Ars Technica report on the ruling, the Department of War began using Anthropic's Claude in March 2025 and had been deploying it for roughly a year without raising any concerns about Anthropic's usage terms posing a national security risk. The department publicly praised Anthropic as a partner, put it through rigorous vetting, and was planning to expand the relationship.
The breakdown came when the Department sought to deploy Claude on a military platform called GenAI.mil and demanded that Anthropic remove two specific restrictions from its usage policy: a prohibition on using Claude for mass surveillance of Americans and a prohibition on its use in fully autonomous lethal weapons. Anthropic agreed to most of the Pentagon's requests but held firm on those two points. As PPC Land reported when Anthropic filed the lawsuit, the company's position was that its own testing had found it could not guarantee that Americans' civil rights would not be infringed if Claude were used for those purposes.
On February 24, 2026, Secretary Hegseth met with Anthropic CEO Dario Amodei and presented a four-day ultimatum: accept the "any lawful use" contract language or face designation as a supply chain risk or invocation of the Defense Production Act. According to the complaint, Pentagon officials later confirmed to the press that the meeting "was not intended to drive resolution, but rather to intimidate Anthropic."
Amodei published a public statement on February 26, 2026, explaining that Anthropic "cannot in good conscience accede to" the Department's demands. The company noted that the two restrictions addressed uses that were "simply outside the bounds of what today's technology can safely and reliably do." Anthropic offered a clear alternative: if the government disliked those terms, the company would understand if it chose another vendor.
The Pentagon did not take that offer. Instead, on February 27, 2026 at 2:14 p.m. PT, Hegseth posted what he described as a "final" decision on X, designating Anthropic a Supply-Chain Risk to National Security and directing that no contractor, supplier, or partner doing business with the US military could conduct commercial activity with Anthropic. President Trump followed with a post on Truth Social calling Anthropic a "radical left, woke company" putting "selfishness" above national security. Neither post cited any legal authority for the actions.
A cascade of government actions
The consequences followed rapidly. According to the preliminary injunction order and the complaint filed with the court, the General Services Administration removed Anthropic from USAi.gov and the Multiple Award Schedule and terminated the OneGov contract on the same day as Hegseth's post. HHS internally disabled enterprise Claude. On March 2, 2026, Treasury Secretary Bessent and the Federal Housing Finance Agency announced termination of all Anthropic use; the State Department switched its chatbot from Claude to OpenAI.
Three trade deals were promptly cancelled after the blacklisting, while other potential partners delayed talks, according to Lin's findings. The company demonstrated it was already suffering irreparable harm that would only worsen - including potentially losing billions in private and government contracts it had expected to sign over the next five years.
On March 3, 2026, Hegseth formally signed a supply chain risk letter under 10 U.S.C. § 3252. That letter reached Anthropic on March 4, nearly a week after the social media announcement had already caused commercial damage. Anthropic filed suit on March 9, 2026, in the Northern District of California, naming more than a dozen federal agencies and 18 individual officials as defendants.
The court proceedings
The docket record for case 3:26-cv-01996 shows the case moved with unusual speed. On the day the complaint was filed, March 9, Judge Lin set a status conference for the following day via Zoom. That conference, lasting 21 minutes according to the court minutes, set the preliminary injunction hearing for March 24, 2026 in Courtroom 12, 19th Floor, in San Francisco.
The case attracted an extraordinary volume of amicus briefs. By March 13, 2026, the court had accepted briefs from Former Service Secretaries and Retired Senior Military Officers, the Cato Institute, the Electronic Frontier Foundation, the Foundation for Individual Rights and Expression, the First Amendment Lawyers Association, Microsoft Corporation, the American Federation of Government Employees, Industry Trade Associations, ACT The App Association, Catholic Moral Theologians and Ethicists, Former Senior National Security Government Officials, the Abolitionist Law Center, and the Freedom Economy Business Association. Individual employees of OpenAI and Google, filing in their personal capacities, also submitted an amicus brief.
The retired military officers' brief carried particular weight. According to Lin's opinion, those leaders warned that allowing the Hegseth directive to stand "will materially detract from military readiness and operational safety" - a direct rebuttal to the Department's claim that the injunction would disrupt US military operations.
The hearing on March 24 lasted 1 hour and 35 minutes. Anthropic's legal team included Michael Mongan, Joshua Geltzer, Lauren Moxley Beatty, Susan Hennessey, Sonika Data, Brian Israel, Jeffrey Bleich, and Aparna Sridhar. The government was represented by Eric Hamilton and Christian Dibblee. The court directed Anthropic to file an additional declaration by 6 p.m. that day, with the government's response due by 6 p.m. the following day. Judge Lin then took the matter under submission.
The "I don't know" moment
Lin's written opinion laid out the government's legal case with notable precision - and found it lacking at every turn. During oral arguments, a government lawyer admitted that he was not aware of any statute that gave Secretary Hegseth the authority to issue the prohibition and agreed that the social media statement had "absolutely no legal effect at all," according to the opinion. When the judge asked why Hegseth made a public statement that had no legal effect and did not reflect the Department's immediate intent, the government's counsel stated, "I don't know."
The supply chain risk designation under 10 U.S.C. § 3252 was designed to address foreign intelligence agencies, terrorists, and other hostile actors. According to Lin, the designation had never previously been applied to a domestic company. Yet the Department's own records showed it applied that label to Anthropic because of the company's "hostile manner through the press" - not because of any demonstrated technical security risk.
The only substantive national security argument the government advanced was that Anthropic could theoretically update its products in a way that would compromise government systems. Lin dismissed that claim. Any IT provider could potentially introduce similar risks, she noted. More concretely, Anthropic had presented unrebutted evidence that it would be impossible for the company to force updates onto, or otherwise control, the government's systems. The simplest solution - terminating the contract, which Anthropic itself had agreed would be understandable - would foreclose any such risk entirely.
Hegseth had also publicly described Anthropic's technology as having "exquisite capabilities" while simultaneously designating it a grave national security threat - and, according to the complaint, the Department reportedly used Claude in a major air operation in Iran on February 28, the day after the ban was announced. Lin found this internal contradiction itself indicative of the designation's arbitrariness.
First Amendment retaliation: the core legal finding
The judge examined three legal theories: whether the government violated Anthropic's First Amendment rights, denied it due process, or acted arbitrarily and capriciously under the Administrative Procedure Act. At this stage, she concluded Anthropic had shown enough to demonstrate it was likely to succeed on all three.
The First Amendment analysis was the most pointed. Lin found that the Department of War was not authorized to "designate a domestic vendor a supply chain risk simply because a vendor publicly criticized DoW's views about the safe uses of its system." The record showed that the designation was issued because of Anthropic's "hostile manner through the press," not because of any objective security assessment. Hegseth's own public statements, according to Lin, "expressly tied Anthropic's punishment to its attitude and rhetoric in the press."
That, the judge wrote, was "classic First Amendment retaliation." More broadly, she warned that government officials labeling vendors as "adversaries" for speaking out about safety concerns could "chill open deliberation" and "professional debate" among those "best positioned to understand AI technology" and its potential for "catastrophic misuse."
Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, which filed an amicus brief in support of Anthropic, described the decision as having implications beyond the immediate case. According to NPR, Huddleston said the ruling "is really diving into some of those classic questions of ensuring that there's not retaliation against a company or an individual for exercising their First Amendment rights, and also ensuring that when such significant decisions are made, things that could be potentially crippling to a business, that the adequate due process is followed."
The government's response and next steps
Under Secretary of War Emil Michael, in statements posted to X, called Lin's order "a disgrace" and claimed it contained "factual errors" resulting from what he characterized as the judge's rush to issue the injunction. He emphasized that the supply chain risk designation would remain in effect during the seven-day administrative stay. The government did not immediately respond to requests for comment through official channels, but indicated it was conducting an audit to assess whether any security risks could justify the designation.
The preliminary injunction order is explicit about what it does and does not require. The order restores the status quo as it existed on February 27, 2026, before the presidential directive and Hegseth directive were issued. It does not require the Department of War to use Anthropic's products or services. It does not prevent the Department from transitioning to other AI providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions. Defendants must file a status report by April 6, 2026, certifying compliance with the order. Anthropic must pay a bond of $100 to the clerk of the court by the same date.
Anthropic's response was measured. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," the company's spokesperson said, as quoted by NPR.
Why this matters for technology and marketing professionals
The Anthropic case has implications that extend well beyond a single AI company's contract dispute. The legal question at its center - whether a private technology company can set limits on how its products are used, and whether the government can punish that company for publicly asserting those limits - will shape how AI vendors of all kinds negotiate with government customers going forward.
For marketing and advertising technology professionals, the ruling matters on several levels. Anthropic's Claude is extensively used in enterprise and marketing contexts. The company completed a $13 billion Series F funding round at a $183 billion valuation in September 2025, with Claude revenue growing from $1 billion to over $5 billion in eight months, as reported by PPC Land in its coverage of the initial confrontation. A sustained blacklisting - cutting off any business that also works with the Pentagon - would have cascaded through the technology supply chain in ways that could have affected marketing and advertising technology companies with government exposure.
The First Amendment dimension also has implications for enterprise technology generally. If agencies can designate domestic vendors as security threats in response to public criticism of government procurement positions, the chilling effect could suppress legitimate technical discourse about AI safety, privacy, and product capabilities across the industry.
The case is not resolved. The preliminary injunction covers only the period until the court can decide the merits of the underlying claims. The government has seven days from March 26 to seek an emergency stay from the Ninth Circuit Court of Appeals. The litigation will continue.
Timeline
- March 2025 - The Department of War begins using Anthropic's Claude; deployment proceeds for roughly a year without security concerns being raised
- Fall 2025 - Anthropic enters negotiations to place Claude on the Department's "GenAI.mil" platform; the Department asks Anthropic to remove its usage policy restrictions and allow "any lawful use"
- September 2025 - Anthropic closes a $13 billion Series F funding round at a $183 billion valuation
- Early January 2026 - Secretary Hegseth issues an internal memorandum directing procurement offices to incorporate "any lawful use" language into all AI contracts within 180 days
- February 16, 2026 - Axios reports the Pentagon is threatening to designate Anthropic a supply chain risk
- February 19, 2026 - Pentagon CTO publicly describes Anthropic's usage restrictions as "not democratic"
- February 24, 2026 - Hegseth meets with CEO Dario Amodei; presents a four-day ultimatum to remove AI safeguards or face designation as a supply chain risk
- February 26, 2026 - Amodei publishes a public statement refusing the Pentagon's demands; says Anthropic "cannot in good conscience accede" to removing the two restrictions
- February 27, 2026, 2:14 p.m. PT - Hegseth posts on X designating Anthropic a "Supply-Chain Risk to National Security"; Trump posts on Truth Social calling Anthropic a "radical left, woke company"; GSA removes Anthropic from federal purchasing schedules and terminates the OneGov contract; HHS internally disables enterprise Claude
- February 28, 2026 - Reports emerge that the Department used Anthropic tools in a major air operation in Iran, hours after the ban announcement
- March 2, 2026 - Treasury Secretary Bessent and the Federal Housing Finance Agency announce termination of all Anthropic use; State Department switches its chatbot from Claude to OpenAI
- March 3, 2026 - Hegseth signs a formal supply chain risk letter to Anthropic under 10 U.S.C. § 3252; Defense One reports Pentagon officials confirm the designation was "ideologically driven" with "no evidence of supply-chain risk"
- March 4, 2026 - The formal supply chain risk letter is received by Anthropic, nearly a week after the social media announcement
- March 9, 2026 - Anthropic files Case 3:26-cv-01996 in the Northern District of California, naming more than a dozen federal agencies and 18 individual officials as defendants; Judge Rita F. Lin is assigned; a status conference is set for the following day
- March 10, 2026 - Status conference held via Zoom (21 minutes); preliminary injunction hearing set for March 24; amicus brief filings begin from Microsoft, former military leaders, and civil liberties organizations
- March 12-13, 2026 - Court accepts amicus briefs from retired military officers, Cato Institute, Electronic Frontier Foundation, Microsoft, American Federation of Government Employees, and others
- March 17, 2026 - Government files its opposition to the preliminary injunction motion
- March 20, 2026 - Anthropic files its reply; court sets deadline for government evidence responses by March 24 at 9:00 a.m.
- March 24, 2026 - Preliminary injunction hearing held for 1 hour and 35 minutes in Courtroom 12, 19th Floor, San Francisco; court takes matter under submission
- March 26, 2026 - Judge Lin issues the preliminary injunction order (Document 135, Case 3:26-cv-01996), blocking the presidential directive and Hegseth directive; the order is stayed for seven days
- March 27, 2026 - Government's Under Secretary Emil Michael calls the ruling "a disgrace"; court issues additional procedural orders; defendants have until April 6, 2026 to file a compliance status report
Summary
Who: Anthropic PBC, a San Francisco-based AI company, is the plaintiff. The defendants are the US Department of War, Secretary of War Pete Hegseth, President Trump (through the presidential directive), and more than a dozen other federal agencies and officials including Treasury Secretary Scott Bessent, Secretary of State Marco Rubio, and the heads of the GSA, HHS, NASA, the SEC, and the Federal Reserve.
What: US District Judge Rita F. Lin of the Northern District of California granted a preliminary injunction blocking the Trump administration from enforcing a February 27, 2026 presidential directive ordering all federal agencies to stop using Anthropic's Claude AI, and from implementing a separate Department of War directive designating Anthropic a "supply chain risk to national security." The judge found the actions likely constituted First Amendment retaliation, denied Anthropic due process, and were arbitrary and capricious under the Administrative Procedure Act.
When: The preliminary injunction was signed and filed on March 26, 2026. The underlying dispute became public on February 26-27, 2026. Anthropic filed its lawsuit on March 9, 2026. The order is stayed for seven days, with compliance status reports due April 6, 2026.
Where: The case is pending in the US District Court for the Northern District of California, Case 3:26-cv-01996-RFL, before Judge Rita F. Lin. The dispute originated in Washington, DC, in negotiations over deploying Claude on the Department of War's GenAI.mil platform.
Why: The confrontation began when Anthropic refused to remove two usage restrictions from its military contracts - a prohibition on using Claude for mass surveillance of Americans and a prohibition on its use in fully autonomous lethal weapons - on the grounds that current AI technology cannot reliably perform those functions without risk of harm. The Department of War responded with escalating threats and ultimately designated Anthropic a supply chain risk. Anthropic sued, arguing that the designation was retaliation for publicly criticizing the government's position, rather than a genuine security assessment. The judge agreed at the preliminary injunction stage.