Elon Musk's artificial intelligence company xAI today lost a key legal battle in its fight against California's AI training data transparency law, as a federal judge denied the company's motion for a preliminary injunction that would have halted enforcement of Assembly Bill 2013 while the lawsuit proceeds. The ruling, issued March 4, 2026, by United States District Judge Jesus G. Bernal of the Central District of California, found that xAI had failed to demonstrate the likelihood of success on the merits that such extraordinary relief requires.

The decision does not end the case. It is, however, a significant early setback for xAI and a marker of how courts may approach constitutional challenges to AI governance legislation as state-level regulation of artificial intelligence continues to expand.

What AB 2013 requires

California Governor Gavin Newsom signed Assembly Bill 2013 into law in September 2024. The statute, formally titled "Artificial Intelligence Training Data Transparency," requires developers of generative AI systems and services that are publicly available to Californians to post documentation on their websites describing the data used to train those systems. According to the court order, that documentation must include a high-level summary addressing at least 12 enumerated topics.

Those topics are specific. According to the statute as cited in the order, the required summary must cover:

  • the sources or owners of the datasets
  • a description of how the datasets serve the intended purpose of the AI system
  • the number of data points included, which may be expressed in general ranges for dynamic datasets
  • the types of data points
  • whether the datasets include copyright-protected material or are entirely in the public domain
  • whether the datasets were purchased or licensed
  • whether the datasets include personal information
  • whether the datasets include aggregate consumer information
  • any cleaning or processing applied to the data
  • the time period during which the data was collected
  • the dates the datasets were first used during development
  • whether the system uses synthetic data generation

Three categories of models are exempt: those used solely for cybersecurity and integrity purposes, those used solely for aircraft operations in the national airspace, and those developed exclusively for national security, military, or defense purposes and made available only to federal entities.

The law took effect on January 1, 2026. Its retroactive scope covers AI models released since 2022, even those no longer in regular use.

The lawsuit and the motion

xAI filed its complaint on December 29, 2025, naming California Attorney General Rob Bonta as defendant. The company sought a preliminary injunction - a court order that would have suspended enforcement of AB 2013 pending the outcome of the full case. According to the court record, xAI filed the motion for preliminary injunction on January 16, 2026. California filed its opposition on February 2, and xAI replied on February 9. The court held oral argument on February 23, after which it requested a supplemental filing from the state, received on February 27. xAI responded to that filing on March 2.

The company raised three constitutional objections. First, that AB 2013 violates the Takings Clause of the Fifth Amendment by compelling disclosure of trade secrets without compensation. Second, that it violates the First Amendment by compelling speech. Third, that the statute is unconstitutionally vague.

Notably, xAI had already published a limited disclosure on December 30, 2025 - one day after filing the lawsuit - which it described as a high-level disclosure that does not reveal its trade secrets. The company nonetheless argued that fuller compliance would be required under the statute's terms and that this broader disclosure would expose proprietary information.

PPC Land covered the original lawsuit filing in detail, including xAI's characterisation of the law as an "unconstitutional trade-secrets-destroying disclosure regime."

On standing: the one thing xAI won

Before reaching the constitutional questions, the court considered whether xAI even had standing to bring the case. The state had argued it lacked standing. The court disagreed.

According to the order, xAI's partial compliance on December 30, 2025, combined with its evident desire to disclose no further, established that it faces a credible risk of enforcement. The state had declined to explicitly disavow any intention to pursue enforcement action against xAI, which the Ninth Circuit holds to be sufficient to establish a credible threat. The court found all three standing requirements satisfied: injury in fact, causal connection to the defendant's conduct, and redressability.

This procedural win gives xAI the right to continue the case on its merits. It did not, however, translate into success on the preliminary injunction.

The Takings Clause claim: too abstract

To succeed on its Takings Clause argument, xAI needed to show that the datasets and related information AB 2013 would compel it to disclose qualify as trade secrets under either California or federal law. California law defines a trade secret as information that is valuable because it is unknown to others and that the owner has taken steps to keep secret. Federal law under the Defend Trade Secrets Act requires that the owner has taken reasonable measures to maintain secrecy and that the information derives independent economic value from not being generally known.

The court found xAI's pleadings too general to carry this burden at this stage. According to the order, the company's complaint offered frequent abstraction and hypotheticals rather than specific facts about its own practices. The court pointed to passages in xAI's complaint that acknowledge "many AI companies will have overlap in the datasets they use" while arguing that the differences between datasets are competitively significant - without actually alleging that xAI uses datasets that are unique, meaningfully larger or smaller than competitors' datasets, or that its cleaning methods are distinctive.

The Ninth Circuit requires plaintiffs in trade secret cases to identify the secrets with sufficient particularity to separate them from general knowledge in the field. xAI had not done so. The court acknowledged that it was "not lost on the Court" that datasets could hypothetically constitute trade secrets, but concluded that xAI's generalised pleading prevented a determination in its favour at this stage.

The First Amendment claim: commercial speech under intermediate scrutiny

The First Amendment analysis was the most legally complex portion of the order, and the one where xAI came closest to succeeding - though it still fell short.

xAI argued that AB 2013 is a content-based speech regulation requiring strict scrutiny, which it asserted the law fails. The state countered that the law regulates commercial speech, subject only to intermediate scrutiny under the Central Hudson test.

The court worked through the content-based question first. It found that AB 2013 is best understood as a direct disclosure requirement - compelling a private entity to communicate information directly to the public - rather than a governmental reporting requirement. Under recent Ninth Circuit precedent from Pharm. Research & Manufacturers of Am. v. Stolfi, decided in 2025, such direct disclosure requirements are generally entitled to strict scrutiny unless they qualify as commercial speech.

The court then examined whether AB 2013 regulates commercial speech. The Bolger factors - whether speech is an advertisement, refers to a particular product, and is economically motivated - serve as guideposts here, though they are not dispositive outside the advertising context. The court drew on the Stolfi decision's broader approach, finding that AB 2013 serves the same functional purpose as commercial speech: it provides information to parties in actual or potential commercial transactions about those transactions. According to the order, the law gives the public information necessary to evaluate whether to use a particular AI model relative to competitors' offerings. The California legislature itself described the goal as allowing Californians to make informed decisions about the AI systems they purchase and engage with.

The court rejected xAI's argument that the law's purpose was to identify datasets "riddled with implicit and explicit biases" - noting that this language came from the California Labor Federation's advocacy documents rather than the statute's text or any legislator's statement. Nothing in the statute, the court found, suggests an intent to regulate model outputs by targeting training data; the disclosure requirement operates as a consumer information mechanism.

Having found that AB 2013 likely implicates commercial speech, the court applied the Central Hudson intermediate scrutiny standard rather than the strict scrutiny xAI had urged. Under Central Hudson, the state must show the law directly advances a substantial governmental interest by means no more extensive than necessary. The court found that xAI's argument - that no consumer could make a useful evaluation of AI models from information about training datasets - strained credulity. It acknowledged, however, that the record remains underdeveloped on whether the state's approach is more extensive than necessary.

The bottom line: xAI demonstrated a "distinct possibility" of prevailing on its First Amendment claim. That is not the same as a likelihood of success, which is what a preliminary injunction requires as a threshold matter.

The vagueness claim: premature

On the vagueness challenge, the court found the record insufficiently developed to rule in xAI's favour. xAI argued that terms like "dataset" and "data point" are undefined in the statute and that the non-exhaustive list of required disclosures leaves companies without adequate guidance on what full compliance looks like.

The court was not persuaded. According to the order, xAI uses the word "dataset" throughout its own complaint with apparent ease, which undermines the argument that the term is ambiguous by industry standards. The Ninth Circuit has also held that criteria are not vague simply because they fail to delineate an exhaustive set of factors. A non-exhaustive list still provides fair warning of what is required.

The court acknowledged that some questions - such as whether the statute covers licensed or incorporated systems developed by third parties - may eventually require determination. But xAI had not alleged it actually operates such systems, making the challenge too abstract at this stage for an as-applied claim.

What comes next

The case continues. xAI can develop its record through discovery and potentially bring a renewed motion or proceed to summary judgment or trial. The court's order suggests several areas where a fuller evidentiary record could change the analysis: whether xAI's specific datasets and cleaning methods qualify as trade secrets in a particularised sense, and whether AB 2013's disclosure regime is more extensive than necessary to achieve the state's consumer transparency objective.

For the marketing and advertising technology community, the case matters on at least two levels. AI systems are increasingly embedded in advertising platforms, audience targeting tools, content generation pipelines, and measurement infrastructure. Transparency requirements about the data underlying those systems - who trained them, on what, under what conditions - would affect how vendors disclose capabilities and how brands and agencies evaluate the AI tools they deploy. The Senate's TRAIN Act, introduced in July 2025, reflects a parallel effort at the federal level to require AI developers to disclose copyright use in training datasets.

Whether AB 2013 survives constitutional challenge will depend on how the record develops. But the court's finding that the law likely regulates commercial speech - and therefore faces only intermediate rather than strict scrutiny - is a meaningful early indication of how difficult it will be to overturn such disclosure requirements entirely.

The ruling is the latest episode in a longer string of xAI litigation. The company filed a trade secret lawsuit against a former engineer in August 2025, followed days later by an antitrust suit against Apple and OpenAI. California's own AI regulatory landscape has grown considerably, including a companion chatbot transparency law signed in October 2025 and broader EU-level moves to align data protection frameworks with AI development needs, as seen in proposed GDPR amendments.

Timeline

  • September 28, 2024 - California Governor Gavin Newsom signs Assembly Bill 2013 into law, requiring AI developers to publicly disclose training data documentation.
  • December 29, 2025 - xAI files federal complaint in the Central District of California against Attorney General Rob Bonta, challenging AB 2013 as unconstitutional.
  • December 30, 2025 - xAI publishes a limited high-level disclosure on its website, described as not revealing its trade secrets.
  • January 1, 2026 - AB 2013 takes effect, covering generative AI systems publicly available to Californians and retroactively applying to models developed since 2022.
  • January 16, 2026 - xAI files its motion for preliminary injunction seeking to block enforcement.
  • February 2, 2026 - California files opposition to the motion.
  • February 9, 2026 - xAI files its reply.
  • February 23, 2026 - Court holds oral argument and requests supplemental state filing.
  • February 27, 2026 - State submits supplemental filing.
  • March 2, 2026 - xAI responds to state's supplemental filing.
  • March 4, 2026 - Judge Jesus G. Bernal issues order denying preliminary injunction in case CV 25-12295 JGB (SSCx).

Summary

Who: X.AI LLC (xAI), the artificial intelligence company founded by Elon Musk, and Rob Bonta, Attorney General of the State of California. The presiding judge is United States District Judge Jesus G. Bernal of the Central District of California.

What: A federal court denied xAI's motion for a preliminary injunction that sought to block enforcement of California Assembly Bill 2013, a law requiring AI developers to publicly disclose documentation about the datasets used to train their generative AI systems. The court found that xAI had standing but had not demonstrated a likelihood of success on any of its three constitutional claims - Takings Clause, First Amendment, and vagueness.

When: The order was issued on March 4, 2026. The underlying lawsuit was filed December 29, 2025. The statute itself was signed in September 2024 and took effect January 1, 2026.

Where: United States District Court for the Central District of California, Case No. CV 25-12295 JGB (SSCx).

Why: xAI argues that AB 2013 forces it to disclose trade secrets embedded in its dataset curation and model training processes, and that this compelled disclosure violates the Fifth Amendment's Takings Clause and the First Amendment's protection against compelled speech. The state of California argues the law serves consumer transparency interests by giving the public information needed to evaluate AI systems in a competitive market. The court found xAI's trade secret claims too generalised and its First Amendment claims insufficiently strong for the threshold showing a preliminary injunction requires, while leaving open the possibility that a more developed record could produce different results.
