xAI sues California over law forcing AI firms to reveal training secrets
Elon Musk's xAI files federal lawsuit challenging California's AB 2013, arguing the transparency law violates constitutional protections and hands competitors a roadmap to replicate proprietary AI models.
Elon Musk's artificial intelligence company xAI filed a federal lawsuit on December 29, 2025, challenging California's Assembly Bill 2013, a transparency law requiring AI developers to publicly disclose detailed information about the datasets used to train their models. The complaint, filed in the United States District Court for the Central District of California, argues that the law violates constitutional protections for trade secrets and compels speech in violation of the First Amendment.
According to the 55-page complaint, xAI characterizes AB 2013 as an "unconstitutional trade-secrets-destroying disclosure regime" that threatens the AI industry by forcing companies to reveal confidential information worth billions of dollars. The lawsuit names California Attorney General Rob Bonta as the defendant in his official capacity, seeking declaratory and injunctive relief to prevent enforcement of the law scheduled to take effect on January 1, 2026.
The legal challenge arrives amid broader tensions between AI companies and regulatory authorities over training data transparency. California has intensified its focus on AI regulation throughout 2025, implementing requirements across healthcare, advertising, and consumer protection sectors that reflect growing legislative interest in oversight of artificial intelligence systems.
Trade secrets at the core of AI development
xAI's complaint centers on the economic value of dataset information, which the company argues derives entirely from secrecy. The lawsuit explains that AI companies invest substantial resources identifying high-quality data sources that competitors are not using to train their models. A model trained on data sources its rivals lack, the filing contends, will likely hold a competitive advantage.
The complaint describes how xAI engineers dedicated substantial time, beginning in March and April 2023, to acquiring datasets from various sources across the Internet to develop Grok, the company's flagship AI model. According to the filing, Grok-1 saw an initial limited public release in November 2023, followed by a full public release in March 2024. The company subsequently launched Grok-2 in August 2024, Grok-3 in February 2025, and Grok-4 in July 2025.
"Unsurprisingly, then, one of the keys to an AI developer's success is its ability to find information and sources that its competitors do not have," the complaint states. "Accordingly, businesses like xAI make significant efforts to safeguard information about the datasets they have acquired: their sources, the amount of data they hold, the types of data included in those datasets, and their role in the overall process of developing fully functioning AI models."
The lawsuit argues that if competitors could access information about which datasets xAI uses, they could immediately move to acquire those sources to ensure their models were equally effective. The filing notes that if OpenAI were to discover that xAI was training its models on an important dataset that OpenAI was not using, OpenAI would almost certainly acquire that dataset to train its own model, and vice versa.
What AB 2013 requires from AI developers
California's Assembly Bill 2013, signed into law on September 28, 2024, imposes substantial information-disclosure requirements on developers of generative AI systems. The law requires any developer of a generative artificial intelligence system or service made publicly available to Californians after January 1, 2022, to post documentation on their website regarding the data used to train the system.
The required disclosures include a high-level summary of the datasets used in development, along with 12 specific categories of information:

- The sources or owners of the datasets
- A description of how the datasets further the intended purpose of the AI system
- The number of data points included in the datasets
- The types of data points within the datasets
- Whether the datasets contain data protected by copyright, trademark, or patent
- Whether the datasets were purchased or licensed by the developer
- Whether the datasets include personal information
- Whether the datasets include aggregate consumer information
- Whether there was any cleaning, processing, or other modification to the datasets
- The time period during which the data was collected
- The dates the datasets were first used during development
- Whether the system used synthetic data generation
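For developers mapping these obligations onto a documentation pipeline, the categories amount to a structured record per dataset. Below is a minimal, hypothetical sketch of that record in Python; the statute lists the categories of information but defines no schema, so every field name here is an assumption, not statutory text.

```python
from dataclasses import dataclass


@dataclass
class DatasetDisclosure:
    """One dataset entry in an AB 2013 disclosure document.

    Illustrative only: AB 2013 specifies categories of information
    to disclose but no machine-readable format, so all field names
    here are hypothetical.
    """
    sources_or_owners: list[str]          # who the data came from
    purpose_description: str              # how the dataset furthers the system's intended purpose
    num_data_points: int                  # number of data points in the dataset
    data_point_types: list[str]           # types of data points included
    contains_ip_protected_data: bool      # copyright, trademark, or patent
    purchased_or_licensed: bool           # whether the developer bought or licensed it
    contains_personal_information: bool
    contains_aggregate_consumer_info: bool
    was_cleaned_or_modified: bool         # any cleaning, processing, or other modification
    collection_period: tuple[str, str]    # (start, end) of data collection
    first_used_date: str                  # when the dataset was first used in development
    used_synthetic_data: bool             # whether synthetic data generation was used
```

Even laid out this way, the vagueness objection discussed below is visible: nothing in the statute says how granular an entry in sources_or_owners must be.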
The law provides three narrow exceptions: AI models used solely to help ensure security and integrity, models used solely in the operation of aircraft in the national airspace, and models developed for national security and made available only to a federal entity.
According to xAI's complaint, the law does not define key terms like "datasets" or "data point," nor does it explain how "high-level" a summary must be to satisfy the requirement. The filing argues this vagueness leaves developers guessing whether they can satisfy disclosure obligations by simply noting "the Internet" as a dataset source, or whether they must include specific details like "state and federal court websites" or the "Library of Congress."
Constitutional challenges across multiple fronts
xAI's legal challenge asserts that AB 2013 violates the Takings Clause of the Fifth Amendment, which prohibits the government from taking private property for public use without just compensation. The complaint argues that by compelling xAI to disclose trade secrets about its datasets, the law effects a per se taking because it appropriates xAI's right to exclude others from possessing confidential information.
"The key feature of the trade-secret property right is its secrecy," the complaint states. "By definition, a 'trade secret' is information that derives independent economic value from not being generally known to the public or to other persons who can obtain economic value from its disclosure or use."
The filing explains that trade secrets are protected property rights under both the federal Defend Trade Secrets Act and California law. Courts around the country already recognize that AI training data constitute economically valuable, confidential information warranting protection against disclosure during litigation and discovery proceedings, according to the complaint.
xAI points to court orders in litigation involving OpenAI and Perplexity AI that designated training data as "Highly Confidential – Attorneys' Eyes Only" and "extremely sensitive." The complaint argues that given the obvious economic importance of this highly confidential data, xAI makes significant efforts to prevent disclosure and maintain secrecy through employee confidentiality provisions, access-gated storage systems, role-based access requirements, and time-limited access controls.
The lawsuit also challenges AB 2013 under the First Amendment, arguing that forcing xAI to disclose information about its datasets compels speech in violation of free speech protections. According to the complaint, the law is a content-based regulation that triggers strict scrutiny because it compels xAI to disclose specific content related to its AI models.
The filing argues that AB 2013 compounds First Amendment problems by discriminating based on viewpoint. The law exempts from its requirements AI models with certain favored purposes, including those whose sole purpose is to help ensure security and integrity or the operation of aircraft in the national airspace, or models developed for national security, military, or defense purposes made available only to a federal entity.
"The First Amendment does not permit California to compel private speech based on its perception that certain ideas are important enough to be kept secret, and that other, less-favored ideas are not, in the state's eyes, valuable enough to be kept confidential," the complaint states.
The lawsuit further challenges AB 2013 as unconstitutionally vague under the Due Process Clause of the Fourteenth Amendment. xAI argues the law fails to provide fair notice to a person of ordinary intelligence as to what it requires, citing undefined terms, unclear disclosure standards, and internal inconsistencies about which datasets are covered.

Who benefits from mandatory disclosure?
The complaint questions whether AB 2013's disclosure requirements actually help consumers, arguing that the parties most likely to benefit are competitors rather than the public. According to the filing, consumers are far more interested in evaluating how an end-product performs the tasks it is given than in obtaining technical details about the datasets and processes companies use to train their models.
"While it is unclear what the consumer is even supposed to do with the information AB 2013 requires companies like xAI to disclose, xAI's rivals have both the wherewithal and the motivation to use detailed information about xAI's datasets to replicate xAI's models or improve their own AI systems, thus robbing xAI of a competitive edge in the exponentially growing AI market," the complaint states.
The lawsuit asks why a consumer would care how many data points are in a given training dataset. Without the requisite technical expertise, there is no way for a consumer to know whether an AI model for improving driving directions trained on a dataset of a thousand road maps is better than one trained on ten thousand. The consumer's best metric is the end product, not the amount of data used, according to the filing.
xAI points to its own transparency practices as more valuable to consumers than the information AB 2013 would require. The company releases "Model Cards" for each of its AI models that focus on outputs, including tests evaluating whether models exhibit political bias when asked about controversial topics and how models fare when being pressured to give wrong answers to questions.
The complaint notes that xAI already discloses results of a wide variety of tests, including the political bias of its AI models. These disclosures are appropriately focused on the AI model's outputs – the information actually relayed to users – rather than inputs like exhaustive lists of raw datasets and data points used to develop and refine AI models, which do not give consumers any meaningful way to evaluate effectiveness.
California's stated consumer protection goals
While AB 2013 itself does not contain a statement of purpose, California's legislature stated that the goal is to give consumers of AI systems and services transparency through important documentation about the data used to train the services and systems they are offered. According to legislative analysis documents cited in the complaint, the law's disclosure obligations purportedly aim to help identify and mitigate biases.
The legislature based this on the notion that "garbage in" is "garbage out" – the quality of the data going in affects the quality of the ultimate product, according to assembly floor analysis documents. Yet xAI argues it is hard to see how AB 2013's requirements accomplish that goal, as they do not require AI companies to disclose the kinds of information that consumers typically find useful, such as how well an AI model has performed when given particular tasks.
Regulatory scrutiny of AI training data has intensified globally, with European authorities examining how platforms process personal information for model development. The Irish Data Protection Commission launched an inquiry in April 2025 investigating whether xAI's subsidiary X Ireland UC lawfully processed EU and EEA users' personal data for training Grok large language models.
The lawsuit contends that the only thing AB 2013 seems to do is force developers to provide their competitors with a roadmap to mirror their success. It gives competitors invaluable insight into how an AI model is trained, which datasets are used and which are not, and more – all information that others can exploit for their own competitive advantage.
Investment-backed expectations and retroactive application
xAI's complaint emphasizes that the company invested in and developed its trade secrets starting in March and April of 2023, well before AB 2013 was first introduced in January 2024. The filing argues that California law has protected trade secrets for decades, and the same is true of federal law, giving xAI reasonable expectations that the datasets it used to train AI models would be protected.
"xAI had no reason to know or expect that it would not be able to reap the value of its trade secrets in developing specialized datasets to train its AI models," the complaint states. "That is especially so because the provisions at issue were not introduced until the calendar year after xAI first began acquiring and developing datasets for training its models, and several months after xAI released its first AI model to the public."
The lawsuit notes that California has compounded the disruption of settled expectations by rendering its law retroactive to January 2022. Retroactive statutes raise special concerns, according to the complaint, which argues the retroactive aspect is particularly troubling because it targets investments xAI made in developing trade secrets at a time when the company had no notice whatsoever that the highly confidential information it was developing could become subject to sweeping disclosure obligations.
AB 2013 covers all AI models released since 2022, even if they are no longer regularly used by consumers. The complaint argues this broad scope underscores the disconnect between the law's obligations and the legislature's consumer-transparency goal.
Security measures protecting xAI's datasets
The complaint details extensive security measures xAI has implemented to safeguard its datasets and related information. All employees sign confidentiality provisions when they execute their employment agreements with xAI, particularly those working in the AI development process. These provisions communicate to employees that all parts of the development process are xAI's non-public, proprietary information, that the information is to be used solely for development of xAI's models, and that none of it should be publicly disclosed.
xAI has made sure that its datasets are access-gated and accessible only to individuals with appropriate levels of access, according to the filing. The location of data storage is known only to the individuals who need access for an approved purpose, and xAI's systems alert security when certain datasets are accessed or moved. That alert ensures the appropriate xAI team can contact the individual whose actions were flagged to confirm the access was authorized and that the information is being used only for an authorized purpose.
The company has made clear to employees that they can and should report any unauthorized access they observe to xAI's legal or incident response teams so that the company can actively remedy any potential misuse of its datasets. To enhance security even further, xAI has introduced role-based access requirements, which ensure that an employee's access is limited to the datasets they actually need to use.
xAI also has a robust confidentiality policy that underscores the confidentiality provisions in employees' employment contracts. This policy states explicitly that all information, including xAI's datasets, processes, and methods, is protected and non-public. In addition, xAI will implement time-limited access controls, which will require a user to gain re-approval after a set number of days, so that permission to access data does not extend beyond the needs of a particular project.
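The complaint describes these controls only at a high level. As an illustration of how the two gates it names (role-based grants and time-limited re-approval) might compose with the flagged-access alerting described above, here is a minimal Python sketch; the role names, dataset names, and 30-day window are invented for the example, not drawn from the filing.

```python
from datetime import datetime, timedelta

ACCESS_TTL = timedelta(days=30)  # hypothetical re-approval window

# Role-based grants: which datasets each role may touch (names invented).
ROLE_GRANTS = {
    "pretraining-engineer": {"web-corpus"},
    "eval-engineer": {"benchmark-sets"},
}

# (user, dataset) -> timestamp of the most recent access approval.
approvals: dict[tuple[str, str], datetime] = {}


def approve(user: str, dataset: str) -> None:
    """Record a fresh approval; access lapses ACCESS_TTL later."""
    approvals[(user, dataset)] = datetime.utcnow()


def alert_security(user: str, dataset: str, reason: str) -> None:
    """Stand-in for the flagged-access alerting the complaint describes."""
    print(f"ALERT: {user} -> {dataset}: {reason}")


def can_access(user: str, role: str, dataset: str) -> bool:
    # Gate 1: the user's role must be granted the dataset at all.
    if dataset not in ROLE_GRANTS.get(role, set()):
        alert_security(user, dataset, "role not granted this dataset")
        return False
    # Gate 2: an approval must exist and be younger than ACCESS_TTL,
    # so permission does not outlive the project that justified it.
    approved_at = approvals.get((user, dataset))
    if approved_at is None or datetime.utcnow() - approved_at > ACCESS_TTL:
        alert_security(user, dataset, "approval missing or expired")
        return False
    return True
```

In this sketch, approve("alice", "web-corpus") followed by can_access("alice", "pretraining-engineer", "web-corpus") returns True until the window lapses, after which the access attempt is flagged and re-approval is required.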
Broader AI industry implications
The lawsuit arrives as AI companies face mounting pressure over training data practices across multiple jurisdictions, with several class-action lawsuits alleging copyright infringement and unauthorized use of personal data.
xAI previously filed litigation challenging other companies' market practices, including an antitrust lawsuit against Apple and OpenAI in August 2025 alleging conspiracy to monopolize generative AI chatbot and smartphone markets. The company has also taken legal action to protect its own intellectual property, suing a former engineer in August 2025 for allegedly stealing trade secrets related to Grok technology.
The complaint emphasizes the extraordinary financial investment required for AI development. Advanced AI models can cost hundreds of millions of dollars or more to develop, with xAI investing billions in its intellectual property, according to the filing. The lawsuit notes that maintaining the utmost secrecy in the development of AI models is of critical importance given the competitive landscape.
California's intensifying regulatory focus on AI extends beyond training data disclosure, with updates to the California Consumer Privacy Act taking effect January 1, 2026, that expand requirements around consumer consent and data transfers to third parties. The state has also implemented requirements for companion chatbots to disclose their artificial nature and established frameworks for AI use in healthcare settings.
Relief sought and next steps
xAI requests multiple forms of relief from the court. The company seeks a declaration that AB 2013's provisions effect an uncompensated taking of trade secrets owned by xAI in violation of the Takings Clause of the U.S. Constitution. xAI also requests declarations that AB 2013 unconstitutionally compels speech in violation of the First Amendment and is unconstitutionally vague in violation of the Due Process Clause.
The complaint requests both preliminary and permanent injunctions preventing Attorney General Bonta, as well as all officers, agents, and employees subject to his supervision, direction, or control, from enforcing the provisions of AB 2013 against xAI. The company also seeks costs and reasonable attorney's fees incurred in the action pursuant to federal law and other applicable statutes.
The lawsuit represents the latest in a series of legal challenges AI companies have mounted against regulatory requirements they argue threaten competitive advantages derived from proprietary development processes. AB 2013's broad disclosure requirements and January 1, 2026 effective date create urgency for xAI's constitutional challenge, as the company faces imminent obligations to publicly disclose information it characterizes as fundamental to its business model.
The outcome could establish important precedents for how states can regulate AI development transparency without triggering constitutional protections for trade secrets and free speech. The case also highlights ongoing tensions between public interest in AI transparency and company interests in protecting competitive advantages derived from dataset curation and model training processes.
Timeline
- March-April 2023: xAI begins investing in developing AI models and acquiring datasets for Grok
- November 2023: xAI succeeds in initial limited public release of Grok-1
- January 2024: AB 2013 first introduced in California legislature
- March 2024: xAI launches full public release of Grok-1
- August 2024: xAI releases Grok-2 to the public
- September 28, 2024: California Governor signs Assembly Bill 2013 into law requiring AI developers to disclose training data information
- February 2025: xAI releases Grok-3
- April 11, 2025: Irish Data Protection Commission launches inquiry into Grok LLM training
- July 2025: xAI releases Grok-4, driving 17% user surge
- August 25, 2025: X Corp. and xAI file antitrust lawsuit against Apple and OpenAI
- August 31, 2025: xAI sues former engineer for allegedly stealing trade secrets
- December 29, 2025: xAI files federal lawsuit challenging AB 2013 in U.S. District Court for the Central District of California
- January 1, 2026: AB 2013 scheduled to take effect
Summary
Who: X.AI LLC, the Nevada-based limited liability company with principal place of business in Palo Alto, California, filed the lawsuit against California Attorney General Rob Bonta in his official capacity. xAI produces and develops AI models including the Grok series, which it shares broadly with members of the public.
What: Federal lawsuit challenging California Assembly Bill 2013, a transparency law requiring AI developers to publicly disclose detailed information about datasets used to train generative AI systems. The complaint alleges AB 2013 violates the Takings Clause, First Amendment free speech protections, and Due Process Clause through vague requirements that force disclosure of trade secrets worth billions of dollars without just compensation.
When: The lawsuit was filed December 29, 2025, in the United States District Court for the Central District of California. AB 2013 is scheduled to take effect January 1, 2026, creating immediate urgency for xAI's constitutional challenge. The law applies retroactively to AI systems made publicly available since January 1, 2022.
Where: The case was filed in the U.S. District Court for the Central District of California, where Attorney General Bonta performs his official duties. AB 2013 applies to developers of generative AI systems or services made publicly available to Californians for use, regardless of where the company is located. xAI is organized under Nevada law but has its principal place of business in California.
Why: xAI argues AB 2013 forces the company to reveal confidential information about how it develops, trains, and refines its AI models – all of which are trade secrets fundamental to its business and otherwise protected under state and federal law. The company contends that while purportedly aimed at consumer transparency, the law's primary beneficiaries are competitors who would gain invaluable insight into datasets used, training methodologies, and development processes. xAI characterizes the law as a "trade-secrets-destroying disclosure regime" that threatens the AI industry by eviscerating competitive advantages derived from proprietary dataset curation and model training processes, all without providing just compensation or meaningful consumer protection benefits.