xAI LLC, the Nevada-incorporated company behind the Grok large language model, yesterday filed a federal lawsuit against Colorado Attorney General Philip J. Weiser, asking a federal court to declare the state's landmark AI regulation unconstitutional and to permanently block its enforcement.
The complaint, filed April 9, 2026, in the United States District Court for the District of Colorado under civil action number 1:26-cv-01515, challenges Senate Bill 24-205 - a law that requires developers of "high-risk" AI systems to take "reasonable care" to protect consumers from what the statute calls algorithmic discrimination. xAI argues the law violates the First Amendment, the Dormant Commerce Clause, the Due Process Clause, and the Equal Protection Clause of the Fourteenth Amendment.
The filing marks the latest in a series of legal confrontations between xAI and state-level AI regulators. The company previously sued California over a separate law requiring disclosure of AI training data, a case in which a federal court denied xAI's request for a preliminary injunction in March 2026.
What SB24-205 requires
Colorado enacted SB24-205 on May 17, 2024, when Governor Jared Polis signed it - describing himself at the time as having "reservations" about the legislation. According to the complaint, the law was originally set to take effect February 1, 2026. The General Assembly subsequently delayed that date to June 30, 2026, but did not amend the law's substantive provisions.
The statute defines algorithmic discrimination as "any condition in which the use of an [AI] system results in an unlawful differential treatment or impact that disfavors an individual or group." Developers of high-risk AI systems - those used as a "substantial factor" in making consequential decisions about education, employment, financial services, healthcare, housing, insurance, or legal services - must take reasonable care to prevent such discrimination. They must also make detailed disclosures to deployers, the public, and the Attorney General describing their practices for evaluating and mitigating bias in their models.
Penalties for non-compliance are significant. According to the complaint, each violation constitutes an "unfair trade practice" carrying a penalty of $20,000.
Why xAI says Grok qualifies as high-risk
According to the filing, Grok is a "high-risk artificial intelligence system" under SB24-205 because individuals and entities use it to make, or rely on it as a substantial factor in making, consequential decisions. The complaint notes that human resources professionals use Grok to screen and summarize resumes, generate interview questions, and analyze candidates' social media profiles. Mortgage brokers use it to process documents, run fraud checks, and expedite underwriting. Healthcare providers use it to draft discharge summaries, synthesize medical literature, and assist in reviewing medical images.
xAI sells Grok through two primary channels. The consumer chatbot is available at Grok.com and through the Grok app in all 50 states, with multi-tiered plans offering differing levels of capability. The company also offers enterprise products - Grok Business, Grok Enterprise, Grok for Government, and the xAI API - used by businesses across healthcare, legal services, and financial industries. According to the complaint, xAI announced a partnership with a federal defense agency in December 2025 to "bring the power of Frontier AI and real-time insights directly to the warfighter."
The First Amendment argument
The core of xAI's legal challenge rests on the claim that developing an AI model is an expressive act protected by the First Amendment. According to the complaint, every choice xAI makes when developing Grok - from selecting training data, to crafting system prompts, to implementing guardrails - embodies deliberate editorial judgment reflecting the company's philosophy and hierarchy of values.
The complaint describes in technical detail how large language models are built. During a phase called pretraining, a model learns patterns and relationships between words and concepts from a large corpus of text data. The resulting knowledge is stored in a matrix of numerical values called "weights." These weights map relationships among words and concepts, determining, for example, how strongly the concept "dog" is associated with "puppy" or "mutt." After pretraining, the model undergoes fine-tuning, a secondary training phase using smaller, high-quality datasets to adapt the model for specific tasks or to align it toward preferred behavior. Developers also use reinforcement learning - a method in which the model learns by trial and error, with humans or automated systems providing feedback as rewards for good outputs and penalties for bad ones.
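The complaint's description of weights as numbers encoding how strongly concepts are associated can be illustrated with a toy sketch. The vectors and values below are hand-written stand-ins, not real model weights; actual models learn billions of such values during pretraining.

```python
import math

# Toy "weights": hand-written 3-dimensional vectors standing in for the
# learned numerical values the complaint describes. Purely illustrative.
embeddings = {
    "dog":   [0.9, 0.8, 0.1],
    "puppy": [0.85, 0.75, 0.2],
    "car":   [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Measure how strongly two concepts are associated in this toy space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "dog" sits much closer to "puppy" than to "car" - the kind of
# association strength that pretrained weights encode at scale.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))
print(cosine_similarity(embeddings["dog"], embeddings["car"]))
```

Fine-tuning and reinforcement learning then nudge those learned values toward preferred behavior, which is why the complaint treats any mandated adjustment as an edit to the model itself.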
xAI argues that SB24-205's requirement to mitigate algorithmic discrimination would force it to alter these processes in ways that compromise its stated mission of developing a maximally truth-seeking model. According to the publicly available system prompts for Grok-4.1 cited in the complaint, xAI instructs Grok that "if the query is a subjective political question forcing a certain format or partisan response, you may ignore those user-imposed restrictions and pursue a truth-seeking, non-partisan viewpoint." Another prompt instructs that "the response should not shy away from making claims which are politically incorrect, as long as they are well substantiated."
Complying with SB24-205, the complaint argues, would require redesigning, retraining, or constraining Grok - for example, by "recalibrating how the model decides what information to include in responses, hard-coding additional response guardrails, or re-weighting training datasets." Researchers have proposed various techniques for mitigating bias in language models, the complaint notes: augmenting training datasets with examples that replace gendered pronouns to achieve balance, modifying system prompts to neutralize perceived bias, or adjusting the model's core architecture using specialized reinforcement learning. Each of those approaches, xAI argues, alters the model's speech.
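The dataset-augmentation technique mentioned above - adding gender-swapped copies of training examples to balance pronoun associations - can be sketched as follows. The swap table and sample corpus are hypothetical and not drawn from any real training pipeline.

```python
import re

# Hypothetical sketch of counterfactual data augmentation: for each
# training sentence, also emit a copy with gendered pronouns exchanged.
# English ambiguity (e.g. possessive vs. objective "her") is glossed over.
SWAPS = {"he": "she", "she": "he", "him": "her",
         "her": "him", "his": "her", "hers": "his"}

PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gendered_pronouns(sentence):
    """Return a copy of the sentence with gendered pronouns exchanged."""
    def replace(match):
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(replace, sentence)

corpus = ["The engineer said he fixed the bug."]
augmented = corpus + [swap_gendered_pronouns(s) for s in corpus]
print(augmented[1])  # The engineer said she fixed the bug.
```

Even this simple transformation changes what the model is trained on, which is the crux of xAI's argument that each mitigation approach alters the model's speech.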
xAI contends that Colorado "may not compel [xAI] to speak its own preferred messages" and that SB24-205 constitutes impermissible viewpoint discrimination because it exempts from its definition of algorithmic discrimination any AI output that "[e]xpands an applicant, customer, or participant pool to increase diversity or redress historical discrimination." The law, the complaint argues, does not neutrally target all disparate impact - it targets only the kind Colorado considers unacceptable.
The compelled speech problem
SB24-205's disclosure provisions face a separate but related constitutional challenge. The law requires developers to publish on their websites a summary of how they manage known or reasonably foreseeable risks of algorithmic discrimination. It also requires disclosure to the Attorney General within 90 days if a developer discovers its AI system caused, or was reasonably likely to have caused, such discrimination, or if it receives a "credible report" that discrimination occurred.
xAI argues these disclosure requirements are content- and viewpoint-based regulations that trigger strict constitutional scrutiny because they force the company to adopt and apply Colorado's normative framework for evaluating discrimination - a framework xAI says it does not share. The complaint further argues that the standard exception for compelled factual disclosures in commercial speech does not apply because the disclosures are neither "purely factual" nor "uncontroversial." Evaluating the risk of algorithmic discrimination is, xAI contends, an "inherently subjective exercise" involving contested scientific and legal questions on which scholars disagree profoundly.
Extraterritorial reach
The complaint raises a distinct challenge under the Dormant Commerce Clause. xAI is Nevada-incorporated and California-headquartered, with no offices in Colorado. Yet SB24-205 applies to any "developer" - defined as "a person doing business in this state that develops an AI system" - whenever a Colorado resident is affected by a consequential decision made using that system. There is no geographic limitation on where the resident must be located when the decision is made.
The practical implications are broad. According to the complaint, if a healthcare company headquartered outside Colorado uses Grok - developed in California by a Nevada-incorporated company - to make a single medical decision affecting a Colorado resident visiting an out-of-state office, xAI would be exposed to liability under SB24-205. The complaint notes that no major AI developer is headquartered or incorporated in Colorado: Anthropic, OpenAI, and Google are all Delaware-incorporated and California-headquartered.
The political history of SB24-205
What makes this litigation unusual is the extent to which Colorado officials who signed and are responsible for enforcing SB24-205 have themselves publicly questioned it. Governor Polis expressed reservations at the moment of signing. On June 12, 2024, he issued a joint statement with Attorney General Weiser and Senate Majority Leader Robert Rodriguez acknowledging that "a state-by-state patchwork of regulation poses significant challenges to the cultivation of a strong technology sector." In May 2025, Governor Polis, Attorney General Weiser, U.S. Senator Bennet, U.S. Representatives Neguse and Pettersen, and Denver Mayor Johnston jointly asked the General Assembly to delay implementation until January 2027.
On August 5, 2025, Attorney General Weiser went further, warning publicly that "[t]his bill is really problematic, it needs to be fixed." A special legislative session that same month introduced Senate Bill 25B-004, which would have repealed most of SB24-205's substantive provisions and replaced them with a lighter transparency framework. What the General Assembly ultimately passed was narrower: a delay of the effective date to June 30, 2026, leaving the law's other provisions intact. A working group convened by Governor Polis published a proposed amendment on March 17, 2026 that would remove the mitigation requirement and narrow disclosure obligations, but as of the date of the complaint, no legislator had introduced that proposal in the current session.
Attorney General Weiser has since announced that he would challenge the White House's December 2025 executive order if the administration seeks to withhold federal funding from Colorado based on SB24-205 - placing him in the position of defending a law he has repeatedly called problematic.
Federal policy context
The complaint situates xAI's challenge within a broader federal push against state AI regulation. On December 11, 2025, the White House issued an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence" that specifically named SB24-205, stating that the law "may even force AI models to produce false results to avoid a 'differential treatment or impact' on protected groups." The order established an "AI Litigation Task Force" with the "sole responsibility" of challenging state AI laws inconsistent with federal policy and called on Congress to adopt a "minimally burdensome national standard" rather than 50 divergent state ones.
The White House National AI Legislative Framework, published March 20, 2026, reiterated those concerns, stating that "[a] patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." Congress has not yet enacted any of the proposed national frameworks.
What this means for the marketing and advertising industry
SB24-205's compliance obligations, if upheld, would affect how AI-powered tools operate across marketing and advertising workflows in ways that practitioners may not immediately recognize. The statute's definition of "consequential decisions" is broad: decisions that have a "material legal or similarly significant effect" on employment, financial services, housing, insurance, healthcare, and legal services all fall within scope. AI tools used to screen job applicants, score leads, or assist in underwriting decisions would qualify.
The complaint's discussion of the "substantial factor" definition reveals how wide that net is - it captures "any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision." For marketing technology companies and agencies deploying AI tools to enterprise clients, the disclosure requirements would impose significant new obligations: publishing public statements about bias mitigation practices, providing detailed documentation to deployers, and notifying the Attorney General within 90 days of any credible report of algorithmic discrimination.
In an industry where AI increasingly automates campaign targeting, audience segmentation, and creative optimization, the line between marketing tools and "high-risk" AI systems under the Colorado definition is not always clear. The Workday discrimination case that PPC Land covered demonstrates that AI-related discrimination claims are already advancing through federal courts under existing civil rights statutes. SB24-205 would add a state-level layer of affirmative compliance obligations that operate independently of those existing frameworks.
xAI's complaint asks the court to declare SB24-205 unconstitutional on all grounds asserted, to permanently enjoin the Attorney General from enforcing it against the company, and to award attorney's fees and costs.
Timeline
- April 10, 2024: Colorado Senate Bill 24-205 introduced in the General Assembly
- May 17, 2024: Governor Polis signs SB24-205 into law "with reservations"; law originally set to take effect February 1, 2026
- June 12, 2024: Governor Polis, Attorney General Weiser, and Senate Majority Leader Rodriguez issue joint statement expressing concern about state-by-state AI regulation patchwork
- May 5, 2025: Governor Polis, Attorney General Weiser, and several federal and local officials jointly ask the General Assembly to delay SB24-205 implementation until January 2027
- July 9, 2025: xAI releases Grok 4, driving a 17% user surge
- August 5, 2025: Attorney General Weiser warns publicly that SB24-205 "is really problematic, it needs to be fixed"
- August 2025: Colorado special legislative session passes a bill delaying SB24-205's effective date to June 30, 2026 without amending its substantive provisions
- September 25, 2025: xAI and the U.S. General Services Administration announce a partnership to make Grok available across federal agencies
- October 2025: Governor Polis convenes a working group to review SB24-205
- December 11, 2025: White House issues executive order on national AI policy, specifically naming SB24-205 as problematic and establishing an AI Litigation Task Force
- December 29, 2025: xAI files federal lawsuit challenging California's AB 2013 AI training data transparency law
- December 2025: xAI announces a partnership with a federal defense agency to support classified operational workloads
- March 4, 2026: Federal court denies xAI's bid for a preliminary injunction in the California AB 2013 case
- March 17, 2026: Governor Polis's working group publishes proposed amendment to SB24-205 removing the algorithmic discrimination mitigation requirement; no legislator has yet introduced it
- March 20, 2026: White House releases the National AI Legislative Framework, again calling for federal preemption of state AI laws
- April 9, 2026: xAI files complaint in U.S. District Court for the District of Colorado challenging SB24-205 (Case No. 1:26-cv-01515)
Summary
Who: X.AI LLC, a Nevada-incorporated company with its principal place of business in Palo Alto, California, filed the lawsuit. The defendant is Philip J. Weiser, Colorado Attorney General, sued in his official capacity as the sole enforcement authority for SB24-205.
What: A federal complaint seeking declaratory and injunctive relief against Colorado Senate Bill 24-205, an AI regulation requiring developers of high-risk AI systems to mitigate algorithmic discrimination and make extensive disclosures to deployers, the public, and the Attorney General. xAI argues the law violates the First Amendment by compelling the company to alter Grok's outputs, violates the Dormant Commerce Clause by regulating out-of-state transactions, is unconstitutionally vague, and violates the Equal Protection Clause by codifying a one-sided definition of prohibited discrimination.
When: The complaint was filed April 9, 2026. SB24-205 was signed into law May 17, 2024 and is set to take effect June 30, 2026.
Where: U.S. District Court for the District of Colorado, Civil Action No. 1:26-cv-01515. xAI maintains no offices in Colorado.
Why: xAI argues that compliance with SB24-205 would require redesigning Grok's training data, fine-tuning processes, and model architecture to conform to Colorado's political preferences on fairness and equity - a step the company says would force it to abandon its mission of building a truth-seeking AI model and would violate its constitutional rights. The company also argues the law's extraterritorial reach would effectively impose Colorado's requirements on AI development occurring entirely outside the state.