The United States Department of Justice last week entered one of the most consequential AI regulatory battles in the country, filing a complaint in intervention in a federal lawsuit brought by Elon Musk's xAI LLC against Colorado Attorney General Philip J. Weiser. The intervention, submitted on April 24, 2026, to the United States District Court for the District of Colorado under Civil Action No. 1:26-cv-01515-DDD-CYC, marks the first time the federal government has directly joined litigation to contest a state AI law. The target is Colorado Senate Bill 24-205, a consumer protection measure signed in May 2024 that is set to take effect on June 30, 2026.

This is not a minor procedural filing. The DOJ is asking the court for declaratory and injunctive relief - in other words, a ruling that the law is unconstitutional and an order blocking its enforcement entirely. According to the complaint, Acting Attorney General Todd Blanche certified in writing on April 24, 2026, that this case "is of general public importance," a certification required under 42 U.S.C. Section 2000h-2 for the United States to intervene in equal protection litigation.

What SB24-205 actually requires

Understanding the federal government's intervention requires a clear-eyed look at what Colorado's law does at a technical level. SB24-205 targets "high-risk artificial intelligence systems" - defined as any AI system that, when deployed, makes or is a substantial factor in making a "consequential decision." According to the complaint, a consequential decision covers eight specific categories: education enrollment or an education opportunity; employment or an employment opportunity; a financial or lending service; an essential government service; healthcare services; housing; insurance; or a legal service.

The statute defines a "substantial factor" as any factor that assists in making a consequential decision, is capable of altering the outcome of that decision, and is generated by an AI system. Crucially, the definition extends to any use of an AI system to generate content, a decision, a prediction, or a recommendation concerning a consumer that is then used as the basis for a consequential decision. The scope is broad. An AI-generated credit score interpretation, a model that ranks job applicants, a system that recommends insurance pricing - all of these would potentially qualify.

Developers under the statute are defined as any person doing business in Colorado that develops or intentionally and substantially modifies an AI system. Deployers are those who use a high-risk AI system. Both face obligations, though deployers carry a heavier administrative burden.

For developers, SB24-205 imposes two main duties. The first is a duty of care: developers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of "algorithmic discrimination" arising from the intended and contracted uses of their systems. The second is a set of disclosure duties. According to the complaint, a developer must disclose to the Colorado Attorney General - in a form and manner the Attorney General prescribes - any known or reasonably foreseeable risks of algorithmic discrimination, without unreasonable delay. Developers must also make available to deployers a detailed package of documentation covering the type of training data used, known risks of algorithmic discrimination, mitigation measures taken, the system's intended purpose, how the system was evaluated for bias mitigation before deployment, and any information reasonably necessary to help deployers monitor ongoing performance for discrimination risks.

Deployers face additional obligations. Beyond the same duty of care as developers, deployers must implement a risk management policy and program that specifies and incorporates the principles, processes, and personnel the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. According to the complaint, that program must be an iterative process, planned, implemented, and regularly and systematically reviewed and updated.

The assessment obligations are particularly demanding. At least annually - and within 90 days after any intentional and substantial modification to an AI system - a deployer must complete an impact assessment that includes an analysis of whether the deployment poses any known or reasonably foreseeable risks of algorithmic discrimination, and what steps have been taken to mitigate those risks. If a deployer discovers that a deployed system has actually caused algorithmic discrimination, notice must be sent to the Colorado Attorney General within 90 days of that discovery.
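The cadence described above reduces to a simple scheduling rule: an assessment comes due annually, or 90 days after any intentional and substantial modification, whichever arrives first. A minimal sketch of that arithmetic (the function name and data shapes are illustrative, not anything the statute prescribes):

```python
from datetime import date, timedelta

def next_assessment_due(last_assessment: date,
                        modifications: list[date]) -> date:
    """Earliest upcoming deadline under SB24-205's cadence:
    one year from the last assessment, or 90 days after any
    intentional and substantial modification, whichever is sooner."""
    annual = last_assessment + timedelta(days=365)
    deadlines = [annual] + [m + timedelta(days=90)
                            for m in modifications if m > last_assessment]
    return min(deadlines)

# A system last assessed January 15, 2026, then substantially
# modified on March 1, 2026, owes a new assessment by May 30, 2026
# (the 90-day trigger beats the annual one).
due = next_assessment_due(date(2026, 1, 15), [date(2026, 3, 1)])
```

The point of the sketch is that the 90-day modification trigger, not the annual cycle, will usually govern for systems under active development.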

Deployers must also publish on their websites a statement summarizing how they manage known or reasonably foreseeable risks of algorithmic discrimination from each high-risk system they deploy, and they must update that statement periodically. The consumer-facing disclosure requirements apply to all deployers. Smaller deployers - those with fewer than 50 employees who meet specific conditions - may be partially exempt from the risk management policy, impact assessment, and website disclosure requirements, but cannot avoid the consumer notice obligations entirely.

The Equal Protection argument

The DOJ's legal challenge rests primarily on the Equal Protection Clause of the Fourteenth Amendment, and the theory is specific. According to the complaint, SB24-205 defines algorithmic discrimination as including any use of an AI system that results in an unlawful differential treatment or impact that disfavors an individual or group on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, or veteran status.

The problem the DOJ identifies is structural. The statute imposes liability based on statistical disparities alone - regardless of whether the AI developer or deployer intended any discrimination. To avoid that liability, a developer or deployer must analyze outputs, identify statistical disparate impacts, and then recalibrate the algorithm to eliminate the disparity. The DOJ argues this necessarily means making decisions based on protected demographic characteristics - which is itself a form of discrimination compelled by the state.
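The kind of statistical screening at issue can be illustrated with the conventional four-fifths (80%) rule long used in disparate-impact analysis; the data, threshold, and function names here are illustrative, not anything SB24-205 itself prescribes:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Per-group selection rate from (selected, total) counts."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Under the conventional four-fifths rule, a ratio below 0.8 is
    treated as evidence of adverse impact against the lower-rate group."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (selected, applicants) per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratio = disparate_impact_ratio(outcomes)  # 0.30 / 0.48 = 0.625, below 0.8
```

A developer running this kind of audit and finding a ratio below the threshold faces exactly the choice the DOJ describes: adjust the model with reference to group membership, or accept the statistical disparity.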

The complaint illustrates this with a concrete example. If an algorithm used by employers to screen job applicants inadvertently disadvantages white Americans by primarily selecting applicants from minority zip codes or with names more common among minorities, a developer or deployer must alter the algorithm to eliminate the unintentional disparate impact. To achieve that correction, the developer or deployer must recalibrate the algorithm to be more favorable to white Americans. In zero-sum contexts such as hiring and student admissions, making the algorithm more favorable to one demographic group necessarily means making it less favorable to another.

According to the complaint, the Equal Protection Clause "precludes Colorado's attempt to force discriminatory ideology on the AI industry."

The DOJ raises a second distinct constitutional objection under what it calls "authorized discrimination." SB24-205 explicitly exempts from the definition of algorithmic discrimination the offer, license, or use of AI systems for the sole purpose of expanding an applicant, customer, or participant pool to increase diversity or to redress historical discrimination. According to the complaint, this means liability under SB24-205 depends on which demographic group is favored by a given output - a viewpoint-based distinction that the DOJ argues violates the Equal Protection Clause independently of the disparate-impact mechanism. Courts have recognized only two compelling interests that can justify race-based government action: remediating specific, identified instances of past discrimination that violated the Constitution or a statute, and avoiding imminent and serious risks to human safety. The DOJ's complaint contends that SB24-205 serves neither.

The federal policy context

The DOJ's move does not come from nowhere. According to the White House AI framework published in March 2026, the Trump administration has been pushing for federal preemption of state AI laws, arguing that state-by-state regulation creates a patchwork of up to 50 different regulatory regimes that increases compliance burdens, particularly for start-ups.

The president's Executive Order No. 14365, titled Ensuring a National Policy Framework for Artificial Intelligence and issued on December 11, 2025, is quoted directly in the DOJ's complaint: "United States leadership in [AI] will promote United States national and economic security across many domains." That same executive order specifically named Colorado's prohibition on algorithmic discrimination as a statute that could force AI models to produce inaccurate results.

The complaint also cites the July 2025 document titled "Winning the Race, America's AI Action Plan" from the Executive Office of the President, which states that "whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits." The intervention is, in that sense, a policy statement as much as a legal one - an assertion that state-level AI regulation, even regulation framed as civil rights protection, conflicts with federal interests in AI competitiveness.

Colorado's own reservations

What makes this situation unusual is that Colorado officials have themselves expressed doubts about SB24-205. According to the complaint, both the Governor and the Attorney General of Colorado have repeatedly acknowledged that SB24-205 is deeply misguided and have called for revisions. Governor Jared Polis, who signed the bill in May 2024, described himself at the time as having reservations about the legislation. A working group convened by Governor Polis published proposed amendments in March 2026 that would remove the algorithmic discrimination mitigation requirement entirely - but as of the filing date, no legislator had introduced the amendment in the General Assembly.

The law's effective date was already delayed once. Originally set to take effect on February 1, 2026, the Colorado General Assembly pushed that date to June 30, 2026 - without amending the substantive provisions. The law remains, as written, on a collision course with federal constitutional claims, federal policy, and the xAI lawsuit that the DOJ has now joined.

xAI's position and the broader litigation

xAI filed its original federal lawsuit on April 9, 2026, raising several constitutional arguments. Beyond the Equal Protection claims now shared by the DOJ, xAI's complaint argued that SB24-205 violates the First Amendment by compelling the company to alter Grok's outputs, violates the Dormant Commerce Clause by regulating out-of-state transactions, is unconstitutionally vague under the Due Process Clause, and is unconstitutionally overbroad.

The DOJ's complaint in intervention does not simply repeat xAI's arguments. It narrows to the Equal Protection Clause - the claim for which the United States has a specific statutory right to intervene under 42 U.S.C. Section 2000h-2. But the complaint notes explicitly that SB24-205 is unconstitutional in other ways too. By requiring AI developers and deployers to mitigate the risk of algorithmic discrimination, it compels them to accommodate particular messages, censors their speech based on content, and chills their speech by requiring specific editorial decisions about training data, prompts, and model constraints - all to generate what the complaint describes as Colorado's preferred expressive outputs.

xAI is no stranger to constitutional litigation over AI laws. The company filed a federal lawsuit in December 2025 against California over AB 2013, a transparency law requiring disclosure of AI training data. A federal court denied xAI's request for a preliminary injunction in that case in March 2026. The company also filed an antitrust lawsuit in August 2025 against Apple and OpenAI, seeking over $1 billion in damages.

Why this matters for marketing and ad tech

State AI regulation has become a material compliance concern for any company deploying AI at scale, including in digital advertising. The SB24-205 definition of "high-risk artificial intelligence system" is capacious enough to potentially cover AI tools used in employment hiring pipelines, financial services, and insurance - all domains where algorithmic systems are increasingly embedded in commercial decision-making.

As PPC Land has documented in coverage of the xAI lawsuit, the disclosure requirements under SB24-205 would, if they survive legal challenge, impose significant operational obligations on marketing technology companies and agencies that deploy AI tools for enterprise clients. Any AI system that makes or substantially contributes to a consequential decision concerning a Colorado resident would require public statements about bias mitigation practices, detailed documentation provided to deployers, and notification to the Colorado Attorney General within 90 days of any credible finding of algorithmic discrimination.

The annual impact assessment requirement is particularly significant. Companies using AI for employment-related advertising targeting - for example, systems that direct job postings to specific demographic segments - would need to formally document and review those systems at least once per year, and again within 90 days of any substantial modification. Smaller companies with fewer than 50 employees are partially exempted from some requirements, but not from consumer-facing notice obligations.

The DOJ's intervention raises the stakes considerably. Federal courts may now need to resolve whether the Equal Protection Clause permits states to impose demographic-aware obligations on AI developers and deployers as a form of civil rights enforcement - a question with implications that extend well beyond Colorado. Connecticut's legislature passed its own comprehensive AI bill on April 21, 2026, including provisions on automated employment decisions that overlap with SB24-205's regulatory scope. The outcome in Colorado could shape the constitutional ceiling for state-level AI regulation across the country.

Summary

Who: The United States Department of Justice, as Plaintiff-Intervenor, joined xAI LLC in its federal lawsuit against Colorado Attorney General Philip J. Weiser. The DOJ's complaint was signed by attorneys from both the Civil Division and the Civil Rights Division, including Senior Litigation Counsel Alexandra McTague Schulte and Civil Rights Division attorneys Greta Gieseke and Joshua R. Zuckerman. Acting Attorney General Todd Blanche personally certified the case as a matter of general public importance.

What: The DOJ filed a complaint in intervention seeking declaratory and injunctive relief against Colorado Senate Bill 24-205, a consumer protection statute requiring developers and deployers of high-risk AI systems to use reasonable care to prevent algorithmic discrimination, implement risk management programs, conduct annual impact assessments, and make extensive disclosures to the Colorado Attorney General, deployers, and the public. The DOJ argues the law violates the Equal Protection Clause of the Fourteenth Amendment by compelling developers and deployers to discriminate on the basis of race, sex, religion, and other protected characteristics in order to eliminate statistical disparities in AI outputs.

When: The complaint in intervention was filed on April 24, 2026. Colorado's SB24-205 was signed into law on May 17, 2024, and is scheduled to take effect on June 30, 2026.

Where: United States District Court for the District of Colorado, Civil Action No. 1:26-cv-01515-DDD-CYC. The case is captioned United States of America, Plaintiff-Intervenor, and X.AI LLC, Plaintiff, v. Philip J. Weiser, Colorado Attorney General, Defendant.

Why: The DOJ argues that SB24-205 forces AI developers and deployers to recalibrate algorithmic outputs along demographic lines in order to avoid liability for statistically disparate results - a process the complaint characterizes as itself constituting unconstitutional discrimination. The federal government also contends that the law's explicit exemption permitting algorithmic differential treatment for diversity and historical-discrimination-remediation purposes authorizes viewpoint-based discrimination that independently violates the Equal Protection Clause. The intervention reflects the broader Trump administration position that state-level AI regulation poses risks to U.S. competitiveness and national security in AI development.
