OpenAI this month publicly backed an Illinois state bill that would protect frontier AI developers from civil liability when their models are used to cause catastrophic harm - including events involving the death or serious injury of 100 or more people, or at least $1 billion in property damage. The move marks a notable shift in OpenAI's legislative posture, from one of largely defending against restrictive bills to actively supporting a measure critics describe as unusually permissive.
The bill, Illinois Senate Bill 3444, was introduced on February 4, 2026, by State Senator Bill Cunningham, a Democrat. It has been moving through the Illinois Senate since then, passing from the Assignments committee through the Executive committee before landing in the AI and Social Media committee on February 18. A committee deadline was most recently set for April 24, 2026.
What SB3444 actually says
The text of SB3444 creates a new statute called the Artificial Intelligence Safety Act. Its central provision is a liability shield: a developer of a frontier AI model cannot be held civilly liable for "critical harms" caused by that model, provided two conditions are met. First, the developer must not have intentionally or recklessly caused those harms. Second, the developer must have published both a safety and security protocol and a transparency report on its website before the model's release.
Critical harm is defined precisely. According to the bill text, it means the death or serious injury of 100 or more people, or at least $1,000,000,000 in property damage, caused or materially enabled by a frontier model. The harm must have occurred through one of two routes: either through the creation or use of a chemical, biological, radiological, or nuclear weapon, or through a frontier model acting without meaningful human intervention in a manner that, if committed by a human, would constitute a criminal offense requiring intent, recklessness, negligence, or the aiding and abetting of such a crime.
Frontier model is defined with equal precision. According to the bill, a model qualifies as a frontier model if it was trained using more than 10^26 computational operations - such as integer or floating-point operations - or if its compute cost exceeded $100,000,000. That threshold would, according to reporting by Wired on April 9, 2026, likely capture models from America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
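Taken together, the bill's definitions reduce to a handful of numeric tests. The sketch below expresses them in Python for illustration only; the function and constant names are hypothetical, not drawn from the bill text, and the comparisons follow the bill's wording ("more than" for compute, "100 or more" and "at least" for harm).

```python
# Illustrative encoding of SB3444's numeric thresholds as described above.
# All names are hypothetical; only the numbers come from the bill text.

FRONTIER_COMPUTE_OPS = 1e26           # training operations (integer or floating-point)
FRONTIER_COMPUTE_COST_USD = 100_000_000
CRITICAL_HARM_CASUALTIES = 100        # deaths or serious injuries
CRITICAL_HARM_DAMAGE_USD = 1_000_000_000

def is_frontier_model(training_ops: float, training_cost_usd: float) -> bool:
    """A model qualifies if either the compute or the cost threshold is exceeded."""
    return (training_ops > FRONTIER_COMPUTE_OPS
            or training_cost_usd > FRONTIER_COMPUTE_COST_USD)

def meets_critical_harm_scale(casualties: int, property_damage_usd: float) -> bool:
    """The scale of harm the bill targets: 100+ casualties or $1B+ in damage."""
    return (casualties >= CRITICAL_HARM_CASUALTIES
            or property_damage_usd >= CRITICAL_HARM_DAMAGE_USD)
```

Note that the scale test is only part of the definition: the harm must also arise through one of the two statutory routes described above (a CBRN weapon, or autonomous model conduct that would be criminal if committed by a human).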
The liability shield does not apply unconditionally to every release. The bill specifies that the safety and security protocol requirement can be waived if the developer does not "reasonably foresee any material difference" between a new model's capabilities or critical harm risks and a previously evaluated model. This creates a meaningful carve-out, particularly for companies releasing incremental updates to existing systems.
Two routes to compliance
SB3444 offers developers a choice of two paths to be deemed compliant. The first is agreeing to be bound by the safety and security requirements adopted under Article 56 of the European Union's Artificial Intelligence Act - the provision governing general-purpose AI models with systemic risk. PPC Land has documented that provision extensively, including the EU's General-Purpose AI Code of Practice, which was finalized on July 10, 2025, and covers transparency, copyright, and safety obligations for model providers.
The second path involves entering into an agreement directly with a federal government agency. Such an agreement must allow the agency to access the developer's frontier models for research and evaluation purposes, facilitate cyber and biological risk assessments, and allow the federal government to publicly release information related to evaluations of publicly released models. Any developer taking this route must file a certification with the Illinois Attorney General attesting compliance. Making false or misleading statements in that certification constitutes a violation of the Act.
The bill also contains a self-deactivation clause. It will cease to apply if the federal government enacts a law or adopts regulations establishing overlapping requirements for frontier model developers. Compliance with a substantially similar framework in another state also satisfies SB3444's requirements.
What the safety and security protocol must contain
The bill sets out detailed requirements for the safety and security protocol that developers must publish. According to the text, the protocol must document a developer's technical and organizational measures to manage, assess, and mitigate critical harm risks. It must include a high-level summary covering at least seven areas:

- testing procedures for foreseeable critical harms
- risk thresholds and the actions the developer will take when those thresholds are reached
- mitigation measures and how the developer evaluates their effectiveness
- whether and how third parties are used to assess risk
- cybersecurity practices, including how unreleased model weights are protected from unauthorized modification or transfer
- monitoring and response procedures for deployed models
- the process by which the developer determines when a model presents new risks warranting further assessment
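For illustration, the seven areas can be pictured as fields of a single compliance document. The structure below is hypothetical - the bill mandates content, not any particular format - but it shows how the required summary decomposes.

```python
# Hypothetical outline of the seven required protocol areas, for illustration.
# SB3444 specifies what the summary must cover, not how it is structured.
from dataclasses import dataclass

@dataclass
class SafetyProtocolSummary:
    testing_procedures: str       # how foreseeable critical harms are tested for
    risk_thresholds: str          # thresholds and the actions taken when reached
    mitigations: str              # mitigation measures and effectiveness evaluation
    third_party_assessment: str   # whether and how third parties assess risk
    cybersecurity_practices: str  # protection of unreleased model weights
    deployment_monitoring: str    # monitoring and response for deployed models
    new_risk_determination: str   # process for flagging new risks needing assessment
```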
Developers are permitted to redact portions of the protocol to protect trade secrets, proprietary information, and model security - a provision that limits public scrutiny of compliance but acknowledges commercial realities.
The transparency report has a narrower scope. It must identify the frontier model, and provide a summary of safety assessment results along with the steps taken to address identified risks. Again, redactions are permitted on the same grounds.
OpenAI's position - and Anthropic's
OpenAI has been explicit in its support. Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, testified in favor of SB3444 in Illinois. According to Wired's April 9 reporting, Niedermeyer argued that federal AI regulation is essential to avoid what she described as "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." She also stated that state-level laws can be effective if they "reinforce a path toward harmonization with federal systems."
In an emailed statement, OpenAI spokesperson Jamie Radice said: "We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses - small and big - of Illinois."
Anthropic, notably, is opposing the bill. The divergence between OpenAI and Anthropic is striking given that both companies are frequently mentioned together as leading frontier AI developers. According to analysis posted on LinkedIn by Elena Gurevich, an AI and intellectual property attorney, the bill's approach represents a shift in how liability is constructed: "The Illinois AI Safety Act does not simply fill a federal gap - it reshapes liability. If AI is treated as merely a 'component of a system,' then where is the legal subject of responsibility? Without a subject, liability is diluted and reduced to compliance rituals. That is not classical tort logic. It is a safe harbor."
Gurevich also flagged weaknesses in the bill's definition of an AI model. The definition in the bill reads: "an engineered or machine-based component of a system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments." She noted that describing a model as merely a "component" of a system diverges from other US state bills on frontier AI, including California's SB53 and the RAISE Act, and uses the OECD definition rather than the EU AI Act's framing - potentially covering a range of non-AI components that happen to be integrated into an AI system.
The political context inside Illinois
Illinois is not a state with a lenient track record on technology regulation. Scott Wisor, policy director for the Secure AI project, told Wired that the bill has a slim chance of passing. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it," Wisor said. "There's no reason existing AI companies should be facing reduced liability."
The state was the first in the country to pass legislation limiting the use of AI in mental health services, which it did last August. Illinois also passed the Biometric Information Privacy Act in 2008, one of the earliest and most consequential state-level data privacy laws in US history.
Senator Cunningham himself acknowledged the bill's current form may not survive intact. According to Politico's reporting from April 14, 2026, he wrote in an email that the bill may be modified before a vote, and that it is "highly unlikely that the final product will include sweeping liability relief for AI developers." He added that "Illinois has a long history of holding corporations responsible for negligence. That won't change for the AI industry."
The broader liability debate
SB3444 sits within a wider national conversation about who pays when AI causes catastrophic harm. Other states are moving in the opposite direction. Bills introduced in New York and Rhode Island would impose greater liability on developers. Their sponsor, Gabriel Weil, a tort and AI law professor at Touro University, told Politico: "It's very problematic to cut off the liability of these companies. We need the threat of liability to give them incentive to worry about those risks."
The contrast with other industries is instructive. Laws governing oil spills and nuclear plant accidents impose strict liability - meaning companies are responsible regardless of intent. The Price-Anderson Act, which governs nuclear disasters, led to the creation of insurance pools funded by major nuclear companies, which kick in once primary insurance is exhausted. After the partial meltdown at Three Mile Island in 1979, those pools paid roughly $71 million to cover claims and litigation costs. SB3444, by contrast, requires proof of intentional or reckless conduct before liability attaches - a meaningfully higher bar for plaintiffs.
Charlie Bullock, a senior research fellow at the Institute for Law and AI, suggested to Politico that requiring AI companies to carry insurance would be a more constructive intermediate step. "Requiring [AI] companies to carry insurance would be a decent step for these catastrophes," Bullock said. "That would incentivize the insurance industry to try to figure out what the risks are realistically."
Weil raised a further concern about importing the nuclear model directly. At this point, he noted, scientists have a reasonably clear understanding of what regulations are needed to prevent nuclear disasters. That level of consensus does not yet exist for AI. "Having government step in to cover the [AI] tail risks without pairing that with a regulatory approach that we have in the nuclear context would be like giving up on policy for mitigating those risks," he told Politico.
An OpenAI spokesperson told Politico in a statement: "In the absence of federal action, we will continue to work with states - including Illinois - to work towards a consistent safety framework, including with enforcement mechanisms that provide similar penalties as California and New York for non-compliance."
Federal preemption and the compute threshold gap
The bill's self-deactivation clause - which would make it inoperative if Congress passes overlapping rules - matters because federal AI legislation remains elusive. The Trump administration released a seven-pillar national AI policy framework in March 2026, calling on Congress to preempt state AI laws. The document stated that "States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications." As PPC Land reported at the time, the framework does not carry the force of law and was described as a set of legislative recommendations rather than a binding directive.
The state-federal tension has been playing out in multiple jurisdictions. In December 2025, a Tennessee senator introduced a bill that would make certain AI training practices a felony, illustrating how states are legislating in the absence of federal standards - and how different those approaches can be from one another.
There is also a technical gap worth noting. The EU AI Act's threshold for general-purpose AI models with systemic risk is 10^25 floating-point operations - one order of magnitude below SB3444's compute threshold of 10^26. PPC Land has reported extensively on how that EU threshold was set, and how it was deliberately calibrated to capture the most powerful models in deployment today. Illinois's higher threshold would potentially exclude some models covered by the EU framework, creating a narrower liability shield than it might initially appear.
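A worked example makes the gap concrete. Consider a hypothetical model trained with 5x10^25 floating-point operations - the figure is illustrative, chosen to fall between the two thresholds:

```python
# Worked example of the threshold gap (hypothetical model, illustrative figure).
EU_SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act, general-purpose AI with systemic risk
SB3444_FRONTIER_OPS = 1e26     # Illinois SB3444 frontier-model threshold

training_flops = 5e25          # a hypothetical model between the two thresholds

covered_by_eu = training_flops > EU_SYSTEMIC_RISK_FLOPS   # True
covered_by_sb3444 = training_flops > SB3444_FRONTIER_OPS  # False

print(covered_by_eu, covered_by_sb3444)
# True False: EU systemic-risk obligations would apply, but the model would
# fall outside SB3444's frontier definition - and outside its liability shield.
```

Any model in that intermediate band would face the EU's systemic-risk regime without qualifying for the Illinois shield, even if its developer opted for the bill's EU-alignment compliance path.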
Significance for the marketing and ad tech community
The question of AI liability has direct relevance for marketing technology professionals, even if the immediate focus of SB3444 is catastrophic physical and financial harm rather than commercial harm. The legal frameworks being constructed now will define the broader environment in which AI advertising tools operate. If frontier model developers face reduced liability for critical harms, the pressure to self-regulate - and to invest in safety protocols that also shape commercial deployments - shifts accordingly.
Marketers and ad tech operators who rely on frontier models built by companies like OpenAI, Google, or Anthropic have a stake in understanding how those developers are governed. Safety protocols mandated under SB3444 - if enacted - would require developers to document testing procedures, risk thresholds, and mitigation measures. That documentation would, at least partially, be public. Transparency reports published as a condition of the liability shield would accompany each frontier model release.
The Law Commission of England and Wales has previously identified liability gaps where autonomous AI systems cause harm but no specific person bears legal responsibility. SB3444's critics argue that the Illinois bill deepens exactly that gap, by removing civil liability as a driver of caution while replacing it with documentation requirements that may be partially redacted and are not independently audited.
The bill has a committee deadline of April 24, 2026. Whether it passes in its current form, is amended substantially, or dies in committee will be closely watched by AI policy advocates on all sides of the debate.
Timeline
- 2008 - Illinois passes the Biometric Information Privacy Act, one of the earliest state data privacy laws in the US
- July 10, 2025 - EU General-Purpose AI Code of Practice finalized, covering frontier model safety and transparency obligations under Article 56
- August 2025 - Illinois becomes the first US state to pass legislation limiting the use of AI in mental health services
- August 25, 2025 - 44 state attorneys general issue formal warnings to 12 major AI companies including OpenAI and Anthropic over child safety failures
- December 18, 2025 - Tennessee introduces a bill that would make certain AI training practices a felony
- February 4, 2026 - Illinois Senate Bill 3444 filed by Senator Bill Cunningham; first reading and referral to Assignments committee
- February 17, 2026 - SB3444 assigned to Executive committee
- February 18, 2026 - SB3444 moved to AI and Social Media committee
- March 13, 2026 - Committee deadline established as March 27, 2026
- March 22, 2026 - White House releases national AI policy framework urging Congress to preempt state AI laws
- March 27, 2026 - New committee deadline established as April 24, 2026
- April 9, 2026 - Wired reports OpenAI's active support for SB3444; Anthropic's opposition noted
- April 14, 2026 - Politico reports on the liability debate around SB3444, including expert views on insurance and strict liability comparisons
- April 24, 2026 - Illinois Senate committee deadline for SB3444
Summary
Who: OpenAI, in support, and Anthropic, in opposition, are the two most prominent frontier AI developers with declared positions on Illinois Senate Bill 3444. The bill was introduced by Democratic State Senator Bill Cunningham. Tort and AI law scholars, AI policy advocates, and technology attorneys are actively commenting on its implications.
What: SB3444, formally titled the Artificial Intelligence Safety Act, would create a civil liability shield for frontier AI developers - defined as those whose models are trained using more than 10^26 compute operations or at a cost exceeding $100 million - for critical harms caused by their models, provided the developer did not act intentionally or recklessly and published safety and transparency documentation before release.
When: The bill was introduced on February 4, 2026, and is currently subject to a committee deadline of April 24, 2026, in the Illinois Senate during the 104th General Assembly session.
Where: Illinois, in the United States. The bill applies to frontier model developers as a matter of state civil law, though it would cease to apply if the federal government enacts overlapping regulations.
Why: The bill responds to a gap in US law: there is currently no federal or state statute specifically determining whether AI model developers can be held civilly liable when their models are used to cause catastrophic harm. OpenAI and the bill's sponsor argue the approach encourages consistent safety standards. Critics - including legal scholars, Anthropic, and a majority of polled Illinois residents - argue that removing civil liability reduces the incentive for developers to prevent those harms in the first place.