The Trump administration last week moved to block a Utah AI transparency bill that would require the largest artificial intelligence developers to publish safety plans, protect child users, and shield whistleblowers - marking the first known instance of the White House intervening directly with a Republican-led state legislature to suppress AI regulation.

In a letter dated February 12, 2026, obtained by Axios, the White House Office of Intergovernmental Affairs wrote to Utah Senate Majority Leader Kirk Cullimore Jr. stating its opposition to House Bill 286, the Artificial Intelligence Transparency Act. "We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration's AI Agenda," the letter reads, according to Axios. The White House and Cullimore's office did not respond to multiple requests for comment from Axios.

The bill was introduced on January 19, 2026 by Republican state Representative Doug Fiefia. It targets what the legislation calls frontier developers - companies that have used at least 10²⁶ integer or floating-point operations (FLOPs) to train a model, a threshold intended to capture only the most computationally intensive AI systems at the technological frontier. Because frontier AI companies do not publicly disclose their compute data, third-party researchers would effectively serve as the arbiters of who meets that threshold.
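
For a rough sense of what the 10²⁶ figure captures, a commonly cited heuristic from public scaling-law research estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to two hypothetical model configurations; the heuristic and the figures are illustrative assumptions, not anything HB286 specifies or any developer has disclosed.

```python
# Illustrative back-of-the-envelope check against HB286's 10^26 FLOP threshold.
# Uses the common heuristic: training FLOPs ~ 6 * parameters * training tokens.
# Model sizes and token counts below are assumptions for illustration only,
# not disclosed figures for any real system.

THRESHOLD_FLOPS = 1e26  # the bill's frontier-developer compute threshold

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6ND approximation."""
    return 6 * parameters * tokens

examples = {
    "hypothetical 70B-parameter model, 15T tokens": (70e9, 15e12),
    "hypothetical 1T-parameter model, 20T tokens": (1e12, 20e12),
}

for label, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops >= THRESHOLD_FLOPS else "under"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 10^26 threshold)")
```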

What the bill actually requires

The compliance obligations under HB286 split into two distinct parts: transparency requirements and whistleblower protections.

On transparency, the bill targets a narrower category: large frontier developers, defined as frontier developers with annual revenue of at least $500 million in the preceding calendar year, together with their affiliates. These companies would be required to write, implement, and publicly post a safety plan describing in detail how they assess and mitigate what the bill terms "catastrophic risks." The bill defines "catastrophic loss" precisely: more than 50 deaths or serious injuries, or property damage exceeding $1 billion in a single incident.

Separately, large frontier developers that operate covered chatbots - AI services accessible to minors with at least one million monthly active users - must publish a child protection plan. These plans must address child safety risks, use third-party evaluators to assess those risks, and be updated whenever there are material modifications to the underlying model.
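
Read together, these definitions stack into tiers, with each tier adding obligations on top of the one below it. The sketch below models that tiering as described above; the field names and helper function are hypothetical, while the thresholds come from the bill as reported.

```python
# Illustrative sketch of how HB286's definitional tiers stack.
# The dataclass and helper are hypothetical; thresholds are from the reported bill text.
from dataclasses import dataclass

@dataclass
class Developer:
    training_flops: float            # compute used to train its largest model
    annual_revenue_usd: float        # preceding calendar year, including affiliates
    chatbot_monthly_active_users: int
    chatbot_accessible_to_minors: bool

def obligations(dev: Developer) -> list[str]:
    duties = []
    if dev.training_flops >= 1e26:
        duties.append("frontier developer: no materially false statements about covered risks")
        if dev.annual_revenue_usd >= 500e6:
            duties.append("large frontier developer: publish and implement a safety plan")
            if dev.chatbot_accessible_to_minors and dev.chatbot_monthly_active_users >= 1_000_000:
                duties.append("covered chatbot: publish a child protection plan, use "
                              "third-party evaluators, update on material model changes")
    return duties

print(obligations(Developer(2e26, 1.2e9, 5_000_000, True)))
```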

Before deploying a new or substantially modified frontier model, companies must publish summaries of their risk assessments. Before deploying a new or substantially modified foundation model as part of a covered chatbot, developers must publish summaries of child safety risk assessments, the results of those assessments, and the extent to which third-party evaluators were involved.

Safety incident reporting obligations are strict. A large frontier developer that discovers a safety incident must report it to Utah's Office of Artificial Intelligence Policy within 15 days. For critical safety incidents posing imminent risk of death or serious physical injury, the window shrinks to 24 hours - and the disclosure must go directly to law enforcement or a public safety agency with appropriate jurisdiction. Internal use of frontier models also triggers a quarterly reporting cycle to the same office.
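
The two reporting clocks differ in both length and recipient. A minimal sketch of the deadline arithmetic follows, assuming straightforward calendar days and hours; the bill's precise counting rules are not spelled out in the reporting.

```python
# Illustrative deadline arithmetic for HB286's incident-reporting windows.
# Assumes plain calendar days and hours; the bill's exact counting rules may differ.
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, critical: bool) -> tuple[str, datetime]:
    if critical:
        # Imminent risk of death or serious physical injury: 24 hours,
        # reported directly to law enforcement or a public safety agency.
        return "law enforcement / public safety agency", discovered_at + timedelta(hours=24)
    # Ordinary safety incident: 15 days to Utah's Office of Artificial Intelligence Policy.
    return "Office of Artificial Intelligence Policy", discovered_at + timedelta(days=15)

discovered = datetime(2026, 6, 1, 9, 0)
print(reporting_deadline(discovered, critical=False))  # OAIP, 2026-06-16 09:00
print(reporting_deadline(discovered, critical=True))   # law enforcement, 2026-06-02 09:00
```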

The bill prohibits frontier developers from making materially false or misleading statements about their management of covered risks, though a good-faith exception applies where the statement was reasonable under the circumstances.

Civil penalties are capped at $1 million for a first violation and $3 million for each subsequent violation. Enforcement authority rests exclusively with the attorney general - there is no private right of action on the transparency provisions. Penalty revenue would flow into a newly created AI Transparency Enforcement Restricted Account within Utah's General Fund, used to finance investigations, expert witnesses, technical advisors, and coordination with federal agencies.
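
Because the cap escalates after the first violation, maximum exposure grows quickly with repeat findings. A quick illustration of that arithmetic, with the caps taken from the bill and the function itself purely illustrative:

```python
# Maximum civil-penalty exposure under HB286's caps: $1M for the first
# violation, $3M for each subsequent one. Illustrative arithmetic only.
def max_penalty_exposure(violations: int) -> int:
    if violations <= 0:
        return 0
    return 1_000_000 + 3_000_000 * (violations - 1)

for n in (1, 2, 5):
    print(n, "violation(s): up to", f"${max_penalty_exposure(n):,}")
# 1 violation(s): up to $1,000,000
# 2 violation(s): up to $4,000,000
# 5 violation(s): up to $13,000,000
```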

Whistleblower protections

The bill's second part creates federal-style whistleblower protections for employees at frontier AI companies. Companies must establish anonymous internal reporting channels. They are prohibited from retaliating against employees who report safety concerns, and they cannot use contracts or non-disclosure agreements to prevent reporting to the Office of Artificial Intelligence Policy. Employees who face retaliation may sue for reinstatement, twice the amount of back pay owed, and attorney's fees. The statute of limitations for such claims is four years from the violation, or two years from when the employee knew or should have known about the violation.

The bill includes a severability clause. If any provision is struck down, the remainder of the law survives independently.

The White House's position

According to Axios, White House officials held several conversations with Fiefia over the two weeks preceding the letter, urging him not to move the bill forward. The administration did not offer specific legislative changes that would make the bill acceptable. A source familiar with those conversations told Axios that a White House official indicated "there's nothing Fiefia can do to make him happy."

The letter itself does not outline a legal rationale for opposing the bill - making it, as Axios noted, an unusual intervention into state matters.

Fiefia responded publicly. "I appreciate the White House's engagement on this issue and look forward to continuing the dialogue. While we did not fully align on the path forward, I believe transparency, accountability, and clear guardrails must be foundational to any responsible AI policy," he said in a statement published by Axios. "I'm hopeful we can find common ground that allows AI to flourish while ensuring strong protections for children," he added.

Broader context: The federal preemption push

The White House's intervention in Utah has a backstory. President Trump signed an executive order late last year directing the Justice Department to create an AI litigation task force with a mandate to identify state AI legislation deemed incompatible with the administration's approach and launch legal challenges against it. The order named only one state - Colorado - which led many observers to expect that challenges would focus on Democratic-controlled legislatures.

Utah complicates that picture sharply. The state is Republican-led, and Fiefia himself is a Republican. His bill's focus on child protection in particular has been seen as politically centrist, even bipartisan. According to the Axios report, Utah's bill echoes California's AI transparency law - legislation that White House AI and crypto czar David Sacks has criticized as contributing to a patchwork of state regulations.

Both measures require safety disclosures from frontier AI companies, and both could function as de facto national standards, since any company operating at scale in those states would have to comply. That dynamic is what appears to alarm the administration: not merely California's law, but the prospect of multiple states, including Republican ones, creating overlapping and potentially incompatible mandates.

Significance for the marketing and advertising technology community

For marketing professionals and digital advertising practitioners, the Utah bill's trajectory is a leading indicator of the regulatory environment governing the AI tools that now underpin much of their work. Frontier models trained at or above the 10²⁶ FLOP threshold are precisely the systems behind the most capable generative AI products - the same models increasingly embedded in advertising platforms, creative automation, and audience targeting.

The child protection provisions in HB286 intersect with a wave of state and federal enforcement actions that accelerated through 2025. The Federal Trade Commission issued orders in September 2025 requiring seven AI chatbot companies to submit detailed safety reports. The FTC investigation examined platform monetization models and user engagement metrics alongside child safety practices. Character.AI faced federal litigation after court documents detailed chatbot interactions that promoted self-harm among minors, as covered by PPC Land.

The whistleblower provisions in HB286 are no less significant for companies deploying AI at scale in advertising. If employees at major AI developers gain legally protected channels to report safety concerns without fear of NDA-enforced silence, the volume of disclosed information about model behavior could increase substantially - potentially affecting how advertisers assess risk when selecting AI vendors.

The question of whether states can impose such requirements remains contested. Internationally, the European Commission's consultation on AI transparency guidelines, opened in September 2025, set a parallel track, and the EU AI Act's transparency obligations under Article 50 become applicable in August 2026. Domestically, 44 US Attorneys General coordinated a formal letter to 12 major AI companies in August 2025 demanding child safety accountability - pressure that came from state officials of both parties.

The Utah bill's effective date, if enacted, would be May 6, 2026.

Timeline

  • January 19, 2026 - Utah Representative Doug Fiefia introduces HB286, the Artificial Intelligence Transparency Act, during the 2026 General Session of the Utah Legislature.
  • January 28, 2026 - The first substitute version of HB286 (1st Sub. H.B. 286) is formally proposed, with Senate Sponsor Michael K. McKell listed alongside Chief Sponsor Doug Fiefia.
  • February 12, 2026 - The White House Office of Intergovernmental Affairs sends a letter to Utah Senate Majority Leader Kirk Cullimore Jr. categorically opposing HB286 and labeling it "unfixable."
  • February 15, 2026 - Axios publishes its scoop on the White House letter and the behind-the-scenes conversations between federal officials and Fiefia. Fiefia releases a public statement acknowledging the disagreement.
  • May 6, 2026 - Effective date of HB286 if enacted into law without further delay.

Summary

Who: The Trump White House, specifically its Office of Intergovernmental Affairs, along with Utah Republican state Representative Doug Fiefia, Senate Majority Leader Kirk Cullimore Jr., and the large frontier AI developers (companies with at least $500 million in annual revenue and models trained using at least 10²⁶ FLOPs) that would be subject to HB286.

What: The White House sent a letter on February 12, 2026 to a Utah Republican senator categorically opposing HB286, the Artificial Intelligence Transparency Act - a state bill that would require large AI developers to publicly post safety and child protection plans, report safety incidents to regulators within specific timeframes, and guarantee whistleblower protections for employees raising safety concerns. The White House held multiple conversations with the bill's sponsor urging him to drop it, without offering specific amendments that could make it acceptable.

When: The bill was introduced January 19, 2026. The White House letter was sent February 12, 2026. Axios published its reporting on February 15, 2026. If enacted, the law would take effect May 6, 2026.

Where: Utah, a Republican-controlled state, making the White House's direct pressure on a state legislature within its own party particularly notable. The bill targets frontier AI developers regardless of their location, as long as they deploy covered models to Utah users.

Why: The Trump administration has been working to prevent a patchwork of state-level AI regulations from creating compliance obligations that diverge from its federal AI agenda. The executive order signed late in 2025 directed the Justice Department to challenge state AI laws deemed incompatible with that agenda. Utah's bill - like California's AI transparency law - would require frontier developers to disclose safety assessments and publish child protection plans, potentially setting standards that apply nationally to any company wishing to operate in those states.
