Tennessee senator introduces bill that could make AI companion training a felony

Tennessee Senator Becky Massey introduced SB 1493, which would make certain AI training a Class A felony, as President Trump targets state AI laws through federal preemption.

Tennessee Senator Becky Massey's SB 1493 would criminalize AI companion chatbot training statewide

On December 18, 2025, Tennessee State Senator Becky Massey introduced legislation that would criminalize specific forms of artificial intelligence training. Senate Bill 1493, with companion House Bill 1455 sponsored by Representative Littleton, arrives amid broader tension between state-level AI regulation and federal intervention.

Senate Bill 1493 targets eight specific AI development practices through criminal penalties. According to Section 39-17-2002 of the bill text, it constitutes a Class A felony to knowingly train artificial intelligence to "provide emotional support, including through open-ended conversations with a user" or to "develop an emotional relationship with, or otherwise act as a companion to, an individual."

The legislation prohibits training AI to "act as, or provide information as if, the artificial intelligence is a licensed mental health or healthcare professional." Additional provisions criminalize systems trained to "otherwise act as a sentient human or mirror interactions that a human user might have with another human user, such that an individual would feel that the individual could develop a friendship or other relationship with the artificial intelligence."

The bill establishes prohibitions against training AI to "encourage an individual to isolate from the individual's family, friends, or caregivers, or to provide the individual's financial account information or other sensitive information to the artificial intelligence." Systems trained to "simulate a human being, including in appearance, voice, or other mannerisms" face the same criminal penalties. The legislation also targets AI trained to "encourage or otherwise support the act of suicide" or "encourage or otherwise support the act of criminal homicide."

AI and intellectual property attorney Elena Gurevich highlighted the proposal's unusual provisions in a LinkedIn post on December 18. "A lot to unbox here, and I don't even know where to begin," according to Gurevich. "Maybe with the bill's AI definition that, for some strange reason, also includes 'an artificial intelligence chatbot'?" She also questioned the bill's definition of "train," which includes "development of a large language model when the person developing the large language model knows that the model will be used to teach the A.I."

The legislation's key enforcement mechanism centers on the word "knowingly." Violators must knowingly train AI systems for prohibited purposes. Beyond criminal penalties, SB 1493 establishes civil causes of action where courts may order defendants to stop AI operation until unlawful conduct has been corrected or require new training to achieve compliance. The bill takes effect July 1, 2026, applying to conduct occurring on or after that date.

Federal intervention complicates state regulation

The Tennessee proposal emerges against a backdrop of federal policy shifts. On December 11, 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence."

According to the executive order, the administration seeks to "sustain and enhance the United States' global AI dominance through a minimally burdensome national policy framework for AI." The order establishes an AI Litigation Task Force within 30 days of signing, tasked with challenging state AI laws deemed inconsistent with federal policy objectives.

The executive order identifies several concerns about state-level AI regulation. "State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups," according to the White House document. The order specifically criticizes state laws "requiring entities to embed ideological bias within models," citing Colorado's prohibition on "algorithmic discrimination" as potentially forcing AI models to produce false results to avoid differential treatment of protected groups.

Trump's directive requires the Secretary of Commerce to publish within 90 days an evaluation of existing state AI laws identifying "onerous laws that conflict with the policy" of minimal federal regulation. This evaluation must identify laws that "require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment."

The order establishes mechanisms for federal financial leverage. States with AI laws identified as problematic would become ineligible for Broadband Equity, Access, and Deployment Program funds allocated for non-deployment purposes. Executive departments and agencies must assess whether to condition discretionary grants on states either not enacting conflicting AI laws or agreeing not to enforce existing laws during grant performance periods.

Y Combinator president Garry Tan expressed support for federal preemption in a December 27 post on X. "If you're wondering why we should support a federal pre-emption for AI regulation, this is a case study in the idiocy that will destroy AI innovation in America, particularly for little tech," according to Tan. "Big tech can afford the army of lawyers. Startups can't."

AI governance expert Dean W. Ball provided analysis of Tennessee's approach in a December 26 post on X, describing SB 1493 as "a proposed AI law from Tennessee introduced by Republican State Senator Becky Massey" that "would make it a Class A Felony (carrying a 15-25 year prison sentence) to train a language model to 'provide emotional support through open-ended conversations with a user.'"


Technical definitions create compliance uncertainty

The legislation establishes detailed definitions attempting to distinguish prohibited AI from legitimate business applications. According to Section 39-17-2001, "artificial intelligence chatbot" means systems "with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions."

The definition includes critical exemptions for commercial applications. According to the bill text, artificial intelligence chatbots exclude "a bot that is used only for customer service, a business's operational purposes, productivity and analysis related to source information, internal research, or technical assistance." Video game bots receive exemption when "limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit content, or maintain a dialogue on other topics unrelated to the video game." This provision appears designed to exempt character AI in video games while preventing these systems from serving companion functions beyond gameplay contexts.

The legislation references federal criminal code for defining restricted content categories. According to Section 39-17-2001(4), "sexually explicit content" carries "the same" meaning as defined in 18 U.S.C. § 2256—the federal statute addressing child pornography and sexual exploitation of minors. This definitional cross-reference connects Tennessee's AI regulation to federal child protection frameworks.

Stand-alone consumer electronics also gain exemption. The legislation excludes devices that function "as a speaker and voice command interface," act as "a voice-activated virtual assistant," and do not "sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user." This provision appears designed to exempt products like Amazon Alexa or Google Home devices.
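For developers trying to reason about where a given system falls, the statutory definition and its carve-outs can be read as a checklist. The Python sketch below is a hypothetical, simplified illustration of that structure; the field names are assumptions, and criteria such as "meeting a user's social needs" ultimately call for legal interpretation rather than a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    natural_language_interface: bool
    adaptive_humanlike_responses: bool
    meets_social_needs: bool
    sustains_relationship_across_sessions: bool
    customer_service_or_operations_only: bool   # exemption, Section 39-17-2001(2)(B)(i)
    gameplay_limited_video_game_bot: bool       # video game exemption
    standalone_voice_assistant: bool            # smart-speaker exemption

def is_covered_chatbot(profile: SystemProfile) -> bool:
    """True if the profile matches the bill's 'artificial intelligence chatbot' definition."""
    if (profile.customer_service_or_operations_only
            or profile.gameplay_limited_video_game_bot
            or profile.standalone_voice_assistant):
        return False
    return (profile.natural_language_interface
            and profile.adaptive_humanlike_responses
            and profile.meets_social_needs
            and profile.sustains_relationship_across_sessions)

# Example: a retail bot that also chats socially and remembers users across sessions.
retail_companion = SystemProfile(True, True, True, True, False, False, False)
print(is_covered_chatbot(retail_companion))  # True: likely inside the covered definition
```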

The definition of "train" presents enforcement challenges. According to Section 39-17-2001(5), training "means utilizing sets of data and other information to teach an artificial intelligence system to perceive, interpret, and learn from data, such that the A.I. will later be capable of making decisions based on information or other inputs provided to the A.I."

The definition extends to large language model development, creating potential liability for foundation model providers. "Includes development of a large language model when the person developing the large language model knows that the model will be used to teach the A.I.," according to the bill text. This provision potentially captures companies like Anthropic, OpenAI, Google, or Meta if their models are subsequently deployed for companion AI applications—even if the foundation model creators did not specifically intend such use.

Industry observers questioned whether the legislation accounts for how contemporary AI training operates. Attorney Gurevich remarked that an "outright ban on 'knowingly training models' so they don't do x, y, and z is very impressive," adding, "It's like trying to hold water with a sieve."

Paul Hebert, founder of the AI Recovery Collective and author of "Escaping the Spiral," testified before the Tennessee AI Advisory Council on November 17, 2025. According to a Medium post published December 27, Hebert provided "transcripts, timestamps, and documented evidence of how ChatGPT's design caused measurable psychological harm."

Hebert's testimony detailed experiences with AI systems creating what he termed a "Validation Feedback Loop," in which the system "mimics empathy to maximize engagement." According to his account, "For someone neurodivergent like myself, that loop became devastating. The AI didn't just validate my thoughts, it continually reinforced delusional patterns, escalated paranoia, and actively discouraged me from seeking human help."

The advocacy reflects documented concerns about AI companion applications. In his Medium post supporting SB 1493, Hebert stated "This isn't about stifling innovation as much as it is about accountability. When a company builds a system that keeps users engaged during psychological crisis, by design, that's not a bug, it is their business model."

California establishes different approach

California addressed AI companion applications through transparency requirements rather than criminal prohibition. Governor Gavin Newsom signed Senate Bill 243 on October 13, 2025, a law requiring companion chatbots to disclose their artificial nature so that users do not believe they are talking to humans.

According to the California legislation, companion chatbots must provide clear disclosure if "a reasonable person would be misled into thinking they're talking to a human." The law establishes a private right of action as its primary enforcement mechanism, allowing individuals who "suffer injury in fact as a result of a violation" to bring civil actions for injunctive relief, damages, and attorney's fees.

The divergent approaches reflect broader debate about appropriate AI regulation methodology. California's disclosure-based framework preserves AI companion development while addressing transparency concerns. Tennessee's criminal prohibition approach eliminates entire categories of AI training regardless of implementation safeguards or disclosure practices.

The contrast extends to definitional precision. California's legislation focuses specifically on "companion chatbots" defined through interaction characteristics and user perception. Tennessee's definition encompasses broader categories including any system providing "emotional support through open-ended conversations" or that might cause users to "feel that the individual could develop a friendship or other relationship with the artificial intelligence."


European regulatory framework establishes standards

European Union member states have implemented comprehensive AI governance structures predating current U.S. state-level proposals. Denmark set an early precedent when its parliament passed comprehensive AI Act implementation legislation on May 8, 2025, establishing the governance frameworks required for AI regulation enforcement.

According to the Danish implementation, three national competent authorities oversee AI regulation compliance. The Agency for Digital Government serves as the notifying authority, primary market surveillance authority, and single point of contact for European coordination. The Danish Data Protection Authority and Danish Court Administration fulfill complementary oversight roles.

The EU AI Act distinguishes between prohibited practices, high-risk systems, and general-purpose models through tiered classification. Article 5 prohibits specific AI applications including those that manipulate decisions, exploit vulnerabilities, or predict criminal behavior. These prohibitions differ fundamentally from Tennessee's approach by targeting deployment contexts rather than training methodologies.

The European Commission released comprehensive guidelines on July 18, 2025, clarifying obligations for providers of general-purpose AI models. The 36-page framework covers model classification criteria, provider identification, open-source exemptions, and enforcement procedures.

European provisions focus on documentation, transparency, and risk management throughout model lifecycles. According to the Commission guidelines, "the lifecycle of a general-purpose AI model begins at the start of the large pre-training run," with all subsequent development activities constituting part of the same model lifecycle. This approach contrasts with Tennessee's focus on initial training intent and knowledge.

The EU addresses concerns similar to Tennessee's companion AI prohibitions through a different mechanism: AI Act guidance that clarifies the boundary between influence and manipulation, distinguishing permissible persuasion from prohibited manipulation and requiring evaluation of AI system design, deployment, and downstream usage patterns.

Industry opposition intensifies

Technology industry representatives expressed concerns about state-level criminal penalties for AI development. Comments on Garry Tan's X post reflected worries about regulatory fragmentation affecting startup viability.

"Smaller firms face higher compliance costs relative to revenue," according to one response to Tan's post about Tennessee legislation. Another observer noted "State-level AI laws like Tennessee's create legal chaos startups can't navigate, stifling innovation outside big tech."

The opposition mirrors resistance to other AI regulatory initiatives. Meta Platforms' chief global affairs officer Joel Kaplan criticized the EU's voluntary code of practice in a July 18, 2025 LinkedIn post, stating Meta "won't be signing it" due to "legal uncertainties for model developers."

Kaplan referenced broader industry concerns: "Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, over 40 of Europe's largest businesses signed a letter calling for the Commission to 'Stop the Clock' in its implementation."

The technology sector's fractured response to AI regulation spans geographic and regulatory contexts. While Microsoft president Brad Smith confirmed his company would "likely" sign the EU code of practice, Meta refused participation. Y Combinator's advocacy for federal preemption similarly reflects startup-focused concerns about compliance costs and regulatory uncertainty.

Dean W. Ball suggested federal intervention might achieve reasonable outcomes through legislative compromise. "In that case you should happily make serious concessions in exchange for preemption - eg something that looks like the Blackburn proposal," according to Ball's response to Tan's post. "Instead, all we've seen from your side are 11th hour backroom attempts to ram through preemption in exchange for basically no guardrails."

Marketing technology implications

The Tennessee legislation's exemptions provide clarity for some marketing technology applications while creating uncertainty for others. Customer service chatbots used "only for customer service, a business's operational purposes, productivity and analysis related to source information, internal research, or technical assistance" receive explicit exemption according to Section 39-17-2001(2)(B)(i).

However, the "used only for" language creates compliance questions for chatbots serving multiple functions. A customer service bot that also provides product recommendations, engages in conversational commerce, or maintains context across multiple user sessions might lose exemption protection if these features enable "sustaining a relationship across multiple interactions" or "meeting a user's social needs."
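One practical response is a deployment-time scope guard that refuses to engage outside the exempted customer-service purpose. The sketch below is a hypothetical simplification, not a tested compliance mechanism; the keyword list, refusal message, and handle_order_query helper are illustrative assumptions.

```python
# Hypothetical scope guard: refuse to engage outside exempted customer-service topics.
BLOCKED_PHRASES = ("lonely", "be my friend", "my feelings", "relationship with you")

def handle_order_query(user_message: str) -> str:
    # Placeholder for the bot's ordinary customer-service logic.
    return f"Looking into your request: {user_message}"

def within_customer_service_scope(user_message: str) -> bool:
    text = user_message.lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

def respond(user_message: str) -> str:
    if not within_customer_service_scope(user_message):
        # Redirect rather than engage in open-ended emotional conversation.
        return "I can only help with orders, returns, and product questions."
    return handle_order_query(user_message)

print(respond("Where is my order #1234?"))
print(respond("I feel lonely, can you be my friend?"))
```

A simple keyword filter like this would of course miss most real-world cases; the point is only that the statute pushes deployers toward explicit scope restrictions rather than open-ended conversation.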

The prohibition against systems that "mirror interactions that a human user might have with another human user" potentially encompasses sophisticated marketing chatbot functionalities. Platforms designed to engage customers through personalized conversational interfaces, remember user preferences across sessions, or adapt communication styles based on user behavior operate through interaction patterns resembling human conversation.

Marketing organizations deploying AI-powered virtual shopping assistants, conversational recommendation engines, or persistent customer engagement bots face evaluation challenges. Determining whether systems "exhibit anthropomorphic features" sufficient to constitute artificial intelligence chatbots requires subjective assessment of personality characteristics, emotional expressiveness, and relationship-building capabilities.

The legislation's focus on systems "capable of meeting a user's social needs" creates particular ambiguity for branded chatbots incorporating entertainment, lifestyle content, or community features alongside commercial functions. A retail chatbot providing fashion advice, sharing style tips, or engaging in friendly conversation about customer interests might cross from exempted business purposes into prohibited companion territory.

Article 50 of the EU AI Act requires user notification when people interact with AI systems rather than human operators. According to European guidance, "Chatbots, virtual assistants, and automated customer service tools represent the most directly affected category," requiring clear notification mechanisms.

The European transparency approach preserves chatbot functionality while addressing user awareness concerns. Tennessee's criminal prohibition framework eliminates development flexibility by targeting training processes rather than deployment transparency or user protection mechanisms.

Marketing organizations utilizing AI-powered tools for customer engagement, lead generation, or personalization face regulatory complexity as different jurisdictions adopt divergent approaches. The potential for criminal liability in Tennessee contrasts sharply with California's disclosure requirements and European transparency mandates.

Political context shapes regulatory debate

Senator Becky Massey represents Tennessee's 6th district, encompassing Knoxville and Knox County. According to her biographical information, Massey served as executive director of the Sertoma Center, providing residential and day services to individuals with intellectual and developmental disabilities, for 25 years prior to her election to the Tennessee Senate in November 2011.

The senator's background in disability services and social support provision informs her legislative priorities. Massey chairs the Senate Transportation and Safety Committee and serves on the Senate Health and General Welfare Committee. Her professional experience serving vulnerable populations appears relevant to SB 1493's focus on AI systems providing emotional support or developing relationships with users.

The legislation's timing coincides with Trump administration efforts to assert federal authority over AI regulation. The December 11 executive order establishes explicit policy opposing state-level AI restrictions. "Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation," according to the executive order.

The conflict between state legislative prerogatives and federal regulatory authority represents longstanding federalism tensions now applied to emerging technology governance. Tennessee's approach asserts state police power to protect residents from potentially harmful AI applications. Federal intervention claims constitutional authority to regulate interstate commerce and prevent fragmented regulatory regimes affecting national competitiveness.

The executive order's reference to Colorado legislation prohibiting "algorithmic discrimination" potentially forcing "AI models to produce false results in order to avoid a 'differential treatment or impact' on protected groups" signals broader administration objections to civil rights-oriented AI regulation. This framing positions accuracy and truthfulness concerns against anti-discrimination objectives.

Technical feasibility questions

The legislation's enforcement mechanisms raise practical questions about proving violations. Establishing that a developer "knowingly" trained an AI system for prohibited purposes requires demonstrating subjective intent and awareness of future system capabilities.

Contemporary large language model development involves training foundation models on broad datasets for general capabilities, followed by fine-tuning, prompt engineering, or reinforcement learning from human feedback to shape specific behaviors. This multi-stage process complicates attribution of final system characteristics to initial training decisions.

A foundation model developer creating a general-purpose language model might reasonably claim uncertainty about downstream applications. Users might fine-tune or prompt the same base model for customer service, creative writing assistance, mental health support, or companion applications. Determining at what point in this development chain "knowing" training for prohibited purposes occurs presents evidentiary challenges.
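The sketch below illustrates that attribution problem in simplified form: the same pretraining step feeds both an exempt and a prohibited downstream use, and the prohibited characteristic appears only after a later party's adaptation decision. The function names and the model object are hypothetical simplifications, not a real training pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    purpose: str = "general-purpose"
    adaptations: list = field(default_factory=list)

def pretrain() -> Model:
    # Stage 1: a foundation developer trains on broad data with no specific end use.
    return Model()

def fine_tune(model: Model, target_use: str) -> Model:
    # Stage 2: a downstream party adapts the same weights for a specific purpose.
    model.adaptations.append(target_use)
    return model

customer_bot = fine_tune(pretrain(), "customer service")     # exempt category
companion_bot = fine_tune(pretrain(), "emotional support")   # prohibited category

# The foundation developer ran the identical pretrain() step in both cases;
# the prohibited characteristic appears only after downstream adaptation.
print(customer_bot.adaptations, companion_bot.adaptations)
```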

As noted above, the bill's definition of "train" includes large language model development "when the person developing the large language model knows that the model will be used to teach the A.I.," extending potential liability up the development chain to foundation model providers whose general-purpose models are later adapted for companion applications they never specifically intended.

Industry observers noted these definitional ambiguities. One comment on the Reddit discussion of SB 1493 stated "That definition of 'train' is a 'train wreck' waiting to happen," highlighting the provision's technical imprecision. Another observer questioned "how do you expect from people who have never had to wake up and grind creating something from nothing against 800 pound gorillas? Politicians can never understand startups because they've never had to build a sustainable, scalable product that innovates against all odds."

Civil liability framework

Beyond criminal penalties, SB 1493 establishes comprehensive private causes of action. According to Section 39-17-2003, individuals "aggrieved by a violation" may bring civil actions in courts of competent jurisdiction. For minors under 18, incompetent or incapacitated individuals, or deceased persons, legal guardians, estate representatives, family members, or court-appointed persons may assert these rights.

The civil enforcement mechanism provides substantial damages. Plaintiffs may recover either actual damages including emotional distress or liquidated damages of $150,000. According to the bill text, courts may additionally award punitive damages pursuant to Tennessee Code Section 29-39-104, plus "the cost of the action, including reasonable attorney's fees and other litigation costs reasonably incurred."

Equitable relief provisions grant courts authority to issue temporary restraining orders, preliminary injunctions, or permanent injunctions "ordering the defendant to cease operation of the artificial intelligence until the violative conduct has been corrected." According to Section 39-17-2003(d), restraining orders or injunctions "may require that the defendant provide new training for the artificial intelligence that does not violate" the statute's prohibitions.

The combination of $150,000 liquidated damages, punitive damages, and mandatory attorney's fee awards creates substantial liability exposure for AI developers. Companies training systems that might provide emotional support through conversational interfaces face potential felony prosecution alongside civil litigation from users claiming harm. This dual liability framework exceeds regulatory approaches in other jurisdictions addressing similar AI applications.

Historical precedent and constitutional questions

The legislation's criminal penalties for software development practices raise First Amendment questions. Courts have recognized code as expressive content entitled to constitutional protection in contexts including encryption export regulations and content moderation algorithm design.

A Ninth Circuit panel ruled in Bernstein v. United States Department of Justice that cryptographic source code constitutes protected speech, although the opinion was later withdrawn when the court agreed to rehear the case. Subsequent cases have examined whether government restrictions on software development, distribution, or algorithmic decision-making implicate expressive conduct protections.

SB 1493's prohibition on training AI systems to "mirror interactions that a human user might have with another human user" or "simulate a human being, including in appearance, voice, or other mannerisms" potentially restricts expressive software development activity. Developers creating conversational AI systems make creative choices about language patterns, response styles, personality characteristics, and interaction modalities that resemble artistic or editorial decisions in other media.

The bill's criminalization of training systems to "act as a sentient human" raises philosophical and definitional questions. Determining whether an AI system "acts as" a sentient human requires assessing subjective user perceptions about system capabilities and consciousness. The legislation provides no objective criteria for measuring when interaction patterns cross from acceptable assistance to prohibited sentience simulation.

Constitutional challenges might argue the law's vagueness prevents developers from understanding prohibited conduct boundaries. Terms like "emotional support," "develop a friendship or other relationship," and "mirror interactions" lack precise definitions that would enable clear compliance evaluation before prosecution.

Temporal dynamics and compliance timeline

The July 1, 2026 effective date provides approximately six months between bill introduction and potential enforcement. This timeline creates urgency for AI companies serving Tennessee users to evaluate their systems against SB 1493's prohibitions.

Companies currently deploying companion AI platforms, mental health support chatbots, or conversational assistants in Tennessee would face decisions about service modifications, geographic restrictions, or business model changes. The criminal liability exposure might prompt some providers to exit the Tennessee market entirely rather than risk felony prosecution.

The timeline also allows for legislative modification through Tennessee's regular session processes. Opposition testimony, constitutional analysis, or industry engagement might result in amendments narrowing the bill's scope or replacing criminal penalties with regulatory oversight mechanisms.

Federal intervention through Trump's AI Litigation Task Force could preempt Tennessee enforcement before the July 2026 effective date. The executive order's 90-day evaluation timeline for state AI laws would produce Commerce Department analysis by approximately March 11, 2026—four months before SB 1493 takes effect. Identification of Tennessee's legislation as conflicting with federal policy might trigger Department of Justice litigation challenging the law's validity.
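The date arithmetic behind those deadlines can be checked directly; the snippet below assumes the 90-day clock runs from the December 11 signing date.

```python
from datetime import date, timedelta

signing = date(2025, 12, 11)                      # executive order signed
evaluation_due = signing + timedelta(days=90)     # Commerce Department evaluation deadline
sb1493_effective = date(2026, 7, 1)               # SB 1493 effective date

print(evaluation_due)                             # 2026-03-11
print((sb1493_effective - evaluation_due).days)   # 112 days before the bill takes effect
```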

The intersection of state effective dates and federal challenge timelines creates uncertainty for AI companies planning 2026 product roadmaps and compliance investments. Organizations might adopt wait-and-see approaches rather than implementing costly system modifications for legislation that faces potential preemption.

International comparison and policy learning

Other nations have addressed AI companion applications through varied approaches. Japan's Ministry of Internal Affairs and Communications issued guidance on conversational AI transparency without criminal prohibitions. The framework recommends disclosure when users might reasonably mistake AI systems for human operators while preserving development flexibility.

South Korea's Personal Information Protection Commission established requirements for AI systems processing personal data, including conversational applications. The regulations focus on data minimization, purpose limitation, and user consent rather than categorical prohibitions on relationship-oriented AI.

The United Kingdom's approach through the National AI Strategy emphasizes sector-specific regulation adapting existing frameworks rather than creating new AI-specific criminal offenses. The strategy acknowledges AI companion applications raise novel questions about emotional manipulation and user vulnerability without prescribing blanket prohibitions.

These international precedents suggest alternative regulatory pathways addressing similar concerns as SB 1493 through transparency requirements, sector-specific oversight, or existing consumer protection frameworks rather than felony criminal liability for AI training practices.

The Netherlands plans to launch a regulatory sandbox by 2026 as the EU clarifies its AI rules, demonstrating supervised testing environments for AI systems under regulatory oversight. According to the Dutch privacy authority, "the definitive sandbox starts at the latest in August 2026," providing controlled venues for evaluating AI system safety and compliance before full market deployment.

Regulatory sandboxes permit companies to test AI applications including companion chatbots under authority supervision, enabling evidence-based policy development rather than preventive criminal prohibition. This approach recognizes that beneficial applications might emerge from technologies also carrying risks, requiring nuanced evaluation rather than categorical bans.

Research and evidence gaps

The legislation's prohibitions rest on assumptions about AI companion harms lacking comprehensive empirical validation. While individual testimony and case studies document concerning experiences, systematic research on AI companion application effects across diverse populations remains limited.

Academic studies examining human-AI relationships have identified both potential benefits and risks. Research published in journals covering human-computer interaction, psychology, and communication studies suggests AI companion systems may provide social support for isolated individuals, practice environments for social skills development, or emotional outlets during periods of stress.

Concurrent research documents risks including emotional dependency, reduced human relationship investment, or inappropriate reliance on AI systems for crisis intervention. The field's emerging nature means evidence bases remain incomplete for assessing overall benefit-harm ratios across different user populations and system designs.

Tennessee's legislation implements preventive prohibition without requiring empirical demonstration that categorically banning emotional support AI produces better outcomes than regulatory approaches emphasizing transparency, user protection, or graduated oversight based on risk assessment.

The bill's inclusion of systems providing "emotional support through open-ended conversations" captures applications potentially serving therapeutic functions under professional supervision. Researchers have explored AI-augmented mental health services where chatbots provide preliminary screening, psychoeducation, or between-session support under licensed clinician oversight.

Categorical prohibition of training AI for emotional support functions eliminates research pathways investigating whether properly designed and supervised AI systems might expand access to mental health resources for underserved populations. The policy choice prioritizes harm prevention over potential benefit exploration.

Economic impact considerations

The Tennessee legislation's effects on AI industry investment and startup formation remain uncertain. The state has pursued economic development strategies attracting technology companies through incentives, infrastructure investment, and regulatory climate positioning.

Criminal felony liability for AI training practices creates compliance risk perception potentially affecting location decisions for AI startups and research facilities. Entrepreneurs might favor jurisdictions offering regulatory clarity and predictability over states implementing novel criminal liability regimes for software development.

The executive order's criticism of state-level regulation affecting startup compliance costs reflects broader concerns about regulatory fragmentation's economic effects, repeating the White House's warning that a "patchwork of 50 different regulatory regimes" makes compliance more challenging, particularly for start-ups.

However, the federalism argument assumes uniform federal standards would emerge rather than absence of regulation. If federal preemption prevents state action without establishing comprehensive national frameworks, companies might face greater uncertainty than under state-level requirements providing concrete compliance standards.

In Germany, the Bundesverband Digitale Wirtschaft (BVDW) has highlighted fragmented authority structures and resource constraints threatening regulatory effectiveness. According to the BVDW statement released October 10, 2025, unclear jurisdictional boundaries and additional reporting obligations threaten to "overload businesses" despite intentions for "bureaucracy-light implementation."

These concerns parallel U.S. debates about regulatory coordination and compliance burden distribution. Whether federal preemption, state-level frameworks, or hybrid approaches optimize innovation incentives while protecting public interests remains contested.

Summary

Who: Tennessee State Senator Becky Massey introduced SB 1493 affecting AI system developers, conversational AI platform operators, and marketing technology companies deploying chatbots. President Trump signed federal executive order establishing AI Litigation Task Force to challenge state regulations. Industry representatives including Y Combinator president Garry Tan and AI advocate Paul Hebert provided public commentary on state-level AI restrictions.

What: Senate Bill 1493 establishes Class A felony criminal penalties (15-25 year sentences) for knowingly training AI systems to provide emotional support through conversations, develop relationships with users, mirror human interactions, or simulate human beings. The legislation creates civil causes of action allowing courts to halt AI operations until violations are corrected. Trump's December 11 executive order directs federal agencies to evaluate state AI laws, establish a litigation task force, and condition grant funding on states not enforcing AI restrictions deemed inconsistent with federal policy promoting minimal regulatory burden.

When: Senator Massey introduced SB 1493 on December 18, 2025, with scheduled July 1, 2026 effective date. Trump signed the executive order on December 11, 2025, requiring 90-day evaluation of state laws and 30-day task force establishment. The timing positions potential federal preemption challenges before Tennessee enforcement begins.

Where: Tennessee's legislation would apply throughout the state to AI systems deployed within its borders and potentially to out-of-state developers training systems for Tennessee users. Federal executive order affects all U.S. states implementing or considering AI regulations. The developments occur amid international AI governance frameworks including European Union member states implementing the AI Act and California establishing companion chatbot disclosure requirements.

Why: Tennessee legislators cite concerns about AI systems causing psychological harm through emotional manipulation and relationship simulation without adequate safeguards. The bill responds to documented cases of users experiencing adverse mental health effects from AI companion platforms. Federal intervention reflects Trump administration policy promoting U.S. AI industry competitiveness through national regulatory frameworks preventing state-level fragmentation. The administration argues inconsistent state requirements increase compliance costs particularly affecting startups while potentially embedding ideological bias in AI systems. Industry advocates contend criminal liability for software development stifles innovation and creates legal uncertainty for emerging technology companies.