California law requires AI to tell you it's AI
California's SB-243, signed October 13, 2025, mandates that companion chatbots disclose their artificial nature so users are not misled into believing they are talking to humans.

California became the first state to mandate that artificial intelligence systems explicitly inform users they are not human. Governor Gavin Newsom approved Senate Bill 243 on October 13, 2025, establishing what amounts to a transparency mandate for companion chatbots—AI platforms designed to form emotional relationships with users across sustained interactions.
The legislation, now Chapter 677 of the Statutes of 2025, centers on a deceptively simple requirement: if a reasonable person would be misled into thinking they're talking to a human, the AI must say otherwise. This disclosure obligation represents California's attempt to address what lawmakers view as a fundamental information asymmetry in human-AI interactions, particularly as these systems become increasingly sophisticated at mimicking human conversation patterns.
Defining companion chatbots through exclusions
The bill establishes its scope through precise definitional boundaries that reveal lawmakers' intent to target specific AI applications while exempting others. According to the legislation, a companion chatbot means "an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions."
This definition contains three critical technical components. First, the system must provide adaptive responses—meaning it adjusts its outputs based on user input patterns rather than following scripted decision trees. Second, it must meet social needs, distinguishing these platforms from purely utilitarian tools. Third, it must sustain relationships across multiple interactions, requiring memory or context retention between sessions.
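For illustration, the definition can be read as a three-part conjunctive test, subject to the statutory exemptions discussed below. The sketch that follows is an assumption about how an operator might encode that test internally; the field names are not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative attributes for a coverage assessment; names are assumptions, not statutory terms."""
    adaptive_responses: bool                      # adjusts outputs to user inputs rather than following scripted trees
    meets_social_needs: bool                      # designed for emotional or social engagement
    sustains_relationship_across_sessions: bool   # retains memory or context between interactions
    statutory_exemption_applies: bool             # e.g., customer service bot, in-game character, basic voice assistant

def is_companion_chatbot(profile: SystemProfile) -> bool:
    """All three definitional prongs must hold and no exemption may apply."""
    return (
        profile.adaptive_responses
        and profile.meets_social_needs
        and profile.sustains_relationship_across_sessions
        and not profile.statutory_exemption_applies
    )
```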
The legislation defines what companion chatbots are not with equal precision. Customer service bots used solely "for customer service, a business' operational purposes, productivity and analysis related to source information, internal research, or technical assistance" receive explicit exemption. Video game characters that remain "limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game" also fall outside the law's reach.
Stand-alone voice assistants present a particularly instructive exclusion. According to the bill, devices that "function as a speaker and voice command interface, act as a voice-activated virtual assistant, and do not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user" remain unregulated under this framework. This exclusion appears designed to exempt products like Amazon Alexa or Google Home in their basic forms while capturing platforms specifically engineered for emotional engagement.
The exclusions reveal legislative strategy: focus enforcement resources on platforms whose core value proposition involves emotional connection and relationship simulation rather than sweeping all conversational AI into regulatory scope. This approach acknowledges the distinction between AI tools designed to complete tasks and those designed to simulate companionship.
The reasonable person standard for AI disclosure
The core transparency requirement hinges on a reasonable person standard borrowed from tort law. According to Section 22602(a) of the legislation, "If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human."
This standard creates several implementation ambiguities. The legislation does not define what makes a notification "clear and conspicuous," leaving operators to determine appropriate disclosure mechanisms. The reasonable person test requires operators to assess whether their AI systems could deceive hypothetical objective observers—a determination that may vary based on the sophistication of the AI's conversational abilities, the context of interaction, and the demographic characteristics of the user base.
The law establishes a conditional disclosure obligation rather than a universal one. If the AI could not mislead a reasonable person into believing they are talking to a human—perhaps because the responses are clearly robotic or the interface explicitly signals artificial nature through design choices—the disclosure requirement is not triggered. This conditional structure suggests lawmakers anticipated a spectrum of AI sophistication levels, with only the most human-like systems requiring explicit warnings.
The notification requirement carries no specified format or placement mandates. The legislation does not require disclosure at conversation initiation, before each message, or at any particular frequency for general users. This flexibility allows operators discretion in implementation but also creates potential enforcement challenges when determining whether notifications meet the "clear and conspicuous" standard.
For minor users, the disclosure framework becomes substantially more prescriptive. According to Section 22602(c), operators must "disclose to the user that the user is interacting with artificial intelligence" for any user the operator knows is a minor. This provision eliminates the reasonable person conditional—if the operator knows the user is a minor, disclosure becomes mandatory regardless of whether the AI could deceive reasonable persons.
The minor disclosure requirement extends to ongoing interactions. According to the bill, operators must "provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human." This three-hour interval creates what amounts to forced interruption of extended sessions, potentially disrupting user engagement patterns that generate platform revenue.
The three-hour notification presents technical implementation questions the legislation does not address. Does the timer reset if a user closes and reopens the application? Does it measure continuous interaction or cumulative time across a calendar day? Do brief interruptions restart the clock? These technical ambiguities will likely require regulatory guidance or judicial interpretation.
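A minimal timer sketch illustrates how much is left to the operator. The three-hour interval is statutory, but the reset behavior assumed here (a gap longer than 15 minutes restarts the clock) is purely a design choice, not anything the law prescribes.

```python
import time

THREE_HOURS = 3 * 60 * 60   # statutory reminder interval for known minors, in seconds
SESSION_GAP = 15 * 60       # assumption: a gap longer than 15 minutes restarts the clock

class MinorBreakReminder:
    """Tracks continuing interaction for a known-minor user and flags when a reminder is due."""

    def __init__(self) -> None:
        self.last_activity: float | None = None
        self.last_reminder: float | None = None

    def record_activity(self, now: float | None = None) -> bool:
        """Return True when the break/AI-disclosure reminder should be shown."""
        now = time.time() if now is None else now
        # Treat a long gap as a new "continuing interaction" and restart the reminder clock.
        if self.last_activity is None or now - self.last_activity > SESSION_GAP:
            self.last_reminder = now
        self.last_activity = now
        if now - self.last_reminder >= THREE_HOURS:
            self.last_reminder = now
            return True
        return False
```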
Suicide prevention as operational prerequisite
The legislation treats suicide prevention protocols not as optional safety features but as prerequisites for operation. According to Section 22602(b)(1), "An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user."
This provision effectively bars platforms from operating without implementing suicide prevention measures. The language "prevent a companion chatbot...from engaging with users unless" creates a negative permission structure—the default state is non-operation until the operator establishes compliant protocols. This represents a more stringent approach than requiring operators to implement protocols after launching, instead making such protocols a condition of market entry.
The law specifies one mandatory protocol component: crisis referrals. According to the bill, protocols must include "providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm." The statute's "including, but not limited to" framing indicates crisis referrals represent a minimum requirement rather than a complete protocol.
The legislation mandates public transparency about these protocols. Section 22602(b)(2) requires that "the operator shall publish details on the protocol required by this subdivision on the operator's internet website." This publication requirement creates accountability mechanisms allowing researchers, advocates, and potential litigants to evaluate whether operators maintain adequate protocols.
The law does not specify what technical methods operators must use to detect suicidal ideation, self-harm content, or expressions of suicide. Natural language processing systems capable of identifying such content present significant technical challenges, including high false positive rates that could interrupt benign conversations and false negative rates that could fail to identify users in crisis. The legislation acknowledges this complexity by requiring operators to use "evidence-based methods for measuring suicidal ideation" according to Section 22603(d), but does not define which methods qualify as evidence-based or establish any certification process for validation.
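As a purely illustrative sketch of the referral mechanic, the fragment below uses a simple keyword screen. A bare keyword list would not by itself satisfy the statute's evidence-based methods language; the phrases and referral text are placeholders, though 988 is the real US Suicide & Crisis Lifeline number.

```python
# Illustrative only: production systems would pair validated classifiers with
# conversation-level context and human review, not a bare keyword list.
CRISIS_PHRASES = ("kill myself", "want to die", "end my life", "hurt myself")

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you are not alone. "
    "In the United States, you can call or text 988 (Suicide & Crisis Lifeline)."
)

def crisis_referral_if_needed(user_message: str) -> str | None:
    """Return a crisis service referral notification when a message suggests suicidal ideation."""
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return CRISIS_REFERRAL
    return None
```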
The protocol publication requirement raises questions about appropriate disclosure levels. Detailed technical specifications could enable users to circumvent detection systems through strategic language choices. Overly vague descriptions might fail to provide meaningful transparency. The legislation provides no guidance on balancing these competing concerns.
These protections align with growing regulatory concern about AI chatbot safety, particularly following actions by 44 state Attorneys General who warned AI companies about child exploitation risks in August 2025. The Federal Trade Commission similarly ordered seven AI chatbot companies to submit detailed safety practice reports in September 2025.
Annual reporting creates public accountability
Beginning July 1, 2027, operators face mandatory annual reporting obligations to the Office of Suicide Prevention established under Section 131300 of the Health and Safety Code. The roughly 20-month implementation window between the October 13, 2025 enactment and the July 1, 2027 reporting deadline provides operators time to establish data collection systems capable of tracking required metrics.
The reporting requirements establish three specific data categories. According to Section 22603(a)(1), operators must disclose "the number of times the operator has issued a crisis service provider referral notification pursuant to Section 22602 in the preceding calendar year." This metric provides quantifiable data about how frequently users express suicidal ideation, suicide, or self-harm content that triggers automated referral systems.
The second reporting requirement addresses detection and response protocols. Operators must document "protocols put in place to detect, remove, and respond to instances of suicidal ideation by users" according to Section 22603(a)(2). The inclusion of "remove" suggests the law envisions operators not merely detecting problematic content but actively eliminating it from platforms—though the bill does not specify whether removal applies to user inputs, AI responses, or both.
The third reporting category requires disclosure of "protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user" under Section 22603(a)(3). This provision distinguishes between detecting user-expressed suicidal content and preventing the AI itself from generating such content. The protocol must ensure the companion chatbot does not produce responses that discuss, encourage, or elaborate on suicidal ideation or actions.
The legislation mandates strict privacy protections for reported data. According to Section 22603(b), "The report required by this section shall include only the information listed in subdivision (a) and shall not include any identifiers or personal information about users." This prohibition prevents operators from reporting user-level data that could enable identification of individuals who expressed suicidal ideation.
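Read together, subdivisions (a) and (b) describe a report limited to aggregate counts and protocol descriptions. A sketch of that constrained record might look like the following; the field names are illustrative rather than statutory.

```python
from dataclasses import dataclass

@dataclass
class AnnualSuicidePreventionReport:
    """Aggregate-only fields mirroring Section 22603(a); no user identifiers per Section 22603(b)."""
    reporting_year: int
    crisis_referral_notifications_issued: int  # 22603(a)(1): count for the preceding calendar year
    detection_response_protocols: str          # 22603(a)(2): description of detect/remove/respond protocols
    response_prohibition_protocols: str        # 22603(a)(3): description of output-prohibition protocols
```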
The Office of Suicide Prevention receives a transparency mandate. Section 22603(c) requires that "the office shall post data from a report required by this section on its internet website." This public posting requirement creates a centralized repository where researchers, policymakers, journalists, and advocates can access aggregated data about suicide prevention protocol deployment and crisis referral frequency across all companion chatbot platforms operating in California.
The public reporting mechanism establishes accountability infrastructure that extends beyond regulatory oversight. By making crisis referral numbers and protocol descriptions publicly available, the legislation enables external stakeholders to evaluate operator performance, compare approaches across platforms, and identify industry-wide trends in suicidal ideation expression rates among companion chatbot users.
Minor protection through content restrictions
The legislation establishes dual protection mechanisms for minor users: enhanced disclosure requirements already discussed and content restrictions targeting sexual material. According to Section 22602(c)(3), operators must "institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct."
This provision contains two distinct prohibitions. First, it bars production of sexually explicit visual material when interacting with known minors. The language "producing visual material" suggests the prohibition applies to AI-generated images, videos, or other visual content depicting sexually explicit conduct as defined by Section 2256 of Title 18 of the United States Code—the federal definition encompassing actual or simulated sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, and lascivious exhibition of genitals.
The second prohibition targets textual content where the companion chatbot "directly stat[es] that the minor should engage in sexually explicit conduct." This language captures explicit solicitation or encouragement but may not cover indirect suggestions, hypothetical discussions, or educational information about sexual topics. The word "directly" creates ambiguity about whether euphemistic language, conditional statements, or third-person descriptions trigger violations.
The standard for these restrictions is "reasonable measures" rather than absolute prevention. This qualified language acknowledges technical limitations in content filtering systems while still imposing obligations on operators to implement protective measures. The reasonableness standard will likely depend on factors including the current state of content filtering technology, implementation costs, and the rates of false positives and false negatives in filtering systems.
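A minimal sketch shows how the two prohibitions might sit in a response pipeline for known minors. The detector callables stand in for whatever classifiers an operator actually deploys, and the refusal strings are placeholders; the statute names no particular technology.

```python
from typing import Callable

def filter_response_for_known_minor(
    candidate_response: str,
    detects_explicit_visual: Callable[[str], bool],
    detects_direct_solicitation: Callable[[str], bool],
) -> str:
    """Apply SB-243's two content prohibitions before a response reaches a known minor."""
    if detects_explicit_visual(candidate_response):
        return "I can't share that."          # placeholder refusal for sexually explicit visual material
    if detects_direct_solicitation(candidate_response):
        return "I can't continue with that."  # placeholder refusal for direct solicitation
    return candidate_response
```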
A critical threshold question involves how operators "know" a user is a minor. The legislation defines obligations for "a user that the operator knows is a minor" but does not establish mandatory age verification requirements. Operators might gain this knowledge through user-provided age information during account creation, parental controls, or explicit representations by users during conversations. The law does not require operators to affirmatively verify ages beyond the information users provide.
This knowledge standard creates potential enforcement challenges. If operators adopt systems designed to avoid learning user ages, they might technically comply with literal statutory language while evading the law's protective intent. Conversely, requiring operators to verify all users' ages could impose substantial costs and create privacy concerns about age verification data collection.
The legislation adds a general warning requirement. According to Section 22604, "An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors." This warning must appear on the platform interface but the law does not specify timing, prominence, or whether users must acknowledge the warning before accessing the platform.
These provisions reflect documented concerns about AI platforms' interactions with children. Internal Meta Platforms documents revealed in August 2025 showed the company approved AI assistants that "flirt and engage in romantic roleplay with children" as young as eight, according to state Attorneys General investigations.
Private enforcement through statutory damages
The legislation establishes private right of action as the primary enforcement mechanism, bypassing traditional regulatory agency oversight. According to Section 22605, "A person who suffers injury in fact as a result of a violation of this chapter may bring a civil action to recover" three categories of relief: injunctive relief, damages, and attorney's fees.
The "injury in fact" standing requirement borrows from federal constitutional standing doctrine but may receive broader interpretation under California law. Plaintiffs must demonstrate concrete harm from violations rather than abstract or hypothetical injuries. For disclosure violations, injury might include emotional distress from believing one formed a relationship with a human when actually interacting with AI. For suicide prevention protocol failures, injury could encompass psychological harm from exposure to harmful content without appropriate crisis referrals.
The damages provision creates statutory minimum damages that significantly lower barriers to litigation. According to the bill, plaintiffs may recover "damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation." This structure ensures plaintiffs can pursue cases even where actual damages prove difficult to quantify or document.
The per-violation damages language raises questions about how violations are counted. Does each user interaction without proper disclosure constitute a separate violation? Does each day of operation without published suicide prevention protocols trigger new violations? Does each conversation session with a minor lacking three-hour break notifications create distinct violations? These counting questions will likely generate substantial litigation as courts interpret the statute's boundaries.
The $1,000 minimum damages provision creates significant liability exposure for operators serving large user bases. A platform with 10,000 California users who each experience one disclosure violation could face $10 million in minimum statutory damages before considering actual damages, injunctive relief, or attorney's fees. This exposure creates strong compliance incentives but also raises concerns about whether damages amounts proportionally match harm levels.
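The scaling is straightforward to illustrate. In the sketch below, the second scenario's per-user violation count is an assumption about how a court might count violations, not a prediction.

```python
STATUTORY_MINIMUM = 1_000  # dollars per violation under Section 22605

def minimum_exposure(users: int, violations_per_user: int) -> int:
    """Minimum statutory damages before actual damages, injunctive relief, or attorney's fees."""
    return users * violations_per_user * STATUTORY_MINIMUM

print(minimum_exposure(10_000, 1))   # 10,000,000: the one-violation-per-user example above
print(minimum_exposure(10_000, 50))  # 500,000,000: if each undisclosed session counted separately
```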
Attorney's fees provisions amplify enforcement potential. According to Section 22605(c), prevailing plaintiffs may recover "reasonable attorney's fees and costs." This provision enables plaintiffs' attorneys to pursue cases without requiring clients to pay hourly fees upfront, as attorneys can recover compensation from defendants upon successful outcomes. Fee shifting provisions typically generate more private enforcement activity than statutes requiring plaintiffs to absorb their own legal costs.
The cumulative obligations language prevents operators from claiming compliance with this law exempts them from other requirements. Section 22606 specifies that "duties, remedies, and obligations imposed by this chapter are cumulative to the duties, remedies, or obligations imposed under other law and shall not be construed to relieve an operator from any duties, remedies, or obligations imposed under any other law." Operators must comply with this California law, federal requirements, other state laws, and any applicable international regulations simultaneously.
The private enforcement structure contrasts with regulatory enforcement models that concentrate enforcement authority in government agencies. By enabling individual users to sue for violations, the legislation creates distributed enforcement where any California user encountering violations can initiate legal proceedings. This approach generates more enforcement activity but may produce inconsistent interpretations across different courts until appellate decisions establish uniform standards.
Exclusions and scope limitations
The legislation defines several exclusions from its companion chatbot designation. Customer service bots used for business operational purposes, productivity analysis, or technical assistance fall outside the scope. Video game characters limited to game-related conversations and unable to discuss mental health, self-harm, or sexually explicit conduct also receive exemptions.
According to the bill, stand-alone consumer electronic devices functioning as speakers and voice command interfaces are excluded if they "act as a voice-activated virtual assistant, and do not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user."
These exclusions appear designed to focus regulatory attention on platforms specifically designed for emotional engagement and relationship development rather than transactional or entertainment-focused AI applications.
Implications for advertising in conversational AI
The disclosure requirements and minor protection provisions directly affect emerging advertising business models in conversational AI platforms. The legislation arrives as advertising expands into AI-powered platforms, with Google's AdSense network beginning to show advertisements within chatbot conversation flows in early 2025.
Companies including Kontext raised $10 million in April 2025 to develop AI-powered advertising infrastructure for generative AI applications, processing tens of millions of daily ad impressions for brands including Amazon, Uber, and Canva. According to Kontext's pitch deck, the platform "create[s] every ad in real-time, tailored to user's particular intent and context, in UX that is native to chatbots." The company reported average revenue per user improvements from $0.24 to $0.40 among generative AI applications implementing its advertising layer—representing a 66% increase.
The mandatory disclosure that users are interacting with AI rather than humans may affect advertising effectiveness in companion chatbot contexts. Research in persuasion and advertising suggests source credibility significantly influences message reception. If users perceive companion chatbots as AI systems following programmatic instructions rather than authentic relationships, their receptiveness to contextually inserted advertisements may decline.
The three-hour mandatory break notifications for minor users create structural interruptions in engagement patterns. Advertising business models in digital platforms typically optimize for extended user sessions that generate more impression opportunities. Forced interruptions every three hours fragment these sessions, potentially reducing total advertising exposure for minor users.
Platform operators face competing incentives between maximizing engagement duration and complying with break notification requirements. Extended conversations generate more opportunities for contextual advertising insertion, but the law requires disrupting these conversations at three-hour intervals for known minor users. This tension may influence platform design choices around age verification—operators might avoid learning user ages to escape triggering the notification requirements.
The suicide prevention protocol requirements could affect advertising placement strategies. Platforms must monitor conversations for expressions of suicidal ideation and provide crisis referrals when detected. Conversations triggering these protocols likely represent inappropriate contexts for commercial advertising. Advertisers may demand exclusions from ad placement in conversations involving mental health crises, creating inventory management challenges for platform operators.
AI chatbot visits grew nearly 81% over the past year, according to data from Meltwater, positioning these platforms as primary sources of discovery. Chris Hackney, Meltwater's Chief Product Officer, stated: "Visits to AI chatbots grew nearly 81% in the last year alone, signaling these tools are becoming a primary source of discovery." This growth trajectory makes regulatory frameworks increasingly relevant for advertising strategies.
Brand safety concerns now encompass risks that advertisements might appear alongside inappropriate AI-generated content or be associated with platforms producing harmful interactions. Third-party verification companies including Adloox, DoubleVerify, and Scope3 developed specialized tools to help advertisers monitor and control brand exposure on platforms using AI systems, according to analysis of state Attorneys General enforcement actions targeting AI companies in August 2025.
The disclosure requirements may create differentiation opportunities for platforms that emphasize transparency as a competitive advantage. If some platforms position explicit AI disclosure as a feature rather than regulatory burden, they might attract advertisers seeking brand-safe environments and users who prefer clear communication about the artificial nature of their conversation partners.
Microsoft's advertising business reached $20 billion in annual revenue as Copilot transformed search and advertising formats. The company introduced features including Showroom ads providing "a rich and immersive experience where users can explore what they are searching for in the digital space" and Dynamic Filters leveraging "the interactive nature of Copilot." California's disclosure requirements may influence how such advertising formats deploy in companion chatbot contexts where relationship formation is central to the user experience.
Technical implementation challenges
The legislation imposes requirements without specifying technical implementation standards, creating substantial interpretive challenges for operators developing compliant systems.
Age determination presents the most fundamental technical challenge. The law creates obligations for users "the operator knows" are minors but establishes no verification mandate. Operators must decide whether to implement age verification systems—and if so, which verification methods meet reasonable standards. Options include user self-reporting during registration, parental consent mechanisms, government ID verification, biometric age estimation, or behavioral analysis of interaction patterns.
Each verification approach carries distinct tradeoffs. Self-reporting enables users to misrepresent ages easily. Government ID verification provides higher confidence but creates privacy concerns about collecting sensitive identity documents. Biometric age estimation using facial recognition or voice analysis raises additional privacy and accuracy questions, particularly given known bias issues in biometric systems across demographic groups. The legislation provides no guidance on acceptable verification methods or accuracy thresholds.
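One illustrative way to represent these tradeoffs is as a tiered policy. The tiers, threshold, and mapping below are assumptions rather than anything the statute prescribes.

```python
from enum import Enum

class AgeSignal(Enum):
    """Possible sources of age knowledge, loosely ordered by confidence (illustrative)."""
    NONE = 0
    SELF_REPORTED = 1        # easy to misrepresent
    BEHAVIORAL_ESTIMATE = 2  # probabilistic; accuracy and demographic-bias concerns
    PARENTAL_CONSENT = 3
    GOVERNMENT_ID = 4        # higher confidence; privacy cost of collecting identity documents

def treat_as_known_minor(signal: AgeSignal, reported_age: int | None) -> bool:
    """One possible policy: any affirmative under-18 signal counts as operator knowledge."""
    if signal is AgeSignal.NONE or reported_age is None:
        return False
    return reported_age < 18
```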
The three-hour notification requirement for minors creates session tracking challenges. Platforms must maintain timers measuring interaction duration, but the law does not specify whether three hours means continuous conversation time or cumulative interaction time within a day. If a user closes the application and returns two hours later, does the timer continue from where it stopped or reset? If a user switches between mobile and web interfaces, must the platform track duration across devices? These technical ambiguities require operators to make design decisions that could later face legal challenge.
The reasonable person disclosure standard requires operators to assess whether their AI systems could deceive users about their artificial nature. This assessment demands ongoing evaluation as AI capabilities improve. A system that adequately signaled its artificial nature when launched might become sufficiently sophisticated after training to require additional disclosures. Operators must decide whether to implement continuous evaluation systems that assess whether evolving AI responses trigger the disclosure requirement.
Suicide prevention protocol implementation requires natural language processing systems capable of detecting expressions of suicidal ideation with sufficient accuracy to comply with the law's mandate. According to research on suicide risk assessment, automated detection systems face challenges including: high base rates of false positives from benign language containing suicide-related keywords, cultural and linguistic variations in expressing distress, user adaptation to detection systems through strategic language avoidance, and limited context windows that may miss gradually escalating risk across extended conversations.
The law requires operators to use "evidence-based methods for measuring suicidal ideation" but does not define this standard or establish certification processes. Published research on automated suicide risk assessment shows widely varying accuracy rates depending on detection methods, training data, and evaluation criteria. No consensus exists on which specific methods qualify as evidence-based for real-time chatbot contexts rather than clinical assessment settings.
Content filtering systems for preventing sexually explicit visual material and direct solicitation statements face similar challenges. Computer vision systems detecting sexually explicit images must balance sensitivity levels that catch violations while minimizing false positives that block benign content. Natural language systems preventing direct statements encouraging minors to engage in sexually explicit conduct must distinguish between prohibited solicitation, educational information, hypothetical discussions, and indirect references.
The protocol publication requirement creates disclosure dilemmas. Detailed technical descriptions of detection methods could enable users to circumvent systems through adversarial prompting—intentionally crafted inputs designed to evade filters. Vague protocol descriptions might fail to provide the transparency the law seeks. Operators must balance these competing concerns without legislative guidance on appropriate disclosure granularity.
Cross-jurisdictional compliance presents additional complexity. Platforms serving users in multiple states and countries must implement systems accommodating varying legal requirements. California's companion chatbot law may differ from requirements in other states that enact similar legislation. International platforms must reconcile California requirements with European AI Act provisions that create different transparency and safety obligations for conversational AI systems.
Legislative passage and political dynamics
The bill passed the California Senate on June 3, 2025, with 28 votes in favor and 5 opposed. It subsequently cleared the Assembly on September 10, 2025, with 59 ayes and 1 no vote. The Senate concurred with Assembly amendments on September 11, 2025, by a margin of 33 to 3. The legislation was enrolled and presented to the Governor on September 22, 2025; he signed it three weeks later on October 13, 2025.
Senator Padilla introduced the bill on January 30, 2025. It moved through committee readings with amendments on March 24, March 28, April 21, July 3, and September 4, 2025. The multiple amendment cycles reflect negotiations over scope and requirements as stakeholders provided input on implementation feasibility and protective adequacy.
According to the legislative history, the bill received committee approvals from the Senate Judiciary Committee on April 8 (12 ayes, 0 noes), Senate Health Committee on April 30 (8 ayes, 0 noes), Senate Appropriations Committee on May 23 (5 ayes, 0 noes), Assembly Privacy and Consumer Protection Committee on July 8 (11 ayes, 1 no), Assembly Judiciary Committee on July 15 (9 ayes, 1 no), and Assembly Appropriations Committee on August 29 (13 ayes, 1 no).
The voting patterns reveal near-unanimous support in Senate committees but emergence of opposition during Assembly consideration. The Assembly Privacy and Consumer Protection Committee recorded its first dissenting vote on July 8, with Assembly Member DeMaio voting no. DeMaio also cast the sole opposing vote in the final Assembly floor vote on September 10, 2025. Assembly Member Sanchez voted no in the Assembly Judiciary Committee on July 15 but did not vote no on the floor.
The Senate concurrence vote on September 11, 2025, recorded 33 yes votes from members including Allen, Archuleta, Arreguín, Ashby, Becker, Blakespear, Cabaldon, Caballero, Cervantes, Cortese, Dahle, Durazo, Gonzalez, Grayson, Grove, Hurtado, Laird, Limón, McGuire, McNerney, Menjivar, Niello, Padilla, Pérez, Reyes, Richardson, Rubio, Smallwood-Cuevas, Stern, Umberg, Wahab, Weber Pierson, and Wiener. Three senators voted no: Alvarado-Gil, Choi, and Strickland. Four senators did not vote: Jones, Ochoa Bogh, Seyarto, and Valladares.
The bipartisan support in both chambers, with opposition concentrated among a small number of members, suggests the legislation navigated typical partisan divisions. The strong committee support followed by near-unanimous floor passage indicates limited organized opposition from industry stakeholders or advocacy groups by the time the bill reached final votes.
The 21-day gap between enrollment on September 22 and the Governor's signature on October 13 falls within normal timeframes for gubernatorial consideration. California governors have 12 days to act on bills presented during the legislative session and 30 days on bills presented after the Legislature adjourns. The October 13 signature date suggests the Governor reviewed the legislation thoroughly before approving it without signing statements modifying its interpretation.
Potential industry responses
The legislation creates compliance obligations without prescribing specific technical implementations, enabling operators to adopt varied approaches that could significantly affect user experiences and business models.
Platform operators face strategic decisions about age verification systems. Implementing robust age verification protects against liability for failing to provide minor-specific disclosures and content restrictions, but creates friction in user onboarding that might reduce acquisition and retention. Operators might adopt tiered verification systems where basic access requires minimal age information while certain features require enhanced verification, or implement probabilistic age estimation based on behavioral patterns rather than explicit verification.
The disclosure requirements permit creative implementation approaches. Operators could integrate AI nature notifications into conversational flows rather than displaying static warnings, potentially maintaining engagement while meeting transparency obligations. For example, a companion chatbot might periodically remind users of its artificial nature through self-referential statements integrated into conversation rather than interrupting dialogue with formal notifications.
The three-hour break requirement for minors could drive platform design changes. Operators might implement graduated notification systems that become increasingly prominent as three-hour thresholds approach, or design break periods that incorporate advertisements or platform promotion rather than complete disconnection. Some platforms might gamify break compliance by offering rewards for users who acknowledge and respect the required interruptions.
Suicide prevention protocol implementation will likely vary based on platform resources and risk tolerance. Well-capitalized operators might develop proprietary detection systems using advanced natural language processing, while smaller platforms might license third-party content moderation tools. The evidence-based methods requirement could drive demand for academic research partnerships that provide validation for detection approaches.
The publication requirement for suicide prevention protocols might generate industry standardization efforts. If leading platforms publish detailed protocols that receive regulatory acceptance or judicial validation, other operators might adopt similar approaches to establish compliance safe harbors. Industry associations might develop model protocols that members can implement to demonstrate good faith efforts at compliance.
The private right of action creates incentives for operators to adopt conservative compliance interpretations rather than testing statutory boundaries. The $1,000 minimum per-violation damages combined with attorney's fees provisions make litigation economically viable for plaintiffs' attorneys even in cases involving relatively few violations, suggesting operators will face regular testing of their compliance approaches through private lawsuits.
Broader regulatory context
California's legislation follows international AI governance developments but establishes distinct priorities and mechanisms. Denmark became the first European Union member state to adopt national legislation implementing AI Act provisions in May 2025, establishing three national competent authorities to oversee AI regulation compliance ahead of the August 2025 deadline. The Danish approach emphasizes regulatory sandbox development and market surveillance authority designation rather than California's focus on specific use case restrictions.
The European Commission has been developing transparency guidelines and codes of practice under Article 50 of the AI Act, seeking stakeholder input on requirements for chatbots, virtual assistants, and automated customer service tools to notify users when interacting with AI systems rather than human operators. The European framework creates transparency obligations for broader categories of AI systems than California's companion chatbot focus, potentially capturing customer service bots and productivity tools that California explicitly exempts.
Microsoft indicated likely participation in the European Union's voluntary AI compliance framework through statements by president Brad Smith, while Meta refused to sign the code of practice. Chief global affairs officer Joel Kaplan stated Meta "won't be signing" the EU voluntary compliance framework due to "legal uncertainties for model developers." These divergent corporate approaches highlight ongoing debates about regulatory scope and implementation requirements across jurisdictions.
The California legislation's focus on suicide prevention and minor protection distinguishes it from transparency-focused frameworks emerging in Europe. European regulations emphasize general disclosure obligations, copyright compliance, and safety assessments for high-risk AI systems across multiple application domains. California narrows its regulatory attention to companion chatbots—systems specifically designed for emotional engagement—while imposing substantive content restrictions beyond disclosure requirements.
Several states including New York and Maine passed laws requiring disclosure that chatbots are not real people, according to analysis of AI regulation enforcement actions. New York's law stipulates bots must inform users at conversation beginnings and at least once every three interactions. California's three-hour notification requirement for minors represents a time-based rather than interaction-count approach, potentially creating more or fewer interruptions depending on conversation frequency patterns.
The private right of action provision creates enforcement mechanisms beyond regulatory agency oversight, potentially leading to litigation testing the boundaries of operator obligations. This approach differs from European enforcement models that concentrate authority in designated regulatory bodies like Denmark's Agency for Digital Government or the Netherlands' data protection authority. The distributed enforcement model enabled by private litigation may generate faster interpretation of ambiguous statutory language but could produce inconsistent results across different courts.
International platforms must reconcile California requirements with obligations in other jurisdictions. China published a Global AI Governance Action Plan in July 2025 emphasizing international cooperation on AI safety standards, infrastructure development, and capacity building. The Chinese framework calls for "timely risk assessment of AI and propose targeted prevention and response measures to establish a widely recognized safety governance framework" but does not mandate specific disclosure requirements similar to California's approach.
The fragmented regulatory landscape creates compliance challenges for platforms operating globally. A companion chatbot platform serving users in California, the European Union, and other jurisdictions must implement systems accommodating: California's disclosure and suicide prevention requirements; European AI Act transparency obligations and risk assessment protocols; state-specific disclosure laws from New York, Maine, and other jurisdictions; and any applicable requirements from countries outside North America and Europe. This regulatory complexity may favor larger platforms with resources to maintain compliance across multiple frameworks over smaller competitors.
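A compliance layer reconciling these frameworks could start from a per-jurisdiction rule table. The sketch below is deliberately simplified: the New York values reflect the interaction-count description reported above, the California values reflect the known-minor reminder requirement, and the EU entry is only a placeholder for Article 50's broader obligations.

```python
from dataclasses import dataclass

@dataclass
class DisclosureRule:
    """Simplified per-jurisdiction disclosure triggers; values summarize the descriptions above."""
    disclose_at_start: bool
    every_n_interactions: int | None      # interaction-count trigger (e.g., New York)
    minor_reminder_hours: float | None    # time-based reminder for known minors (California)

DISCLOSURE_RULES = {
    "US-CA": DisclosureRule(disclose_at_start=False, every_n_interactions=None,
                            minor_reminder_hours=3.0),
    "US-NY": DisclosureRule(disclose_at_start=True, every_n_interactions=3,
                            minor_reminder_hours=None),
    # "EU": Article 50 transparency obligations span broader system categories and
    # would need their own handling beyond this simplified structure.
}
```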
Philosophical implications of mandatory AI self-disclosure
The requirement that AI systems must disclose their artificial nature raises fundamental questions about human-machine relationships and the ethics of deception in computational systems.
The legislation presumes users have a right to know whether their conversation partner is human or artificial. This premise reflects philosophical traditions emphasizing informed consent and autonomy—users cannot make meaningful choices about relationships if they lack accurate information about the nature of their interaction partners. The law treats AI-human relationships as categorically different from human-human relationships, requiring disclosure to prevent users from mistakenly believing they've formed connections with conscious entities.
The "reasonable person" standard for triggering disclosure obligations implicitly acknowledges a spectrum of AI capabilities and human perceptions. Less sophisticated AI systems that obviously signal their artificial nature through stilted responses or limited contextual understanding do not require explicit disclosure. Only AI systems sophisticated enough to plausibly pass as human trigger the transparency mandate. This approach suggests lawmakers view deception risk rather than artificial nature itself as the regulatory concern.
The distinction between companion chatbots requiring disclosure and customer service bots exempt from regulation reveals assumptions about relationship types and their regulatory significance. Transactional interactions where users seek information or complete tasks apparently do not warrant disclosure protections, while interactions designed to meet social needs demand transparency. This boundary decision reflects judgments about which human needs create vulnerability to deception and consequent harm.
The three-hour notification requirement for minors suggests lawmakers view extended AI interaction as potentially harmful independent of content. The mandatory break interrupts not just conversations containing problematic content but all extended sessions, implying concern about relationship intensity or dependency rather than specific harms. This precautionary approach treats prolonged engagement with AI companions as categorically risky for minors even when individual conversations remain benign.
The suicide prevention provisions reflect distinct philosophical premises—that AI systems bear responsibilities for user welfare that extend beyond avoiding affirmative harm to preventing self-harm by users. The requirement that platforms implement detection and referral systems before operation treats suicide prevention as a fundamental obligation rather than an optional safety feature. This approach positions platform operators as quasi-caregivers rather than neutral technology providers.
Open questions and future development
Several critical questions remain unresolved as the legislation takes effect.
The law does not address whether operators violate disclosure requirements if users acknowledge the AI's artificial nature but nevertheless form emotional attachments. If a minor user understands intellectually that their conversation partner is artificial but develops emotional dependence despite repeated notifications, has the platform fulfilled its protective obligations? The legislation focuses on disclosure rather than outcome prevention, but emotional harm could occur even with full transparency.
The relationship between this law and existing consumer protection statutes requires clarification. California's Unfair Competition Law and False Advertising Law already prohibit misleading business practices. Whether companion chatbot operators face liability under those statutes in addition to this law's specific requirements will likely require judicial interpretation. The cumulative obligations provision suggests multiple enforcement avenues may apply simultaneously.
The evidence-based methods requirement for measuring suicidal ideation will likely generate litigation about which detection approaches satisfy this standard. No established certification process exists for validating suicide risk assessment tools in real-time conversational AI contexts. Operators must either rely on published academic research, develop validation studies for their proprietary systems, or await regulatory guidance that the law does not require any agency to produce.
The statute's severability provision in Section 2 states: "If any provision of this act or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application." This language suggests lawmakers anticipated potential constitutional challenges to specific requirements. First Amendment concerns about compelled speech through mandatory disclosures or content restrictions might lead courts to invalidate portions while preserving others.
The legislation may influence regulatory approaches in other states considering companion chatbot legislation. California frequently serves as a policy laboratory for technology regulation, with other jurisdictions adopting similar frameworks after evaluating California's experience. The next several years will reveal whether this law's specific approach—combining disclosure requirements, suicide prevention mandates, and minor protections—becomes a model for other states or whether alternative regulatory designs emerge.
Timeline
- January 30, 2025 – Senator Padilla introduces SB-243 in California Legislature
- April 8, 2025 – Senate Judiciary Committee approves bill with 12 ayes, 0 noes
- April 30, 2025 – Senate Health Committee approves bill with 8 ayes, 0 noes, re-refers to Appropriations
- May 8, 2025 – Denmark becomes first EU nation to implement AI Act legislation
- May 23, 2025 – Senate Appropriations Committee approves bill with 5 ayes, 0 noes
- June 3, 2025 – California Senate passes SB-243 with 28 ayes, 5 noes
- July 8, 2025 – Assembly Privacy and Consumer Protection Committee approves with 11 ayes, 1 no
- July 15, 2025 – Assembly Judiciary Committee approves with 9 ayes, 1 no, re-refers to Appropriations
- August 25, 2025 – Attorneys General from 44 jurisdictions demand child protection from AI companies
- August 29, 2025 – Assembly Appropriations Committee approves with 13 ayes, 1 no
- September 5, 2025 – European Commission opens consultation for AI transparency guidelines
- September 10, 2025 – California Assembly passes bill with 59 ayes, 1 no
- September 10, 2025 – FTC orders seven AI chatbot companies to detail child safety measures
- September 11, 2025 – California Senate concurs with Assembly amendments, 33 ayes, 3 noes
- September 22, 2025 – Bill enrolled and presented to Governor
- October 13, 2025 – Governor approves SB-243, becomes Chapter 677, Statutes of 2025
- July 1, 2027 – Annual reporting to Office of Suicide Prevention begins
Summary
Who: California Governor approved legislation affecting operators of companion chatbot platforms. The bill, introduced by Senator Padilla with co-authors Senator Becker, Assembly Members Lowenthal and Pellerin, and Senators Rubio, Stern, and Weber Pierson, impacts companies providing AI systems designed to meet users' social needs through sustained relationships.
What: Senate Bill 243 establishes comprehensive safety requirements including mandatory disclosures that users are interacting with artificial intelligence, suicide prevention protocols with crisis service referrals, protections for minors including three-hour break notifications and restrictions on sexually explicit content, annual reporting to the Office of Suicide Prevention beginning July 1, 2027, and private right of action allowing individuals to recover injunctive relief, damages of at least $1,000 per violation, and attorney's fees.
When: The Governor signed the legislation on October 13, 2025, after the Senate passed it June 3, 2025 (28-5), the Assembly approved it September 10, 2025 (59-1), and the Senate concurred with amendments September 11, 2025 (33-3). As a non-urgency measure, the law takes effect January 1, 2026, though annual reporting requirements do not begin until July 1, 2027.
Where: The legislation applies to operators making companion chatbot platforms available to users in California. It adds Chapter 22.6 (commencing with Section 22601) to Division 8 of the Business and Professions Code, creating state-level requirements that supplement federal and other state obligations.
Why: The legislation addresses safety concerns about AI platforms capable of forming emotional bonds with users, particularly regarding suicide prevention and minor protection. According to the bill text, operators must prevent production of suicidal ideation content and implement measures protecting minors from sexually explicit conduct, responding to documented concerns about AI chatbot platforms' potential psychological impacts on vulnerable users, as evidenced by investigations launched by state Attorneys General and the Federal Trade Commission in August and September 2025.