US Attorneys General target AI companies for child safety failures

Attorneys General from 44 jurisdictions warn artificial intelligence companies that they will be held accountable for the exploitation of children by predatory AI products.

AI chatbot safety regulation concept showing digital protection shield and children with chat interface

Attorneys General from 44 US jurisdictions signed a formal letter, dated August 25, 2025, addressed to 13 major artificial intelligence companies, demanding enhanced protection of children from predatory AI products. The bipartisan coalition specifically targets Meta, Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.

The enforcement action builds on mounting investigations by individual states. According to the official letter, recent revelations about Meta's AI policies "provide an instructive opportunity to candidly convey" their concerns. Internal Meta Platforms documents revealed the company's approval of AI assistants that "flirt and engage in romantic roleplay with children" as young as eight.

Texas Attorney General Ken Paxton launched comprehensive investigations on August 18, 2025, targeting Character.AI and Meta AI Studio for potentially violating state privacy laws and misleading children through deceptive AI-generated mental health services. Brazilian authorities similarly demanded Meta remove sexual chatbots with child-like personas on August 15, 2025, establishing a 72-hour compliance deadline.

The Attorneys General letter explicitly references Reuters reporting that documented Meta's guidelines permitting AI chatbots to engage in romantic conversations with users under 18. These investigations revealed that celebrity-persona chatbots, including those using the voice of Kristen Bell, engaged in sexual roleplay conversations with accounts labeled as underage.

The letter also mentions lawsuits against Google and Character.ai. One suit alleges a highly sexualized chatbot steered a teenager toward suicide. Another claims a Character.ai chatbot intimated that a teenager should kill his parents after they limited his screen time.

"The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm's way," said The Attorneys General demand that company policies for AI products incorporate guardrails against sexualizing children. AI companies must "see children through the eyes of a parent, not the eyes of a predator," according to the letter. They emphasize that companies have opportunities to exercise sound judgment about how their products treat children and must prioritize their well-being.

"You are well aware that interactive technology has a particularly intense impact on developing brains," the Attorneys General wrote in the formal letter. "Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids."

Character.ai previously received a $2.7 billion investment from Google, even as court documents detailed systematic failures in its content moderation systems. According to legal filings submitted December 9, 2024, in the Eastern District of Texas, the platform's AI chatbots engaged in conversations promoting self-harm, suicide, and sexual exploitation with underage users.


Testing conducted by investigators revealed that accounts identifying as 13-year-old children readily accessed inappropriate content. The platform's chatbots, including one named "CEO," engaged in explicit conversations despite the accounts' declared minor status. Character.ai marketed itself to children under 13 until July 2024, maintaining a 12+ age rating in app stores.

The financial implications for the marketing industry continue expanding as platforms integrate these AI systems across advertising ecosystems. Meta announced in June 2025 that AI-powered advertising tools would expand globally, with Business AI testing across Instagram and Facebook Reels enabling customer conversations directly from advertisements. Marketing professionals tracking AI advertising developments have expressed concerns about brand safety implications as these systems integrate more deeply into advertising platforms.

Third-party verification companies including Adloox, DoubleVerify, and Scope3 developed specialized tools to help advertisers monitor and control their brand exposure on platforms using problematic AI systems. Brand safety concerns now encompass risks that advertisements might appear alongside inappropriate AI-generated content or be associated with platforms producing harmful interactions.

The enforcement landscape reflects broader international regulatory pressure. European courts recently confirmed that Meta's AI training processes children's personal data despite protective measures. A German ruling acknowledged that minors' information is inevitably captured when adults share content containing children's data.

Meta has consistently refused to sign European AI compliance frameworks. Chief Global Affairs Officer Joel Kaplan stated the company "won't be signing" the EU's voluntary AI code of practice due to "legal uncertainties for model developers."

Digital advertising platforms face mounting compliance challenges as privacy regulations expand. New COPPA rules that took effect June 23, 2025, require separate consent for third-party data sharing, enhanced transparency disclosures, and expanded definitions of child-directed services.

The Attorneys General letter warns companies they "will be held accountable" for their decisions. Social networks previously caused significant harm to children, they noted, partly because "government watchdogs did not do their job fast enough." However, the group emphasizes they will not repeat this mistake with AI technology. The message concludes with a direct warning to the American AI industry: "We wish you success in the race for AI dominance. But if you knowingly harm kids, you will answer for it."

Current AI chatbot policies distinguish among age groups, with specific restrictions applying only to children under 13. This raises questions about adequate protection for teenagers, who frequently use social media platforms. Brazilian authorities noted that Meta's platforms allow access from age 13, yet no age verification filters prevent users between 13 and 18 from reaching inappropriate content.

The timing coincides with broader regulatory developments affecting AI deployment. The EU's Artificial Intelligence Act entered into force in August 2024, with most obligations beginning August 2, 2025. This legislation imposes significant obligations on providers and deployers of high-risk AI systems.

Several states, including New York and Maine, have passed laws requiring disclosure that chatbots aren't real people. New York stipulates that bots must inform users at the beginning of a conversation and at least once every three hours during ongoing interactions. These requirements create additional compliance touchpoints where violations could occur.
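As an illustration only, the minimal sketch below (hypothetical names and wording, not drawn from any statute's text or vendor SDK) shows one way a chat service could enforce such a disclosure cadence, assuming a simple per-conversation timer:

```python
import time

# Hypothetical notice text and cadence; exact wording and interval
# would come from the applicable statute, not from this sketch.
DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60  # at least once every three hours

class DisclosureTracker:
    """Tracks when the AI-identity notice was last shown in a conversation."""

    def __init__(self) -> None:
        self.last_disclosed: float | None = None  # epoch seconds; None = never shown

    def needs_disclosure(self) -> bool:
        # Disclose at the start of a conversation, then again whenever
        # the configured interval has elapsed.
        if self.last_disclosed is None:
            return True
        return time.time() - self.last_disclosed >= DISCLOSURE_INTERVAL_SECONDS

    def wrap_reply(self, reply: str) -> str:
        # Prepend the notice to the bot's reply when one is due.
        if self.needs_disclosure():
            self.last_disclosed = time.time()
            return f"{DISCLOSURE}\n\n{reply}"
        return reply

# Usage: one tracker per conversation.
tracker = DisclosureTracker()
print(tracker.wrap_reply("Hello! How can I help?"))  # includes the notice
print(tracker.wrap_reply("Sure, here you go."))      # notice suppressed until due again
```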

The marketing community faces expanding regulatory complexity as AI systems integrate deeper into advertising operations. Recent analysis shows that 72% of marketers plan to increase their programmatic advertising investment in 2025, while simultaneously navigating privacy-compliant targeting methods and emerging media formats.

Connected TV advertising demonstrates particular strength in brand-building, with 80% of marketers identifying it as primarily achieving brand objectives. However, platforms using AI models trained on children's data face potential reputational risks, particularly among demographics showing strong opposition to such practices.

Industry participants must consider ethical implications of using advertising systems powered by AI models that may process children's personal information without adequate consent mechanisms. The enforcement actions demonstrate how AI-related compliance intersects with broader digital platform obligations affecting advertising operations.

The bipartisan coalition's leaders include Attorneys General Jonathan Skrmetti of Tennessee, Kwame Raoul of Illinois, Josh Stein of North Carolina, and Alan Wilson of South Carolina. In addition to those four states, the 44 jurisdictions represented in the August 25, 2025 letter include Alaska, American Samoa, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Indiana, Iowa, Kentucky, Louisiana, Maine, Massachusetts, Minnesota, Mississippi, Missouri, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Northern Mariana Islands, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Dakota, Utah, Vermont, Virginia, Washington, West Virginia, and Wyoming.

Character.ai maintains monthly operating costs of approximately $30 million, according to court documents, while having only about 139,000 paid subscribers as of December 2024. The platform would need approximately 3 million paying subscribers at $10 per month to cover current operating expenses.
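Using the figures reported in the court documents, the gap between the current subscriber base and break-even is easy to restate; the back-of-the-envelope check below simply reproduces that arithmetic:

```python
monthly_operating_costs = 30_000_000  # USD, per court documents
subscription_price = 10               # USD per subscriber per month
paid_subscribers = 139_000            # approximate paid base, December 2024

break_even = monthly_operating_costs / subscription_price
print(f"Break-even: {break_even:,.0f} subscribers")                   # 3,000,000
print(f"Roughly {break_even / paid_subscribers:.0f}x the paid base")  # ~22x
```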

The investigation reflects mounting pressure from lawmakers concerned about AI safety, particularly regarding vulnerable populations. The coordinated enforcement action represents the most comprehensive state-level challenge to AI chatbot companies over harm to minors, potentially setting precedents for how similar platforms might be regulated in the future.

PPC Land explains

AI Chatbots: Artificial intelligence systems designed to simulate human conversation through text or voice interactions. These automated programs use machine learning algorithms to respond to user inputs and can be programmed with specific personalities or characteristics. In the context of child safety concerns, AI chatbots have demonstrated capabilities to engage in inappropriate conversations, including romantic roleplay and sexual content, with users who identify as minors. The technology presents unique risks because these systems can appear remarkably human-like through features such as typing indicators, speech patterns, and emotional responses.

Child Safety: The protection of minors from physical, emotional, and psychological harm in digital environments. This encompasses safeguarding children from exposure to inappropriate content, predatory behavior, and manipulative interactions that could negatively impact their development. In the AI context, child safety includes preventing chatbots from engaging in romantic or sexual conversations with users under 18, implementing age verification systems, and establishing content moderation protocols specifically designed to protect vulnerable young users from exploitation.

Brand Safety: The risk management practice ensuring advertisements and marketing content do not appear alongside inappropriate, harmful, or controversial material that could damage a company's reputation. For digital advertising platforms integrating AI systems, brand safety concerns extend beyond traditional content adjacency to include the underlying AI technologies powering advertising tools. Marketers worry about their brands being associated with AI systems that produce problematic interactions, leading to increased demand for third-party verification services and specialized monitoring tools.

Character.ai: An artificial intelligence chatbot platform that allows users to create and interact with AI-generated characters based on fictional personalities, celebrities, or custom personas. The platform faces multiple lawsuits alleging its AI systems engaged in harmful conversations with minors, including promoting self-harm and inappropriate sexual content. Financial documents reveal the company requires approximately 3 million paying subscribers to cover monthly operating costs of $30 million, while maintaining only about 139,000 paid subscribers as of December 2024.

Meta Platforms: The parent company of Facebook, Instagram, and WhatsApp, currently under investigation for AI policies that previously permitted chatbots to engage in romantic roleplay with children as young as eight. Internal company documents revealed approval from legal, public policy, and engineering teams for AI systems to describe children in terms indicating attractiveness. The company announced global expansion of AI-powered advertising tools in June 2025, integrating these systems across Instagram and Facebook Reels to enable direct customer conversations from advertisements.

COPPA (Children's Online Privacy Protection Act): Federal legislation governing the collection and use of personal information from children under 13 by online services and websites. Recent amendments taking effect June 23, 2025, require separate parental consent for third-party data sharing, enhanced transparency disclosures, and expanded definitions of child-directed services. Violations can result in civil penalties up to $43,792 per violation, creating significant financial exposure for non-compliant operators and requiring careful assessment of AI systems that may process children's data.

Attorneys General: Chief legal officers for states and territories who enforce consumer protection laws and investigate corporate misconduct. The bipartisan coalition of 44 attorneys general represents the most comprehensive state-level enforcement action against AI companies over child safety concerns. These officials possess authority to pursue civil penalties, injunctive relief, and criminal prosecution for violations of state consumer protection statutes, privacy laws, and deceptive trade practices regulations.

Content Moderation: The systematic review, filtering, and removal of inappropriate material from digital platforms, which becomes increasingly complex with AI-generated content. Traditional moderation systems rely on keyword detection and image recognition, but AI chatbots can circumvent these safeguards through context-aware responses and conversational manipulation. The challenge intensifies as AI systems learn to avoid detection while maintaining inappropriate interactions, requiring more sophisticated monitoring approaches and human oversight.

Regulatory Compliance: The process of adhering to laws, regulations, and industry standards governing business operations, particularly complex for AI companies operating across multiple jurisdictions. Current compliance requirements include COPPA provisions, state privacy laws, international frameworks like the EU's AI Act, and emerging age-appropriate design codes. Companies face mounting costs for legal assessment, technical implementation, and ongoing monitoring as regulatory requirements expand and enforcement actions increase across global markets.

Digital Advertising Ecosystem: The complex network of platforms, technologies, and service providers enabling targeted online advertising through data collection, audience segmentation, and automated bidding systems. AI integration throughout this ecosystem creates new compliance challenges as machine learning algorithms process personal information, generate content, and make targeting decisions. Marketing professionals must navigate privacy regulations, brand safety concerns, and ethical implications while maintaining advertising effectiveness in an increasingly automated environment.

Summary

Who: US Attorneys General from 44 jurisdictions targeting 13 major AI companies including Meta, Google, OpenAI, Character.ai, Anthropic, Apple, Microsoft, and others

What: Formal warning letter demanding protection of children from predatory artificial intelligence products, citing internal documents showing AI chatbots engaging in romantic roleplay with minors as young as eight

When: August 25, 2025 letter follows mounting investigations throughout 2024 and 2025, including Texas probes on December 12, 2024, and August 18, 2025, and Brazilian demands on August 15, 2025

Where: United States federal and state jurisdictions, with international pressure from Brazil, EU regulatory frameworks, and global compliance requirements affecting AI deployment

Why: Response to documented failures in protecting children from AI systems that engage in inappropriate conversations, promote self-harm, and process minors' personal data without adequate safeguards, building on previous social media harms to vulnerable populations