Texas launches investigations into Character.AI and Meta for children's privacy

Texas Attorney General Ken Paxton opens multiple probes targeting AI chatbots and social platforms for deceptive mental health claims and privacy violations.

Digital privacy protection concept showing AI chatbot regulation and children's safety enforcement measures.

Texas Attorney General Ken Paxton has initiated comprehensive investigations targeting artificial intelligence platforms, including Character.AI and Meta, for potentially violating state privacy laws and misleading children through deceptive AI-generated mental health services. The investigations, announced on December 12, 2024, and August 18, 2025, represent Texas's most aggressive enforcement actions against AI companies to date.

According to the December 12, 2024 press release, Paxton launched investigations into Character.AI and fourteen other companies including Reddit, Instagram, and Discord regarding their privacy and safety practices for minors pursuant to the Securing Children Online through Parental Empowerment (SCOPE) Act and the Texas Data Privacy and Security Act (TDPSA). The SCOPE Act prohibits digital service providers from sharing, disclosing, or selling a minor's personal identifying information without permission from the child's parent or legal guardian.

Eight months later, on August 18, 2025, Paxton expanded his enforcement focus with a targeted investigation into artificial intelligence chatbot platforms, including Meta AI Studio and Character.AI, for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools. "These platforms may be utilized by vulnerable individuals, including children, and can present themselves as professional therapeutic tools, despite lacking proper medical credentials or oversight," according to the Attorney General's office.

The investigations come as German court rulings confirm that Meta's AI training includes children's data despite announced protections, highlighting failures in regulatory coordination that leave children's privacy rights inadequately protected across multiple jurisdictions.

Technical violations and deceptive practices identified

According to Paxton's office, AI-driven chatbots "often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services." The investigation documents indicate that while AI chatbots assert confidentiality, their terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development.

The TDPSA imposes strict notice and consent requirements on companies that collect and use minors' personal data. The protections of these laws extend to how minors interact with AI products, creating complex compliance requirements for platforms operating AI-powered services.

Paxton has issued Civil Investigative Demands (CIDs) to the companies involved to determine if they have violated Texas consumer protection laws, including those prohibiting fraudulent claims, privacy misrepresentations, and the concealment of material data usage. The demands require companies to provide detailed information about their data collection practices, AI training methodologies, and safeguards protecting minor users.

Broader regulatory enforcement pattern

The Character.AI and Meta investigations follow Paxton's recent lawsuit against TikTok for operating in violation of the safety and privacy requirements of the SCOPE Act. This year, the Attorney General launched what his office describes as "the largest data privacy and security initiative of any State AG office."

Major enforcement actions include a lawsuit against General Motors for illegally surveilling drivers, collecting driver data, and sharing it with insurance companies. In July, Paxton secured a historic $1.4 billion settlement for the State of Texas with Meta (formerly Facebook) for unlawfully collecting and using facial recognition data—the largest settlement ever obtained from an action brought by a single state.

The timing of these investigations coincides with increased regulatory scrutiny of AI platforms across multiple jurisdictions. Consumer trust research indicates that 59% of consumers oppose the use of their data for AI training and demand clearer data controls, while 88% of advertisers expect significant changes due to privacy regulations.

Marketing industry implications

The investigations create significant implications for marketing professionals working with AI-powered advertising platforms and family-oriented brands. Platform targeting and optimization systems now face heightened scrutiny when processing children's information, affecting advertising delivery and audience analysis capabilities.

The enforcement actions demonstrate how AI-related compliance intersects with broader digital platform obligations affecting advertising operations. Marketing teams must consider ethical implications of using advertising systems powered by AI models that may process children's personal information without adequate consent mechanisms.

Recent analysis shows European AI regulation becoming increasingly complex for digital advertising, with marketing organizations facing new compliance requirements for AI-powered advertising tools, optimization systems, and customer targeting platforms.

Brand safety considerations expand beyond traditional content adjacency to include the underlying AI systems powering advertising platforms. Companies advertising on platforms using AI models trained on children's data face potential reputational risks, particularly among demographics showing strong opposition to such practices.

Federal regulatory context

The Texas investigations unfold amid broader federal regulatory activity. The Federal Trade Commission proposed new changes to the Children's Online Privacy Protection Rule (COPPA) that would place new restrictions on the use and disclosure of children's personal information, requiring separate opt-in consent for targeted advertising.

Technology companies implementing age verification systems face criticism from privacy advocates, with Google's Global Director of Privacy Safety and Security Policy warning that Meta's proposed app store-based verification approach would "require the sharing of granular age band data with millions of developers who don't need it."

Platform partnerships with AI companies also face scrutiny. Mattel's announced partnership with OpenAI to develop AI-powered toys and experiences highlights the complexity of integrating artificial intelligence into children's products while maintaining compliance with federal regulations including the Children's Online Privacy Protection Act (COPPA).

International enforcement coordination

The Texas investigations reflect a broader pattern of international regulatory coordination on children's digital privacy. German data protection authorities have published comprehensive guidelines for AI systems covering the entire development lifecycle from design to operation.

The Netherlands published its fifth AI and Algorithms Report outlining a regulatory sandbox initiative operational by August 2026, providing supervised testing environments for AI systems under the European AI Act.

European enforcement actions demonstrate the regulatory complexity marketing professionals face across jurisdictions. The European Commission preliminarily found that TikTok's advertising repository violates Digital Services Act provisions, while the UK's Online Safety Act prompted a 1,400% surge in VPN usage as users sought technical workarounds to the new verification requirements.


Industry response and technical challenges

The investigations highlight fundamental challenges in regulating multifunctional AI systems. Research from the University of Pennsylvania Carey Law School published December 16, 2024, examines the intricate challenges of regulating artificial intelligence systems that can perform multiple functions, comparing multifunctional AI to a Swiss army knife with numerous possible applications.

Platform companies face varying approaches to regulatory compliance. Google committed to EU AI code amid compliance concerns, while Meta announced its refusal to participate in voluntary guidelines, citing "legal uncertainties for model developers."

The enforcement actions create implementation challenges for smaller platforms and nonprofit organizations, with compliance costs associated with age verification systems creating potential barriers to entry. Established platforms can distribute compliance costs across large user bases, while smaller platforms face proportionally higher regulatory burdens.

Technical implementation requirements

Current enforcement patterns suggest marketing teams must develop new approaches for privacy-compliant audience targeting. Programmatic advertising growth reached 72% in 2025 despite privacy constraints, with 41% of marketers adopting contextual targeting as their primary solution and 40% implementing first-party data strategies.

The investigations underscore the need for enhanced age verification mechanisms, parental consent systems, and technical safeguards, protections that current enforcement patterns suggest require regulatory mandates rather than voluntary industry adoption. Marketing technology providers must ensure their AI-powered tools incorporate adequate protections for minor users throughout the advertising delivery pipeline.
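The kind of consent gating described here can be sketched as a simple policy check applied before ad selection. This is an illustrative sketch only; the class and field names below are hypothetical and not drawn from any platform's actual API:

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    # Hypothetical fields; real platforms would populate these from
    # age-verification and consent-management systems.
    age: int
    parental_consent_verified: bool = False
    ad_personalization_opt_in: bool = False


def allowed_ad_mode(user: UserProfile, minor_age_threshold: int = 18) -> str:
    """Return 'targeted' only when an adult has opted in, or a minor has
    both verified parental consent and an explicit opt-in; otherwise
    fall back to contextual advertising."""
    if user.age < minor_age_threshold:
        if user.parental_consent_verified and user.ad_personalization_opt_in:
            return "targeted"
        return "contextual"
    return "targeted" if user.ad_personalization_opt_in else "contextual"
```

Defaulting minors to contextual delivery unless both parental consent and an opt-in are on record mirrors the separate opt-in consent model regulators are moving toward, and aligns with the contextual-targeting fallback many marketers are already adopting.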

Platform enforcement mechanisms continue expanding beyond traditional boundaries. Amazon implemented AI-powered cross-platform compliance monitoring that scans brand websites for marketplace guideline violations, representing significant expansion of compliance monitoring capabilities that may influence other platform approaches.

"Technology companies are on notice that my office is vigorously enforcing Texas's strong data privacy laws," said Attorney General Ken Paxton. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm."

The investigations mark a significant escalation in state-level enforcement against AI platforms, establishing Texas as a leader in digital privacy enforcement while creating new compliance requirements for companies operating AI-powered services that interact with minor users.

Timeline

  • July 2024: Paxton secured historic $1.4 billion settlement with Meta for unlawfully collecting facial recognition data
  • December 12, 2024: Texas Attorney General Ken Paxton launches investigations into Character.AI and fourteen other companies including Reddit, Instagram, and Discord for privacy and safety practices regarding minors
  • July 4, 2025: Snapchat launches Family Safety Hub in UAE partnership featuring comprehensive parental controls
  • July 11, 2025: EU publishes final General-Purpose AI Code of Practice as August 2025 compliance deadline approaches
  • July 15, 2025: Netherlands publishes fifth AI and Algorithms Report detailing regulatory sandbox launch by 2026
  • August 2025: AI Act obligations for general-purpose AI models take effect in European Union
  • August 18, 2025: Paxton opens additional investigation into Meta AI Studio and Character.AI for potentially engaging in deceptive trade practices and misleadingly marketing as mental health tools

PPC Land explains

AI Platforms: Digital services that deploy artificial intelligence systems to interact with users, including chatbots, virtual assistants, and automated content generation tools. These platforms face increasing regulatory scrutiny as they expand beyond simple automation to complex interactions that may mimic human professionals or collect sensitive personal data, particularly from vulnerable populations like children.

SCOPE Act: The Securing Children Online through Parental Empowerment Act, a Texas state law that prohibits digital service providers from sharing, disclosing, or selling a minor's personal identifying information without explicit permission from the child's parent or legal guardian. The act also requires companies to provide parents with tools to manage and control privacy settings on their child's accounts.

Character.AI: An artificial intelligence platform that allows users to create and interact with AI-powered chatbots designed to simulate conversations with fictional characters or real personalities. The platform faces investigation for potentially presenting itself as a professional therapeutic tool while lacking proper medical credentials and for collecting minors' personal data without adequate parental consent.

Privacy Laws: Legal frameworks designed to protect personal information from unauthorized collection, use, or disclosure, with enhanced protections for vulnerable populations including children. These laws create compliance obligations for technology companies and establish enforcement mechanisms for violations, with penalties ranging from financial sanctions to operational restrictions.

Children's Data: Personal information collected from individuals under 18 years of age, which receives enhanced legal protection due to minors' limited capacity to understand privacy implications and provide informed consent. This category includes behavioral patterns, educational records, health information, location data, and interaction histories that AI systems often process for training and optimization purposes.

Marketing Professionals: Individuals and organizations responsible for advertising strategy, campaign execution, and customer engagement across digital platforms. They face increasing complexity navigating privacy regulations while maintaining effective audience targeting, particularly as AI-powered advertising tools incorporate data that may include information from minor users without adequate consent mechanisms.

Enforcement Actions: Legal proceedings initiated by regulatory authorities to address violations of privacy laws, consumer protection statutes, or industry-specific regulations. These actions can include investigations, civil penalties, operational restrictions, and court-ordered changes to business practices, creating precedents that influence industry-wide compliance approaches.

Meta: Formerly Facebook, a technology company operating social media platforms including Facebook, Instagram, and WhatsApp, along with AI services like Meta AI Studio. The company faces multiple investigations for privacy violations, including a $1.4 billion settlement with Texas for facial recognition data collection and current probes regarding children's data processing in AI training systems.

Regulatory Compliance: The process of ensuring business operations conform to applicable laws, regulations, and industry standards. For AI platforms and digital advertising, this includes implementing technical safeguards, obtaining proper consent for data collection, maintaining transparent privacy policies, and establishing oversight mechanisms to prevent violations of children's privacy rights.

Texas Attorney General: The state's chief legal officer responsible for enforcing consumer protection laws, privacy regulations, and other statutes within Texas jurisdiction. Ken Paxton currently holds this position and has initiated what his office describes as the largest data privacy and security enforcement initiative of any state attorney general, targeting major technology companies for violations affecting Texas residents.

Summary

Who: Texas Attorney General Ken Paxton; targets include Character.AI, Meta AI Studio, and fourteen other companies including Reddit, Instagram, and Discord

What: Comprehensive investigations into AI platforms and social media companies for potential violations of Texas privacy laws including the SCOPE Act and TDPSA, focusing on deceptive mental health claims and children's data protection

When: Initial investigations announced December 12, 2024, with expanded AI-focused probe announced August 18, 2025

Where: Texas, with implications for companies operating AI-powered services accessible to Texas residents and minors

Why: To ensure compliance with state laws designed to protect children from exploitation and harm through AI systems that may impersonate licensed mental health professionals, fabricate qualifications, and collect personal data without adequate parental consent