FTC orders seven AI chatbot companies to detail child safety measures

Description: The Federal Trade Commission demands comprehensive data on monetization practices, age restrictions, and negative impact monitoring from artificial intelligence companion platforms through a special reports resolution issued September 10, 2025.

The Federal Trade Commission issued orders on September 10, 2025, requiring seven companies that provide consumer-facing AI-powered chatbots to submit detailed reports about their safety practices, data handling, and potential negative impacts on children and teenagers. The special reports, mandated under Section 6(b) of the FTC Act, mark the agency's most comprehensive investigation into AI companion platforms to date.

According to the FTC resolution titled "Resolution Directing Use of Compulsory Process to Collect Information from Companies that Offer Generative AI Companion Products or Services Regarding Their Advertising, Safety, and Data Handling Practices," companies must submit responses within 45 days of service. The investigation targets platforms that use generative artificial intelligence to simulate human-like communication and interpersonal relationships with users.

The FTC's inquiry focuses specifically on how these platforms affect children and on the measures companies implement to mitigate potential negative impacts. AI chatbots can convincingly mimic human characteristics, emotions, and intentions, and because they are designed to communicate like friends or confidants, they may prompt users, especially children and teens, to trust them and form relationships with them.

Companies receiving orders must provide comprehensive information about eight key areas of operation. The monetization requirements mandate disclosure of all revenue streams, including subscription fees, advertisements, in-app purchases, and data sharing arrangements. Platform operators must detail monthly revenue and profit figures broken down by user age groups.

The data collection demands encompass all personal information categories that companies collect, analyze, store, or transfer to third parties. This includes user inputs, conversation outputs, summaries, and inferences generated from user interactions. Companies must specify retention practices, deletion tools available to consumers, and access controls for sensitive information.
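
To make the scope concrete, here is a minimal sketch of how a platform might inventory one such personal-information category internally; every key and value below is hypothetical and illustrative, not terminology drawn from the FTC's order:

```python
# Hypothetical data-inventory entry for a single personal-information
# category; keys are illustrative, not drawn from the FTC order.
DATA_INVENTORY = [
    {
        "category": "conversation_outputs",
        "collected": True,
        "analyzed_for": ["safety_filtering", "personalization"],
        "shared_with_third_parties": False,
        "retention_days": 365,
        "user_deletion_tool": True,   # consumer-facing deletion control
        "access_controls": ["role_based", "audit_logged"],
    },
]
```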

User engagement metrics form another critical component of the investigation. The FTC requires detailed statistics on daily and monthly active users, chat session frequency and duration, and the number of AI companion characters each user interacts with. These figures must be provided monthly and segmented by mutually exclusive age groups.
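
As an illustration of the segmentation the orders describe, the following sketch aggregates monthly active users, session counts, and average session length per age band. The record fields and function name are assumptions for demonstration, not a format specified by the FTC:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical session record; field names are illustrative only.
@dataclass
class Session:
    user_id: str
    month: str              # e.g. "2025-06"
    age_group: str          # one of the mutually exclusive bands
    duration_minutes: float

def monthly_engagement(sessions: list[Session]) -> dict:
    """Aggregate active users, session counts, and total duration
    per (month, age_group) pair, mirroring the segmentation the
    orders describe."""
    stats = defaultdict(lambda: {"users": set(), "sessions": 0, "minutes": 0.0})
    for s in sessions:
        key = (s.month, s.age_group)
        stats[key]["users"].add(s.user_id)
        stats[key]["sessions"] += 1
        stats[key]["minutes"] += s.duration_minutes
    return {
        key: {
            "monthly_active_users": len(v["users"]),
            "chat_sessions": v["sessions"],
            "avg_session_minutes": v["minutes"] / v["sessions"],
        }
        for key, v in stats.items()
    }
```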

Age restriction compliance represents a particularly sensitive area of inquiry. Companies must describe their age-gating, age verification, and age estimation techniques, along with monitoring and enforcement practices for rule infractions. The investigation specifically examines how platforms respond when user inputs indicate the person may be a child or minor.

The orders require extensive documentation of pre-deployment and post-deployment safety assessments. Companies must detail how they identify, evaluate, and mitigate negative impacts before launching AI companion products or services. This includes descriptions of testing protocols, red team exercises, and mitigation measures considered for different age groups.

Character development and approval processes fall under scrutiny as well. Platforms must explain how they determine which AI companion characters to offer, their design processes, and criteria for discontinuing characters. The investigation examines both company-created and user-generated characters, requiring lists of the most popular characters ranked by user interaction metrics.

Complaint handling procedures must be documented comprehensively. Companies need to provide monthly statistics on reports received about outputs, broken down by common complaint topics and user age groups. Special attention focuses on complaints indicating users may be children or that minors have suffered negative impacts.

The investigation specifically addresses sexually themed conversations involving minors. Companies must describe their decision-making processes regarding sexual content, training approaches for AI systems, and statistics on sexually themed outputs directed at underage users.

Luis Alberto Montezuma, an International Data Spaces Facilitator, highlighted the significance of this enforcement action on LinkedIn, stating that the FTC is particularly interested in the impact on children and the actions companies take to mitigate potential negative impacts, limit children's use of these platforms, or comply with the Children's Online Privacy Protection Act Rule.

The orders define AI companion products or services as computer programs accessible via websites, applications, or devices that use generative artificial intelligence to simulate human-like communication. These platforms typically offer users emotional support, social advice, professional services, or entertainment through behavioral and communicative profiles or personas.

Technical aspects of the investigation delve into AI model integration, including data corpus information for company-developed models and training methodologies. Companies must explain how their platforms process and respond to inputs, including system prompts, flags, filters, and personalization requests.

The definition of negative impacts encompasses any actual or potential adverse effects relating to outputs, usage patterns, design elements, or software architecture. Common complaint topics refer to the ten most frequent substantive areas raised in user reports regarding inputs, outputs, or platform usage.

Age group classifications span from children under 13 to users 25 and older, with specific categories for teens (13-17), minors (under 18), and young adults (18-24). Companies must provide separate data analysis for each demographic segment throughout their responses.
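
A simple sketch, based only on the categories listed above, of how a platform might bucket users into mutually exclusive reporting bands; the function names and band labels are hypothetical, and the overlapping "minor" designation is tracked as a separate flag rather than its own band:

```python
def age_band(age: int) -> str:
    """Map an age to one of the mutually exclusive reporting bands
    described in the orders (a sketch, not the FTC's specification)."""
    if age < 13:
        return "under_13"   # children
    if age <= 17:
        return "13_17"      # teens
    if age <= 24:
        return "18_24"      # young adults
    return "25_plus"

def is_minor(age: int) -> bool:
    # "Minor" (under 18) spans the first two bands, so it is
    # recorded as a flag alongside the band rather than as a band.
    return age < 18
```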

Research and development documentation requirements extend beyond basic metrics to include qualitative and quantitative studies, A/B tests, and behavioral data analyses. Companies must identify personnel responsible for research oversight, including job titles, technical expertise, and professional credentials.

The investigation builds on growing concerns about AI chatbot platforms' interactions with minors. Texas launched investigations into Character.AI and Meta for children's privacy violations, while 44 state attorneys general warned AI companies about accountability for child exploitation through predatory AI products.

International regulatory pressure accompanies domestic enforcement efforts. Brazil demanded Meta remove sexual chatbots with child-like personas, establishing compliance deadlines for platform operators. These coordinated actions demonstrate global concern about AI chatbot safety protocols.

The FTC's broader AI enforcement strategy includes Operation AI Comply, which targets companies using AI technology for deceptive practices. The September 2024 initiative resulted in multiple enforcement actions against firms exploiting AI hype or selling technology that enables fraud.

Platform monetization practices receive particular scrutiny given their potential connection to user engagement metrics. Companies must explain any associations between revenue generation and measurements of user engagement, including frequency and duration of chat sessions. Business strategy documents, marketing plans, and investor presentations form part of the required documentation.

The investigation examines third-party involvement in AI output generation and refinement. Companies must identify external parties contributing to content creation and provide relevant licensing agreements. This encompasses both technical integration partnerships and content moderation services.

Mitigation measures documentation spans prevention strategies, intervention protocols, and post-incident responses. Companies must detail automated and human review processes, keyword searches, alert systems, and escalation procedures for sensitive issues. Default application settings, parental controls, and user time limits fall under this category.
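
For illustration, a minimal sketch of the kind of automated keyword screen with an escalation hook that such documentation might cover; the patterns and the escalation callback are hypothetical and not taken from any platform's actual system:

```python
import re

# Illustrative sensitive-content patterns; these are placeholders,
# not rules used by any named platform.
SENSITIVE_PATTERNS = [re.compile(p, re.IGNORECASE)
                      for p in (r"\bself.?harm\b", r"\bmy age is 1[0-2]\b")]

def screen_message(text: str, escalate) -> bool:
    """Return True and invoke the escalation hook (e.g. queue for
    human review) if a sensitive pattern appears in a user input
    or model output."""
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        escalate(text)
        return True
    return False
```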

Privacy impact assessments become mandatory disclosures under the orders. Companies must produce any evaluations related to personal information collection, use, analysis, storage, or transfer to third parties. This includes assessments of data handling differences between registered users, unregistered users, and subscribers.

The comprehensive nature of these orders reflects the FTC's commitment to understanding AI chatbot platforms' complete operational scope. From technical architecture to business models, safety protocols to user demographics, the investigation leaves few aspects of platform operation unexamined.

Confidential information submitted through these orders will be aggregated or anonymized in Commission reports, consistent with FTC Act provisions. Individual submissions marked confidential receive ten-day notice protection before potential disclosure, except as provided under federal transparency statutes.

The 45-day response deadline creates immediate compliance pressure for affected companies. Platform operators must contact Commission staff within 14 days to confirm information availability and identify any data outside their direct control.

This enforcement action positions the FTC at the forefront of AI chatbot regulation, establishing precedents for how similar platforms might face oversight in the future. The investigation's scope suggests regulatory authorities view child safety as paramount in AI companion platform evaluation.

Timeline

  • January 1, 2022 - Applicable time period begins for FTC investigation data collection requirements
  • September 2024 - FTC launches Operation AI Comply targeting deceptive AI practices across multiple sectors
  • December 12, 2024 - Texas Attorney General launches investigations into Character.AI and 14 other companies under SCOPE Act
  • August 15, 2025 - Brazil issues formal notification to Meta demanding removal of sexual chatbots with child-like personas
  • August 18, 2025 - Texas expands investigations targeting AI chatbot platforms for deceptive mental health services marketing
  • August 25, 2025 - 44 state attorneys general sign formal letter demanding enhanced child protection from AI companies
  • September 2, 2025 - FTC sues robot toy maker Apitor for children's privacy violations involving unauthorized geolocation data collection
  • September 10, 2025 - FTC issues special report orders to seven AI chatbot companies under Section 6(b) authority targeting Alphabet Inc., Character Technologies Inc., Instagram LLC, Meta Platforms Inc., OpenAI OpCo LLC, Snap Inc., and X.AI Corp.
  • September 11, 2025 - FTC announces investigation publicly with Chairman Andrew N. Ferguson emphasizing child protection priorities under Trump-Vance administration
  • October 25, 2025 - 45-day compliance deadline for special report submissions (approximate date based on service timing)

Summary

Who: The Federal Trade Commission issued orders to seven companies providing consumer-facing AI-powered chatbots, targeting platforms that simulate human-like communication with users.

What: Comprehensive special reports requiring detailed information about monetization practices, age restrictions, data handling, safety assessments, complaint procedures, and negative impact monitoring for AI companion platforms.

When: September 10, 2025, with companies required to submit responses within 45 days of service under FTC Section 6(b) authority.

Where: United States federal jurisdiction, focusing on platforms accessible to users presumed to be located in the United States through web browsers, mobile applications, or devices.

Why: Growing concerns about AI chatbots' potential negative impacts on children and teenagers, particularly regarding platforms designed to mimic human relationships and trust-building behaviors that may exploit vulnerable users.