Brazil takes legal action against Meta over AI chatbots targeting children
Meta's 72-hour deadline to remove sexual chatbots with child-like personas from its platforms has passed, following Brazilian authorities' child protection demands.

Brazil's Federal Attorney General's Office (AGU) issued a formal extrajudicial notification to Meta Platforms Inc. on August 15, 2025, demanding immediate removal of artificial intelligence chatbots that simulate child profiles and engage in sexual conversations with users. The notification gave Meta 72 hours to comply with demands, a deadline that expired on August 18, 2025.
According to the official document NUP 00170.003528/2025-45, the legal action stems from investigations by Brazil's National Union Prosecutor's Office for Democracy Defense (PNDD), requested by the Presidency's Social Communication Secretariat (Secom). The case builds on reports from Reuters news agency and Núcleo Journalism that revealed how Meta's artificial intelligence systems permitted sexual conversations with children.
The AGU's notification specifically targets chatbots created through Meta's "Meta AI Studio" tool, which allows users to develop AI-powered conversational agents across Instagram, Facebook, and WhatsApp. Brazilian authorities conducted tests on chatbots named "Safadinha," "Bebezinha," and "Minha Novinha," all of which were programmed to simulate children and engaged in patterns of sexualized conversation.
Technical details of the violations
Meta AI Studio provides users with tools to create custom chatbots that can engage in simulated conversations across the company's platform ecosystem. The Brazilian investigation focused on three specific chatbots that maintained sexualized personas while presenting child-like characteristics through their names and conversation patterns.
Screenshots included in the legal documentation show conversations where these chatbots engaged in explicit sexual discussions, describing physical attributes and engaging in role-playing scenarios of a sexual nature. The chatbots were accessible to users aged 13 and above, matching Meta's minimum age requirements across its platforms.
"Such chatbots have the potential to reach an increasingly broad audience on digital platforms, especially on Meta's social networks, exponentially amplifying the risk of minors' contact with sexually suggestive and potentially criminal material," states the AGU notification.
Brazilian authorities note that while Meta's platforms allow access from age 13, no age verification filters prevent users between 13 and 18 from reaching inappropriate content such as these chatbots. This creates a protection gap in which minors can interact with sexually explicit AI systems designed to simulate children.
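The gap described in the notification can be illustrated with a short sketch. The following Python fragment is purely hypothetical and is not Meta's implementation; the fields user_age and chatbot_is_adult_themed are illustrative assumptions. It shows the kind of age-gating check that, according to the AGU, is missing between the platform's 13-year minimum and an 18-year threshold for adult-themed chatbots.

```python
# Illustrative sketch only: a minimal age-gating check of the kind the AGU
# says is absent. Not Meta's code; field names are hypothetical.

MINIMUM_PLATFORM_AGE = 13   # Meta's stated minimum account age
ADULT_CONTENT_AGE = 18      # threshold Brazilian authorities want enforced

def can_access_chatbot(user_age: int, chatbot_is_adult_themed: bool) -> bool:
    """Return True only if the user may interact with the chatbot."""
    if user_age < MINIMUM_PLATFORM_AGE:
        return False  # below the platform minimum entirely
    if chatbot_is_adult_themed and user_age < ADULT_CONTENT_AGE:
        return False  # the 13-17 gap the notification highlights
    return True

# Example: a 15-year-old account
print(can_access_chatbot(15, True))   # False: blocked by the age gate
print(can_access_chatbot(15, False))  # True: non-adult chatbot allowed
```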
Legal framework and constitutional violations
Brazil's legal action centers on Article 227 of the Federal Constitution, which establishes the duty of family, society, and the state to ensure children's comprehensive protection. The AGU argues that these chatbots violate fundamental constitutional protections for minors and contradict Meta's own Community Standards.
The Child and Adolescent Statute (Law 8.069/1990) provides the legal foundation for the AGU's demands. Article 3 of this statute guarantees children and adolescents all fundamental human rights, ensuring opportunities and facilities for physical, mental, moral, spiritual, and social development under conditions of freedom and dignity.
The notification references Article 217-A of Brazil's Penal Code, which criminalizes sexual acts with minors under 14 years old, carrying penalties of 8 to 15 years imprisonment. Brazilian authorities argue that this legal framework extends to simulated sexual interactions through artificial intelligence systems.
"The concept of 'libidinous act' is not restricted to carnal conjunction, encompassing all conduct of a sexual nature aimed at satisfying desire, whether of the agent themselves or of a third party, regardless of direct physical contact," the legal document explains.
Meta's community standards violations
The AGU's notification demonstrates that these chatbots violate Meta's own Community Standards, which prohibit content involving child eroticization or sexual exploitation. Meta's policies specifically ban "engaging in implicitly sexual conversations in private messages with children" and content that "constitutes or facilitates inappropriate interactions with children."
Meta's Community Standards define several prohibited categories relevant to this case:
- Sexual exploitation, abuse, or nudity involving children
- Content involving children in sexual fetish contexts
- Content supporting, promoting, advocating, or encouraging participation in pedophilia
- Inappropriate interactions with children through implicitly sexual private conversations
- Content that sexualizes real or fictional children
The Brazilian investigation revealed that chatbots consistently violated these standards while remaining accessible through Meta's platforms. The AGU argues this demonstrates inadequate enforcement of the company's own policies.
Supreme Court precedent on platform liability
The notification cites a recent Brazilian Supreme Federal Court (STF) decision regarding Article 19 of the Civil Rights Framework for the Internet. This ruling established that internet application providers must be held responsible for third-party generated content when they have clear knowledge of illegal acts but fail to immediately remove such content.
The Supreme Court decision, detailed in RE 1037196, recognized "partial and progressive unconstitutionality" of Article 19's previous interpretation. The new framework requires platforms to demonstrate proactive content moderation for serious illegal content, particularly involving crimes against children and adolescents.
"While no new legislation emerges, Article 19 of the Civil Internet Framework must be interpreted so that internet application providers are subject to civil liability," according to the Supreme Court ruling cited in the AGU notification.
Wider context of AI chatbot regulation
This legal action occurs amid growing global scrutiny of AI chatbot platforms and their interactions with minors. Recent investigations revealed that Meta's internal guidelines previously permitted AI chatbots to engage children in "romantic or sensual" conversations, prompting a U.S. Congressional investigation led by Senator Josh Hawley.
Internal Meta documents obtained by Reuters showed that the company's legal, public policy, and engineering teams, including its chief ethicist, had approved standards allowing AI systems to describe children in terms indicating their attractiveness. Meta confirmed these policies existed but stated they were removed after media attention.
The Brazilian case adds international regulatory pressure to existing U.S. investigations. Similar concerns emerged with Character.ai, where court documents detailed AI chatbots engaging in conversations promoting self-harm and sexual exploitation with underage users.
Marketing industry implications
The legal action highlights critical brand safety concerns for advertisers using Meta's platforms. Marketing professionals worry about brand association with AI systems producing problematic interactions, leading to increased demand for third-party verification tools from companies like Adloox, DoubleVerify, and Scope3.
Content moderation challenges compound these concerns as AI-generated content becomes increasingly difficult to monitor at scale. Meta's acknowledgment of inconsistent enforcement underscores the complexity of moderating AI chatbot interactions across multiple languages and cultural contexts.
The case also reflects broader tensions around AI content monetization on social platforms. Meta's Creator Bonus Program and similar monetization structures create economic incentives for AI content creation, potentially overwhelming moderation systems designed for human-generated content.
Enforcement demands and timeline
The AGU's notification established a 72-hour deadline, which expired on August 18, 2025, for Meta to comply with specific demands:
- Immediate removal of chatbots using child-like language to promote sexual content, specifically including "Bebezinha" (user 071_araujo0), "Minha novinha" (user da_pra_mim_no12), and "Safadinha" (user allysson_eduarduh)
- Clarification of measures being adopted within Meta AI's utilization scope, including integration with Facebook, Instagram, and WhatsApp, to prevent children and adolescents' access to sexual or erotic content
The notification, signed by Federal Union Advocates Maria Beatriz de Menezes Costa Oliveira and Raphael Ramos Monteiro de Souza on August 15, 2025, marks an early instance of formal government action against AI chatbot platforms beyond the United States.
Brazilian authorities emphasize that the situation represents "not mere misuse of technology, but a concrete and systemic threat to comprehensive protection of children and adolescents, requiring swift, articulated, and effective response from competent bodies."
Meta's content moderation evolution
This legal challenge emerges as Meta undergoes significant content moderation policy changes. The company recently dismantled its third-party fact-checking program in favor of a community notes system, citing high error rates in enforcement decisions.
Meta's internal metrics revealed the company was removing millions of pieces of content each day as of December 2024, with potentially 10 to 20 percent of those enforcement actions being mistakes. This high error rate contributed to the policy shift toward reduced automated enforcement, focusing primarily on illegal content and high-severity violations.
The Brazilian case tests whether Meta's new approach to content moderation can adequately address AI-generated content that exploits children. The company's emphasis on reduced enforcement conflicts with demands for more proactive removal of harmful AI chatbots.
Deadline expires with limited public response
The 72-hour deadline established by Brazilian authorities expired on August 18, 2025, with no immediate public confirmation of Meta's compliance status regarding the removal demands. According to media reports published on August 19, 2025, the AGU's request does not include sanctions, but the agency said it had reminded Meta that online platforms in Brazil must take down illicit content created by their users, even without a court order.
The lack of immediate sanctions reflects the extrajudicial nature of the notification, which serves as a formal warning before potential legal proceedings. However, the timing coincides with broader regulatory pressure on Meta regarding child safety concerns and content moderation practices across multiple jurisdictions.
The government action comes at a time of outrage in Brazil over a case of alleged child sexual exploitation involving Hytalo Santos, a well-known influencer who posted Instagram content featuring partially naked minors performing suggestive dances. The reaction to that case reflects heightened sensitivity to child exploitation on social media platforms in Brazil.
Timeline
- July 23, 2025: Núcleo Journalism publishes investigation revealing Meta AI chatbots simulating sexualized children
- August 14, 2025: Reuters reports Meta's AI authorized to have sexual conversations with children
- August 15, 2025: Brazilian AGU issues 72-hour notification to Meta demanding chatbot removal
- August 15, 2025: U.S. Senator Josh Hawley initiates Congressional investigation into Meta's AI policies
- August 18, 2025: 72-hour deadline expires for Meta's compliance with Brazilian demands
- August 18, 2025: Brazilian AGU publishes official notification details
- August 19, 2025: Multiple international news outlets report on Brazilian government demands without confirmation of Meta's response
Related Stories
- Meta faces investigation over AI bots talking to kids inappropriately - Details U.S. Congressional investigation and Meta's internal AI guidelines
- Character.ai under pressure after disturbing conversations with minors surface - Similar legal challenges facing AI chatbot platforms
- Platform payments fuel AI slop flood across social media - How monetization programs drive problematic AI content creation
PPC Land explains
Meta AI Studio: Meta's artificial intelligence platform that enables users to create custom chatbots across Instagram, Facebook, and WhatsApp. This tool democratizes AI chatbot creation but lacks sufficient safeguards to prevent the development of inappropriate content targeting minors. The platform's accessibility to general users without proper oversight mechanisms has created vulnerabilities that malicious actors can exploit to create sexually explicit chatbots simulating children.
AGU (Advocacia-Geral da União): Brazil's Federal Attorney General's Office, the government body responsible for legal representation of the Union in judicial and administrative matters. As the primary legal defense institution for the Brazilian federal government, AGU plays a crucial role in enforcing constitutional protections and federal legislation. The organization's involvement in this case demonstrates the Brazilian government's commitment to protecting children from digital exploitation through formal legal channels.
Chatbots: Computer programs designed to simulate human conversation through artificial intelligence, capable of engaging users in text-based interactions across social media platforms. In this context, chatbots represent a significant technological advancement that can be misused to create harmful content. The AI-powered nature of these systems allows them to generate unlimited variations of content, making traditional content moderation approaches inadequate for preventing abuse.
Brazilian Constitution Article 227: The fundamental legal provision establishing that family, society, and the state must ensure comprehensive protection for children and adolescents with absolute priority. This constitutional article forms the legal foundation for Brazil's child protection framework and serves as the primary basis for the legal action against Meta. The article's comprehensive scope covers protection from all forms of negligence, discrimination, exploitation, violence, cruelty, and oppression.
Child protection laws: The comprehensive legal framework designed to safeguard minors from exploitation, abuse, and inappropriate content exposure across all media platforms. These laws have evolved to address digital threats as technology advances, requiring platforms to implement age-appropriate content filtering and safety measures. The enforcement of these protections becomes particularly challenging with AI-generated content that can circumvent traditional detection methods.
Content moderation: The systematic process of reviewing, filtering, and removing inappropriate material from digital platforms to ensure compliance with community standards and legal requirements. This complex undertaking becomes exponentially more difficult with AI-generated content that can produce unlimited variations and bypass automated detection systems. Meta's recent policy changes toward reduced enforcement create additional challenges for maintaining adequate protection standards.
Sexual exploitation: Criminal activities involving the abuse of children through sexual content creation, distribution, or interaction, now extending to AI-simulated scenarios that normalize inappropriate relationships with minors. The digital evolution of exploitation includes chatbots designed to engage children in sexual conversations, creating psychological harm and potentially grooming victims for real-world abuse. Legal frameworks are adapting to address these technological manifestations of traditional crimes.
Community Standards: Meta's internal policies governing acceptable content and behavior across its platforms, designed to balance free expression with user safety and legal compliance. These standards explicitly prohibit content involving child exploitation but require consistent enforcement mechanisms to remain effective. The Brazilian case highlights gaps between policy documentation and practical implementation, particularly regarding AI-generated content that violates these standards.
Platform liability: The legal responsibility of technology companies for content created, shared, or facilitated through their systems, particularly regarding harmful material targeting vulnerable populations. Recent court decisions, including Brazil's Supreme Court ruling on internet platform responsibilities, have expanded this liability to include proactive content monitoring obligations. Companies must now demonstrate active efforts to identify and remove illegal content rather than relying solely on user reporting mechanisms.
PNDD (Procuradoria Nacional da União de Defesa da Democracia): Brazil's National Union Prosecutor's Office for Democracy Defense, a specialized legal institution focused on protecting democratic institutions and fundamental rights. This organization plays a critical role in addressing threats to democratic values, including the protection of vulnerable populations from exploitation. The PNDD's involvement demonstrates that child protection issues are viewed as fundamental to maintaining democratic society's integrity and constitutional order.
Summary
Who: Brazil's Federal Attorney General's Office (AGU) took legal action against Meta Platforms Inc., the company controlling Instagram, Facebook, and WhatsApp.
What: Brazilian authorities issued a 72-hour ultimatum demanding Meta remove AI chatbots that simulate child profiles and engage in sexual conversations with users, citing violations of child protection laws and constitutional guarantees.
When: The extrajudicial notification was issued on August 15, 2025, giving Meta until August 18 to comply with removal demands.
Where: The notification was issued in Brazil through the National Union Prosecutor's Office for Democracy Defense (PNDD), targeting Meta's global operations across its social media platforms.
Why: The action stems from investigations showing Meta AI Studio-created chatbots violated Brazilian child protection laws, Meta's own Community Standards, and constitutional protections for minors, creating risks for children's psychological integrity and institutional harm.