German court confirms Meta AI training includes children's data despite protections

Schleswig-Holstein ruling acknowledges Meta's AI models process minors' personal information through public posts, highlighting failures in data protection measures for vulnerable populations.

EU court ruling exposes how Meta's AI training processes children's data despite protective measures and age restrictions.

The Schleswig-Holstein Higher Regional Court confirmed on August 12, 2025, that Meta's artificial intelligence training program processes personal data of children and adolescents, despite company claims of implementing protective measures. According to court documents filed under case number 6 UKI 3/25, the 6th Civil Senate found Meta's AI systems inevitably capture minors' information when adults share content containing children's data on Facebook and Instagram.

The court dismissed a preliminary injunction request from a Dutch charitable foundation, not because Meta's practices were lawful, but because the plaintiff waited too long after the April 14 announcement to seek emergency relief. This procedural ruling allows continued processing of children's data while acknowledging serious privacy violations affecting Europe's most vulnerable digital users.

Technical analysis reveals children's data exposure

Court documentation provides detailed analysis of how Meta's AI training captures children's information despite announced age restrictions. "The announcement of the AI initiative involves the processing of personal data of children, adolescents, and other third parties, including sensitive personal data," the ruling states. This processing occurs when such data appears in public contributions or comments on adult users' profiles, regardless of whether children or their parents provided consent.

The court identified specific scenarios where children's data enters AI training systems. Adult users frequently share photos, videos, and text content featuring minors in public posts that become subject to AI processing. Comments sections on adult profiles often contain children's personal information shared by family members, teachers, or community members. Public images tagged with children's names or containing identifiable minors become training data despite protective intent.

Meta acknowledged to the court that while AI models are designed not to output personal data, "it is not entirely impossible for such data to be released." This admission confirms children's personal information persists within AI model parameters and training datasets, creating ongoing privacy risks for minors who never consented to data processing.

De-identification measures fail to protect minors

The court found Meta's technical safeguards insufficient to prevent children's data inclusion in AI training. Despite implementing de-identification and tokenization measures coordinated with Data Protection Authorities, these protections "did not effectively prevent the inclusion of personal data of unregistered users, children, adolescents, or sensitive personal data in the training records for an AI model."

Technical analysis revealed fundamental flaws in Meta's protective approach. De-identification and tokenization can be reversed through scenarios including reprogramming AI model output prompts or evaluating learning datasets directly. The court emphasized that when AI models can output personal data despite programming designed to prevent such occurrences, this "indicates to the Court that the personal data is indeed present within the AI model and the training dataset."
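To illustrate why such measures amount to pseudonymization rather than anonymization, consider a minimal sketch (hypothetical code, not Meta's actual pipeline) in which names are swapped for tokens but a reversible mapping survives:

```python
# Hypothetical dictionary-based tokenization, sketched for illustration.
# It replaces detected names with opaque tokens but keeps a reversible map,
# which is why such data remains personal rather than anonymous.
import re

token_map: dict[str, str] = {}

def tokenize(text: str) -> str:
    """Swap each detected name for a token, recording the mapping."""
    def repl(match: re.Match) -> str:
        name = match.group(0)
        return token_map.setdefault(name, f"<PER_{len(token_map)}>")
    # Naive two-word name detector, for illustration only.
    return re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", repl, text)

def detokenize(text: str) -> str:
    """Anyone holding the token map can restore the original identifiers."""
    for name, token in token_map.items():
        text = text.replace(token, name)
    return text

post = "Mia Schmidt, age 7, won the school race today!"
masked = tokenize(post)   # "<PER_0>, age 7, won the school race today!"
assert detokenize(masked) == post  # the mapping reverses the protection

# Even without the map, residual context ("age 7", the school, the event)
# can re-identify the child, so the data stays personal under GDPR.
```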

Children face particular vulnerability because they cannot determine whether their personal data or images are being used unlawfully in AI training, unlike adults who might identify their information in search engine results. The court tied this asymmetry to a violation of Art. 9 GDPR concerning the data of unregistered users included in public contributions, comments, and images.

Hamburg authorities confirm ongoing violations

The Hamburg Data Protection Authority, according to court records, confirmed that Meta's processing of children's data constitutes ongoing GDPR violations. Since April 14, 2025, and continuing through the court proceedings, authorities identified "a violation of Art. 9 GDPR, particularly concerning the data of unregistered users included in public contributions, comments, and images."

These violations specifically affect children whose information appears in content shared by adults without minors' knowledge or consent. The authority noted processing sensitive data from non-registered users, including children, whose information was published without their consent "typically violates Art. 9 GDPR and cannot be justified under Legitimate Interest."

The court emphasized this issue was apparent well before Meta's May 27 implementation date, creating opportunities for protective action that were not pursued by competent authorities. Children continue experiencing data processing violations while regulatory mechanisms fail to provide adequate emergency protection.

Educational institutions face particular exposure

Court analysis identified institutional profiles as creating specific risks for children's data processing. Kindergartens, schools, and youth organizations maintain Facebook and Instagram accounts that frequently feature children's information, images, and activities. The court noted that while these institutions employ adult users to manage profiles, content regularly includes children's personal data without individual consent.

"It can be assumed that the natural persons responsible for registering and maintaining such profiles are adult users," the court observed, making institutional content subject to AI training regardless of children's privacy interests. Educational institutions sharing student achievements, classroom activities, or event photos create extensive datasets of children's information that enters Meta's AI training systems.

The court found no evidence that Meta excludes institutional profiles from AI training, despite claims about protecting children's data. Email notifications to users stated processing would include "posts and comments from accounts whose owners are at least 18 years of age" without distinguishing between personal and institutional account types.

Family content creates comprehensive child surveillance

Meta's approach to family content enables systematic collection of children's personal information across multiple contexts. Adult family members regularly share photos, videos, and detailed information about children's activities, achievements, health conditions, and personal characteristics in public posts that become AI training data.

Birthday celebrations, school events, medical updates, and daily activities shared by parents create detailed profiles of children's lives within AI training datasets. Extended family members, teachers, and community members contribute additional data points through comments, reactions, and shared content featuring the same children across multiple adult accounts.

The court noted that children face particular vulnerability because their personal information appears across numerous adult profiles without their knowledge or ability to control processing. Unlike adults who might discover their data usage, children lack awareness of how their information enters commercial AI systems or mechanisms to protect their privacy rights.

Age verification failures expose systemic problems

Court documentation reveals fundamental problems with Meta's age verification systems that allow children's data to enter AI training. While the company claims to exclude content from users under 18, technical implementation fails to identify children's information that appears in adult-managed content.

The court found no effective measures to distinguish between adult personal content and content featuring children on adult accounts. Meta's AI training systems process all public content from verified adult accounts regardless of whether images, text, or comments contain children's personal information.

Institutional accounts managed by adults regularly feature children's information that becomes subject to AI training under current age verification approaches. Schools, youth sports teams, and community organizations share extensive content about children through adult-verified accounts that Meta's systems cannot distinguish from purely adult content.
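A minimal sketch of this gap, assuming a simplified eligibility check (hypothetical, not Meta's actual code), shows how a gate keyed only to the account holder's age passes content about children through unexamined:

```python
# Hypothetical eligibility gate, sketched for illustration (not Meta's code).
# The only check is on the account holder's age, so public content that
# names or depicts children passes through to training unexamined.
from dataclasses import dataclass

@dataclass
class PublicPost:
    author_age: int          # verified age of the account holder
    text: str                # may freely name or depict minors
    is_institutional: bool   # e.g. school, kindergarten, sports club page

def eligible_for_training(post: PublicPost) -> bool:
    return post.author_age >= 18  # content subjects are never inspected

school_post = PublicPost(35, "Class 2b celebrates Lena's 8th birthday!", True)
parent_post = PublicPost(42, "My son's first day at school!", False)

# Both posts clear the gate although both are about children.
assert eligible_for_training(school_post)
assert eligible_for_training(parent_post)
```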

Legal frameworks prove inadequate for child protection

The court's ruling exposes fundamental inadequacies in current legal frameworks for protecting children's digital privacy. While GDPR Article 9 provides enhanced protections for sensitive personal data, enforcement mechanisms fail to provide timely protection when children's information enters commercial AI systems.

Emergency injunction procedures require urgency demonstrations that favor rapid commercial implementation over child protection. The court acknowledged children's data processing likely violates European privacy law but procedural timing requirements prevented protective intervention.

Children cannot realistically pursue individual legal remedies for AI training violations, depending instead on adult advocates or regulatory authorities to protect their interests. This dependency creates enforcement gaps when advocacy organizations face procedural barriers or authorities decline immediate intervention.

Research confirms broader child privacy risks

Recent academic research demonstrates that large language models themselves qualify as personal data under GDPR, creating additional obligations for protecting children's information throughout AI development lifecycles. University of Tübingen researchers found AI models memorize training data to varying degrees, meaning children's information persists within model parameters beyond initial processing.
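What "memorization" means in practice can be seen with a verbatim-completion probe, sketched below with a hypothetical `generate` stub standing in for a real model call:

```python
# Sketch of a verbatim-memorization probe (hypothetical model interface).
# Feed the model a prefix that appeared in training data and test whether
# it reproduces the exact continuation, indicating the record persists
# in the model's parameters.

def generate(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a hosted inference API)."""
    memorized = {"Mia Schmidt, age 7,": "won the school race today!"}
    return memorized.get(prompt, "[no memorized continuation]")

def appears_memorized(prefix: str, true_suffix: str) -> bool:
    return generate(prefix) == true_suffix

if appears_memorized("Mia Schmidt, age 7,", "won the school race today!"):
    print("Training record reproduced verbatim: the data persists in the model.")
```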

This memorization creates ongoing privacy risks as children's data becomes embedded within AI systems used for multiple commercial purposes. Educational content, family photos, and personal information about minors becomes permanent elements of AI models that power advertising, content recommendation, and user interaction systems.

The research indicates that every action performed on AI models containing children's data constitutes processing under GDPR definitions, including training, uploading, downloading, storing, deploying, fine-tuning, or sharing models across platforms and applications.

Regulatory coordination failures leave children vulnerable

The case reveals systematic failures in regulatory coordination that leave children's privacy rights inadequately protected. While the Hamburg Data Protection Authority identified ongoing violations affecting children, procedural requirements prevented German courts from providing emergency relief.

The Irish Data Protection Commission, serving as Meta's lead supervisory authority under GDPR's one-stop-shop mechanism, has announced monitoring intentions rather than immediate enforcement proceedings. This approach allows continued processing of children's data while regulatory oversight develops slowly.

European data protection authorities lack coordinated mechanisms for addressing urgent child privacy violations by major technology platforms. The technical complexity of AI training, combined with procedural requirements for emergency relief, creates enforcement delays that particularly disadvantage vulnerable populations, including children.

International implications for child digital rights

Meta's AI training program affects children across European Union member states, creating cross-border implications for digital rights protection. Children in Germany, France, Italy, and other EU countries experience identical data processing through Meta's centralized AI development approach coordinated from Ireland.

The court's findings establish precedent that children's data inevitably enters AI training systems despite announced protective measures, creating obligations for enhanced regulatory oversight across European jurisdictions. National authorities must develop coordinated responses to protect children's digital privacy rights from commercial AI development.

Marketing ecosystem implications for child-focused brands

The court's confirmation that Meta's AI training includes children's data creates significant implications for marketing professionals working with family-oriented brands or youth-adjacent content. Platform targeting and optimization systems now incorporate children's information despite announced restrictions, affecting advertising delivery and audience analysis.

Consumer opposition to AI training reaches 76% among women, who often serve as primary decision-makers for family-related purchases and child-focused products. This demographic resistance combined with confirmed children's data processing creates reputational risks for brands advertising on Meta platforms.

Marketing teams must consider ethical implications of using advertising systems powered by AI models that process children's personal information without consent. Brand safety considerations now extend beyond content placement to underlying data processing practices that affect vulnerable populations.

Privacy advocates pursue child-focused enforcement

Privacy organization noyb has indicated plans for potential class action lawsuits focusing on children's data processing violations. Under GDPR provisions, parents can claim non-material damages on behalf of affected children, potentially creating substantial financial exposure for Meta's AI training program.

With millions of European children affected by AI training data processing, individual damage claims could create significant enforcement pressure. Privacy advocates emphasize that children deserve enhanced protection from commercial data processing, particularly when used for AI development that benefits primarily adult users and commercial interests.

The foundation that brought this case operates specifically to protect "consumers and minors" when using online services within the EU, highlighting organizational focus on vulnerable population protection that extends beyond general consumer advocacy.

Technical architecture enables systematic child surveillance

Meta's AI training architecture creates comprehensive surveillance systems that particularly affect children whose personal information appears across multiple adult accounts. Family members, educators, and community members contribute overlapping data points about the same children through different social media profiles.

Cross-platform data combination between Facebook and Instagram enables detailed profiling of children's activities, relationships, and personal characteristics drawn from numerous adult sources. The court found this data combination lawful under current interpretations, despite creating extensive commercial datasets about minors.

Children face unique vulnerability because their personal information appears in multiple contexts without their knowledge or control. Birthday parties, school events, medical appointments, and daily activities become training data for commercial AI systems designed to optimize adult user engagement and advertising revenue.

Future regulatory framework development needed

The court's ruling demonstrates urgent need for enhanced regulatory frameworks specifically protecting children's digital privacy rights from AI development practices. Current legal mechanisms prove inadequate for addressing systematic processing of children's data by major technology platforms.

European authorities must develop coordinated enforcement approaches that prioritize child protection over commercial AI development interests. Enhanced age verification, parental consent mechanisms, and technical safeguards require regulatory mandates rather than voluntary industry implementation.

Children's digital rights advocacy organizations emphasize the need for proactive protection rather than reactive enforcement that allows continued privacy violations while legal challenges develop through lengthy court procedures.

Timeline

  • April 14, 2025: Meta announces AI training program, claiming exclusion of under-18 content while processing adult posts containing children's data
  • April 17-19, 2025: Meta notifies users about processing "accounts whose owners are at least 18 years of age" without addressing children's data in adult content
  • May 27, 2025: Meta begins AI training with confirmed inclusion of children's personal data from public posts and institutional accounts
  • June 18, 2025: University research confirms AI models memorize personal data, including children's information, creating ongoing privacy obligations
  • August 5, 2025: Hamburg Data Protection Authority confirms ongoing GDPR violations affecting children's data during court hearing
  • August 12, 2025: Schleswig-Holstein court acknowledges children's data processing violations while dismissing injunction on procedural grounds

PPC Land explains

Personal Data: Under GDPR Article 4(1), personal data encompasses any information relating to an identified or identifiable natural person, including children's names, images, behavioral patterns, and contextual information. The court confirmed that when children's information appears in adult-posted content on Meta platforms, this constitutes personal data processing regardless of the child's awareness or consent. This definition extends beyond obvious identifiers to include indirect information that could be linked to specific children through family relationships, school attendance, or community participation documented in social media posts.

AI Training: The process of feeding large datasets into machine learning algorithms to develop artificial intelligence capabilities for content recommendation, user interaction, and commercial applications. Meta's AI training specifically involves processing public posts, comments, and images from Facebook and Instagram to enhance AI models including Meta AI and Llama systems. The court found this training inevitably captures children's personal information when adults share family content, educational activities, or community events featuring minors, creating commercial datasets from children's private lives.

De-identification: Technical measures intended to remove or obscure personally identifiable information from datasets while preserving data utility for analysis purposes. Meta implemented de-identification alongside tokenization as protective measures for AI training data processing. However, the court determined these techniques "did not effectively prevent the inclusion of personal data of unregistered users, children, adolescents, or sensitive personal data" because de-identification can be reversed through AI model reprogramming or dataset evaluation, leaving children's information accessible within commercial AI systems.

GDPR Article 9: European data protection regulation governing processing of special categories of personal data including health information, political opinions, religious beliefs, and biometric data. The court found Meta's processing of children's sensitive data through public posts "typically violates Art. 9 GDPR and cannot be justified under Legitimate Interest." This violation occurs when adults share information about children's health conditions, educational needs, family circumstances, or other sensitive categories without the children's explicit consent, making such processing unlawful under European privacy law.

Legitimate Interest: Legal basis under GDPR Article 6(1)(f) allowing data processing without explicit consent when controller interests outweigh data subjects' fundamental rights and reasonable expectations. Meta claimed legitimate interest for AI training purposes, but the court found this justification inadequate for processing children's sensitive personal data. The legitimate interest assessment requires balancing commercial AI development benefits against children's heightened privacy rights, with the court suggesting this balance favors protecting vulnerable minors over commercial technology advancement.

Data Processing: Any operation performed on personal data including collection, recording, organization, structuring, storage, adaptation, retrieval, consultation, use, disclosure, or combination. The court confirmed that Meta's AI training constitutes data processing under GDPR definitions, affecting children whenever their information appears in adult-posted content. This broad definition means every interaction with children's data within AI training systems requires legal justification, creating ongoing compliance obligations that extend throughout AI model development, deployment, and commercial utilization phases.

Institutional Profiles: Social media accounts operated by organizations such as schools, kindergartens, youth sports teams, and community groups that frequently feature children's information and activities. The court found these profiles particularly problematic because adult account managers share content about children without individual consent, making institutional posts subject to AI training despite involving minors. Educational institutions face specific exposure because classroom activities, student achievements, and school events create extensive datasets of children's personal information that enters commercial AI systems through institutional social media presence.

Hamburg Data Protection Authority: German regional supervisory authority that confirmed ongoing GDPR violations in Meta's AI training program affecting children's data processing. The authority specifically identified violations "particularly concerning the data of unregistered users included in public contributions, comments, and images" since April 14, 2025. This confirmation provided crucial evidence that children's privacy rights face active violation through Meta's AI training, despite the court's inability to provide emergency relief due to procedural timing requirements affecting the injunction request.

Tokenization: Technical process converting sensitive data into non-sensitive tokens that can be processed while theoretically protecting original information from unauthorized access. Meta implemented tokenization alongside de-identification as protective measures for children's data within AI training datasets. However, the court found tokenization insufficient because "de-identification and tokenization could be reversed in certain scenarios, such as reprogramming the AI model's output prompt or evaluating the learning dataset," meaning children's original personal information remains accessible within AI systems despite protective tokenization measures.

Preliminary Injunction: Emergency legal remedy seeking immediate halt to allegedly harmful activities pending full legal proceedings, requiring demonstration of urgency and likelihood of success. The Dutch foundation sought preliminary injunction to stop Meta's AI training processing of children's data, but the court dismissed the request due to insufficient urgency after waiting over two months following Meta's April announcement. This procedural barrier left children's data processing violations unaddressed despite court acknowledgment of likely GDPR violations, demonstrating how emergency legal procedures can fail to protect vulnerable populations from commercial technology harms.

Summary

Who: Schleswig-Holstein Higher Regional Court ruled on a case involving a Dutch charitable foundation protecting consumers and minors against Meta Platforms Ireland Limited's AI training program affecting European children.

What: The court confirmed Meta's AI training inevitably processes children's personal data from public posts, institutional accounts, and family content, despite claimed protective measures and age restrictions.

When: The August 12, 2025 ruling addressed AI training that began May 27, 2025, with authorities confirming ongoing violations of children's privacy rights since the April 14 announcement.

Where: The decision affects children across European Union member states where Meta operates Facebook and Instagram, with processing coordinated from Ireland under GDPR's one-stop-shop mechanism.

Why: Meta's AI systems cannot distinguish children's personal information within adult-posted content, creating systematic processing of minors' data for commercial AI development without consent or adequate protection mechanisms.