On April 3, 2026, online service providers in the European Union lost the legal basis that allowed them to detect and remove child sexual abuse material on their platforms. The derogation under the ePrivacy Directive, which had granted companies a temporary exemption to scan user communications for such content, expired without a replacement framework in place. Four major technology companies - Google, Meta, Microsoft, and Snap - responded the following day by issuing a joint statement pledging to continue voluntary detection efforts regardless of the legislative gap.
The timing was pointed. On April 1, 2026, a coalition of 247 organisations working on children's rights and the prevention of child sexual abuse published a joint statement condemning EU policymakers for their failure to act. "This is sadly no April Fool's joke," the coalition wrote, describing the expiry as creating "a deeply alarming and irresponsible gap in child protection."
What the derogation allowed
The ePrivacy derogation gave platforms legal cover to deploy hash-matching technology and other automated tools to proactively scan content on their networks, compare it against databases of known child sexual abuse material, and report confirmed matches to law enforcement and organisations such as the National Center for Missing and Exploited Children in the United States. Without it, the legal basis for such scanning in the EU becomes unclear, particularly because the ePrivacy Directive protects the confidentiality of electronic communications.
Hash-matching works by converting images into unique numerical fingerprints - called hashes - that can be compared against a database of known abusive content without a human reviewer needing to view the original material. According to NCMEC's Senior Vice President and Chief Operating Officer Michelle DeLaune, the organisation received 21 million CyberTipline reports in 2020 alone, representing close to 70 million individual images and videos. The sheer volume made human review of every file impossible, and hash-matching allowed platforms to flag known material automatically.
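In its simplest form, the pipeline is exact matching: compute a fingerprint of the file and look it up in a blocklist. The Python sketch below uses a cryptographic digest and a hypothetical KNOWN_HASHES set standing in for a vetted database; production systems typically rely on perceptual hashes such as Microsoft's PhotoDNA, which tolerate resizing and re-encoding in ways a byte-level digest cannot.

```python
import hashlib
from pathlib import Path

# Hypothetical hex digests standing in for a vetted database of known
# abusive content (e.g. NCMEC's hash list). The value here is a placeholder.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(path: Path) -> str:
    """Compute a SHA-256 digest of the file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_match(path: Path) -> bool:
    """Flag the file if its digest appears in the known-hash set."""
    return fingerprint(path) in KNOWN_HASHES
```

The point of the design is that the comparison happens entirely on fingerprints: no human needs to view the file, and the platform never needs to store or transmit the abusive material itself to check for a match.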
Google is the largest single contributor to NCMEC's industry hash-sharing platform, according to DeLaune, accounting for approximately 74% of the total number of hashes on the list. The company also developed a Hash Matching API that NCMEC uses to identify visually similar - not just identical - images, enabling the organisation to tag more than 26 million images and prioritise never-before-seen content that may depict children being actively abused. In the words of DeLaune, that needle in the haystack "is a child who needs to be rescued."
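Google has not published the internals of its Hash Matching API, but the distinction it draws - visually similar rather than byte-identical - is the hallmark of perceptual hashing. As a rough illustration only, here is the classic average-hash algorithm with an arbitrary, assumed distance threshold:

```python
from PIL import Image  # Pillow

def average_hash(path: str, size: int = 8) -> int:
    """64-bit average hash: greyscale, downscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.Resampling.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Threshold is an assumption for illustration; real systems tune this
# against false-positive and false-negative rates.
SIMILARITY_THRESHOLD = 5

def looks_similar(path_a: str, path_b: str) -> bool:
    return hamming(average_hash(path_a), average_hash(path_b)) <= SIMILARITY_THRESHOLD
```

Because visually similar images land on nearby hashes, a small Hamming distance flags a likely match even when the files differ byte for byte - exactly the property a cryptographic digest lacks.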
The joint statement from Google, Meta, Microsoft, and Snap
The April 4 statement, posted on Google's official blog, was signed by all four companies and acknowledged the difficulty the expiry creates. "While EU institutions rightly expect technology companies to take action on child safety, the April 3 expiry of the derogation clouds the legal certainty that has helped responsible platforms try to protect our communities, safeguard child victims, and preserve the integrity of our services," according to the statement. The companies described the failure to reach an agreement as "irresponsible" and said they were "disappointed."
The signatories committed to continuing "voluntary action on our relevant Interpersonal Communication Services" and called on EU institutions "to conclude negotiations on a regulatory framework as a matter of urgency." They also announced a webinar, scheduled for April 10, 2026, at 3 p.m. CET, to explain how hash-matching and CSAM detection tools work.
Emily C., identified on LinkedIn as Child Safety Public Policy Lead at Google, posted the same day: "As the EU ePrivacy derogation expires tonight, [Google] is reaffirming our commitment to protecting children and preserving privacy, and will continue to take voluntary action to combat CSAM - despite EU institutions' failure to act."
247 organisations, one message
The civil society coalition that published its statement on April 1 is extensive. It spans 247 organisations across dozens of countries, including ECPAT International, the Internet Watch Foundation, Save the Children (Romania, Denmark, Italy, Finland), Missing Children Europe, INHOPE, the National Center for Missing & Exploited Children, WeProtect Global Alliance, the Canadian Centre for Child Protection, and SOS Children's Villages International, among many others.
Their statement warns that the consequences of allowing the legal basis to lapse "will be devastating - in Europe and beyond." The coalition pointed to a previous lapse in the framework in 2021, during which reports of child sexual abuse material dropped dramatically. "Law enforcement will lose critical leads to uncover sexual abuse cases and children will remain trapped in abusive situations," the statement reads. "Meanwhile, abusive content will continue to spread unchecked, forcing victims to relive their trauma each time it is viewed or shared."
The coalition's central demand was unambiguous: "We call on EU policymakers to act with urgency and responsibility by adopting, without delay, an ambitious and permanent legal framework."
The detection debate is not new - and not simple
The expiry lands in a context that has been contested for years. The Electronic Frontier Foundation has argued that automated CSAM scanning systems are far from infallible. An EFF report from August 2022 documented two cases in which Google's algorithms incorrectly flagged photos taken by fathers of their young children who had genital infections - photographs taken at a doctor's request. Google reported both fathers to law enforcement without informing them. Both the Houston Police Department and the San Francisco Police Department quickly cleared the men of any wrongdoing. One of the fathers, identified as Mark, was never able to have his Google account restored even after providing documentation showing the SFPD had determined there was "no crime committed."
The EFF noted that a Facebook study of 150 accounts reported to authorities for alleged CSAM found that 75% of those accounts had shared images for reasons judged "non-malicious." LinkedIn reported 75 accounts to EU authorities in the second half of 2021, but upon manual review only 31 involved confirmed CSAM. These figures raise questions about false positive rates that can result in real harm to innocent people - particularly those in countries where law enforcement or community norms may not afford the same due process protections as in wealthy democracies.
The EFF also pointed to the EU's own stated ambitions at the time of earlier proposals. Former EU Commissioner Ylva Johansson claimed the scanners proposed in a 2022 EU draft had accuracy rates "significantly above 90%." For systems scanning billions of private messages, an accuracy rate in that range would still generate millions of false positives. Critics argued this created what the EFF described as "bugs in our pockets."
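The arithmetic behind that criticism is simple base-rate math. A back-of-the-envelope sketch, in which every volume and rate is a labelled assumption rather than a figure from the proposal:

```python
# Base-rate illustration: all numbers below are hypothetical assumptions.
messages_scanned = 5_000_000_000   # assumed daily messages across a large platform
csam_prevalence = 1 / 1_000_000    # assumed fraction of messages containing CSAM
accuracy = 0.99                    # granting even more than "above 90%"

true_positives = messages_scanned * csam_prevalence * accuracy
false_positives = messages_scanned * (1 - csam_prevalence) * (1 - accuracy)

print(f"True positives:  {true_positives:,.0f}")   # ~4,950
print(f"False positives: {false_positives:,.0f}")  # ~50,000,000
```

Under these assumptions, false flags outnumber genuine detections by roughly four orders of magnitude per day. That structural imbalance, not any particular accuracy figure, is the core of the "bugs in our pockets" critique.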
Apple's experience illustrates how contentious the terrain is. In August 2021, Apple announced a client-side scanning system that would check iCloud photos against CSAM hashes before upload, a design intended to preserve end-to-end encryption. According to WIRED's reporting, Apple paused the project that September following an outcry from digital rights groups and researchers who warned the tool could be exploited to compromise privacy at scale. By December 2022, Apple confirmed it had abandoned the project entirely. A child safety group called Heat Initiative subsequently organised a campaign demanding Apple "detect, report, and remove" CSAM from iCloud and offer users tools to report it directly. Apple issued a rare, detailed response but maintained its decision.
What the marketing and ad tech industry needs to understand
For digital advertising professionals, this EU regulatory gap has direct operational relevance. Platforms operating in Europe face real ambiguity about which detection activities are now legally permissible, which in turn affects how their trust and safety functions interact with advertising and communications products. Google's child safety advertising policy changes, announced in August 2025, introduced immediate account suspension for violations involving child sexual abuse and exploitation, eliminating the standard seven-day warning period. The policy, which took effect in October 2025, renamed the category from "Child Sexual Abuse Imagery" to "Child Sexual Abuse and Exploitation" and expanded coverage to all content formats, including audio, text, and AI-generated material.
That enforcement posture now sits alongside an uncertain legal environment in Europe. Platforms that wish to continue proactive scanning of EU users' communications will need to assess whether their activities fall under other legal bases - such as legitimate interest under the GDPR - or whether they must pause certain detection activities while the EU negotiates. The four companies behind the April 4 joint statement appear to have decided to continue regardless, accepting the legal risk. What that means for smaller platforms with fewer legal resources is an open question.
The broader European policy landscape has been under significant pressure. The EU scrapped the ePrivacy Regulation in February 2025 after eight years of unsuccessful negotiations, citing an outdated technological and legislative context. The European Commission has separately been developing a mandatory age verification framework for digital services tied to the Digital Services Act, with full implementation targeted across all EU member states by the end of 2026. A prototype age verification app drew significant criticism from privacy advocates for its dependency on Google's Play Integrity API. The European Data Protection Board detailed new age verification principles in February 2025, requiring that any verification system be the least intrusive measure possible and not enable additional tracking or profiling.
The intersection of child safety and privacy has also surfaced in the context of AI. A German court confirmed in August 2025 that Meta's AI training likely included children's data in violation of European privacy law, though procedural timing requirements prevented a protective injunction. The European Commission extended its investigation into X over its Grok AI tool in January 2026 after the system generated prohibited images of minors in December 2025.
Meanwhile, Google's child safety advertising enforcement has continued to expand. Its 2024 Ads Safety Report, published on April 16, 2025, showed the company suspended over 39.2 million advertiser accounts that year - a 208% increase from 12.7 million in 2023.
What happens next
The EU is under pressure to act fast. The coalition of 247 organisations warned that "every day without detection is another day children are left unprotected." According to NCMEC, there are more than 20,000 identified children whose sexual abuse has been documented in images and videos that continue to circulate online. These individuals, some now adults, are aware of the ongoing nature of their victimisation. The CyberTipline, which was receiving approximately 70,000 new reports per day by 2021, depends in large part on voluntary platform scanning to generate those reports.
NCMEC's data is instructive on the scale problem. In the early days of the CyberTipline, which launched in 1998, the organisation received perhaps 100 reports of child exploitation per week. The first report from a tech company arrived in 2001. By 2021, 70,000 reports per day were coming in - the majority from tech companies using proactive detection tools. The gap that opened on April 3 risks unwinding that infrastructure, at least within EU legal frameworks.
The four companies - Google, Meta, Microsoft, Snap - have staked a position. They will keep scanning. Whether that voluntary action will be sufficient, and whether smaller platforms will follow suit, remains to be seen. EU negotiators face pressure from multiple directions: child safety organisations, technology companies seeking legal clarity, and privacy advocates who have consistently warned that mass scanning of private communications poses risks that fall disproportionately on vulnerable groups.
Timeline
- 1998 - NCMEC launches the CyberTipline as a public reporting mechanism for child exploitation online
- 2001 - NCMEC receives its first CyberTipline report from a technology company
- 2021 (February) - Google's algorithms incorrectly flag photos taken by two fathers in the US, leading to police investigations; both men are cleared of wrongdoing
- 2021 - A previous lapse in the EU's legal framework causes reports of CSAM to drop dramatically, according to the coalition of 247 organisations
- 2021 (second half) - LinkedIn reports 75 accounts to EU authorities; only 31 involve confirmed CSAM upon manual review
- 2021 (August) - Apple announces a client-side iCloud photo scanning system for CSAM detection
- 2021 (September) - Apple pauses the scanning project following criticism from digital rights groups
- 2022 (August) - EFF publishes a report documenting Google's false CSAM accusations against two fathers and the company's refusal to restore their accounts
- 2022 (December) - Apple confirms it has abandoned its CSAM photo-scanning project
- 2023 (August) - Child safety group Heat Initiative announces a campaign pressuring Apple to detect and report CSAM from iCloud; Apple issues a detailed response maintaining its decision
- 2025 (February) - EU scraps the ePrivacy Regulation proposal after eight years of failed negotiations
- 2025 (February) - EDPB details new age verification principles requiring least-intrusive methods for digital services
- 2025 (July) - EU age verification app draws criticism for requiring Google's Play Integrity API on Android devices
- 2025 (July) - EU announces mandatory age verification implementation across all member states by end of 2026
- 2025 (August) - German court confirms Meta AI training likely includes children's data in violation of European privacy law
- 2025 (August) - Google announces expanded child safety advertising policy renaming CSAI to CSAE with immediate account suspension enforcement
- 2025 (October) - Google finalises expanded CSAE advertising policy enforcement, removing standard warning periods
- 2025 (December 25) - Grok generates prohibited images of minors, prompting the EU to extend its DSA investigation into X in January 2026
- 2026 (April 1) - A coalition of 247 organisations publishes a joint statement condemning EU policymakers' failure to extend the ePrivacy derogation for CSAM detection
- 2026 (April 3) - The EU ePrivacy derogation enabling CSAM detection by online service providers expires without a replacement framework
- 2026 (April 4) - Google, Meta, Microsoft, and Snap issue a joint statement pledging to continue voluntary CSAM detection and calling on the EU to act urgently
Summary
Who: Google, Meta, Microsoft, and Snap, alongside a coalition of 247 child rights organisations, and EU policymakers who failed to renew the ePrivacy derogation.
What: The EU's legal basis permitting online platforms to proactively detect and remove child sexual abuse material expired on April 3, 2026. Four major technology companies pledged to continue voluntary scanning using hash-matching tools, while a coalition of 247 organisations condemned the legislative failure and called for an urgent permanent framework.
When: The derogation expired on April 3, 2026. The coalition statement was published on April 1. The joint statement from the four technology companies was published on April 4. The events follow years of contested debate about CSAM scanning, including Apple's abandoned scanning project in 2021-2022 and documented cases of false accusations by Google's systems in 2021.
Where: The legal gap applies across the European Union's 27 member states. The technology companies involved operate globally, and the downstream effects on law enforcement leads and victim identification are expected to extend beyond EU borders, according to the organisations involved.
Why: EU institutions failed to reach agreement on a permanent legislative framework governing CSAM detection on online platforms before the temporary derogation expired. The debate involves a fundamental tension between child protection - which requires proactive scanning at scale to generate law enforcement leads and remove abusive content - and privacy rights, which limit blanket scanning of private communications. Both the scale of false positives documented in existing systems and the potential for such tools to be abused in authoritarian contexts fuel opposition to mandatory scanning mandates, while child safety advocates argue the human cost of inaction is immediate and measurable.