A Dutch court this week issued an injunction prohibiting xAI's Grok from generating and distributing non-consensual nude imagery and child sexual abuse material in the Netherlands. The ruling imposes daily fines of €100,000 on X.AI, X Corp, and X Internet Unlimited Company for each day of non-compliance, capped at €10,000,000 per entity. Handed down by the Amsterdam District Court's preliminary relief judge on March 26, 2026, under case number C/13/783613 / KG ZA 26-120, the judgment marks a significant legal checkpoint in an escalating chain of events that began when xAI launched Grok's image-editing feature on December 29, 2025.

The plaintiff, Stichting Offlimits, is an Amsterdam-based nonprofit dedicated to preventing and combating online sexual abuse. Represented by attorneys O.M.B.J. Volgenant and K. Han, the foundation filed for expedited proceedings on February 19, 2026, after a demand letter sent to the defendants on February 4, 2026, initially received no substantive response.

The feature that triggered the crisis

According to the court judgment, xAI announced the image-editing feature on or around December 29, 2025, allowing X users to modify photos already posted on the platform using Grok's generative capabilities. The feature was accessible through the Grok-in-X function - an integration enabling users to interact with Grok directly via the @grok account on X - as well as through the standalone Grok app on iOS and Android and the website grok.com. Users could select "Edit image with Grok" on any image encountered on X, after which they were redirected to the Grok interface.

The results were swift and documented. The nonprofit Center for Countering Digital Hate (CCDH) published an analysis on January 22, 2026 estimating that, in the 11-day window between December 29, 2025, and January 8, 2026, Grok generated approximately 3 million sexualized images. Of those, an estimated 23,338 appeared to depict children. The CCDH's methodology involved sampling 20,000 images, of which 12,995 (65%) were identified as sexualized content depicting adults or children, and 101 (0.5%) were assessed as likely depicting minors in a sexualized context - a figure then extrapolated against an estimated total of 4.6 million Grok-produced images posted to X during the period.

According to the CCDH publication, cited in the court ruling: "The AI tool Grok is estimated to have generated approximately 3 million sexualized images, including 23,000 that appear to depict children, following the launch of a new image-editing feature powered by the tool on X."
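The CCDH's headline figures follow from straightforward sample-to-population scaling. A minimal sketch of that arithmetic, using only the numbers reported in the ruling (20,000 sampled images against an estimated 4.6 million total); note that linear scaling yields roughly 23,230 rather than the 23,338 figure reported, so the CCDH's published methodology may differ slightly in rounding or weighting:

```python
# Reproduce the CCDH's sample-to-population extrapolation using the
# figures cited in the court ruling.
SAMPLE_SIZE = 20_000
SEXUALIZED_IN_SAMPLE = 12_995       # adults or children, roughly 65%
MINORS_IN_SAMPLE = 101              # likely depicting minors, roughly 0.5%
ESTIMATED_TOTAL_IMAGES = 4_600_000  # Grok-produced images posted to X

sexualized_rate = SEXUALIZED_IN_SAMPLE / SAMPLE_SIZE
minors_rate = MINORS_IN_SAMPLE / SAMPLE_SIZE

# Scale the sample rates up to the estimated population.
est_sexualized = round(sexualized_rate * ESTIMATED_TOTAL_IMAGES)
est_minors = round(minors_rate * ESTIMATED_TOTAL_IMAGES)

print(est_sexualized)  # roughly 2.99 million, reported as "approximately 3 million"
print(est_minors)      # roughly 23,230, close to the reported 23,338
```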

On January 9, 2026, X restricted image-generation functionality to paid Grok users, citing widespread public criticism of the feature's misuse. Five days later, on January 14, 2026, the company announced technical measures designed to prevent the Grok account on X from editing images to show real people in revealing clothing, such as bikinis. Despite this announcement, The Guardian, in an article from January 16, 2026 submitted as evidence by Offlimits, reported being able to create short videos of real women being stripped to bikinis using publicly available photographs - and to publish those videos on X without any apparent moderation.

Regulatory pressure builds

The events drew immediate scrutiny from European authorities. On January 26, 2026, the European Commission announced a formal investigation into X under the Digital Services Act, examining whether X had adequately assessed and mitigated risks related to the deployment of Grok's features within the EU. The investigation focused specifically on the dissemination of illegal content, including manipulated sexually explicit images and potential child sexual abuse material. In its press release of that date, the Commission stated that "these risks appear to have materialized, exposing EU citizens to serious harm."

That investigation built on proceedings against X that the Commission had launched in December 2023, initially unrelated to Grok. The January 2026 probe extended those proceedings to encompass Grok's integration into the platform's recommendation architecture and content generation capabilities.

What xAI claimed it had done

In a letter dated March 6, 2026, attorneys for the defendants outlined the safeguards xAI said had been implemented. According to that letter, quoted in the court documents: "Our Clients categorically reject any suggestion that the current image generation functionality of the Grok-in-X feature available at https://x.com/i/grok, the chatbot available at Grok.com, and/or the @grok account available at x.com/grok permits the generation of (i) non-consensual intimate imagery of real, identifiable persons and/or (ii) CSAM."

The defendants described two waves of technical measures. On or around January 4, 2026, xAI said it implemented input filters designed to prevent Grok from responding to prompts seeking to generate sexualized content involving minors. These filters, according to xAI, reject specific classes of sensitive requests, including those related to child sexual abuse material. On January 20, 2026, xAI said it further strengthened existing safeguards to prevent nude image generation and the modification of existing images. Generating images of real people in sexualized clothing was, at that point, "further restricted." A third element involved restricting image generation on the @grok account to users with an active X Premium subscription, eliminating anonymous access to the feature.

These measures applied, xAI said, to the Grok-in-X function and to the chatbot available via grok.com. The standalone app's status under these measures was less clearly established in the court record.

What Offlimits showed the court could still happen

The defendants' assurances did not survive judicial scrutiny. Offlimits submitted Exhibit 28 - a list of screenshots - documenting that on March 9, 2026, the same day that a meeting was held between the parties, it remained possible to generate a video using Grok based on a photograph of a real person, placing that person in a sexualized context. The Dutch prompt used in the test is reproduced in the court judgment, instructing Grok to produce a slow-motion video of a man stripping, with specific visual details, and to generate an AI image based on the output.

According to the court, Offlimits stated - without contradiction from the defendants - that "the resulting video was generated without Grok verifying whether consent had been obtained from the data subject, whose face appears in the video." The court found it plausible that Grok continued to facilitate the generation of non-consensual undressing images. As noted in Offlimits' letter of March 11, 2026 to the defendants: "In reality, it is still easy to generate illegal visual content, whether using a free Grok account or a paid Grok account. Grok does not verify whether the person appearing in the images and undressed has given explicit consent to do so. Grok does not verify whether the person depicted in the visual material is of legal age."

The court found this situation particularly striking given that xAI's own legal representatives had, in writing that very morning, categorically denied that any such content could be generated.

Offlimits also submitted images generated on March 9, 2026, of a girl it described as approximately 14 years old. The court acknowledged that determining the legal status of those images was more complex - one showed a young female figure with deep cleavage and a semi-transparent top, but explicit nudity was absent and the character was fictional. The court left the precise legal classification open, but concluded that the images demonstrated users could "push the limits" during image generation, with the question of whether output constitutes child pornography depending on context.

The defendants admitted in court that they had not tested the use of underage ages in prompts, citing the fact that doing so would itself be prohibited. The court found this inconsistent with the categorical claim that generating child sexual abuse material via prompts was impossible.

The court based its analysis on two distinct legal grounds. For non-consensual undressing imagery, it relied on the General Data Protection Regulation (GDPR), specifically Article 79(2), which permits proceedings to be brought in the member state where the data subject habitually resides. The processing of personal data - specifically the face of a real, identifiable person - in the generation of sexualized content without consent was found to constitute an unlawful interference with privacy rights. X Internet Unlimited Company (XIUC), incorporated under Irish law in Dublin, operates the X platform in the European Economic Area and the United Kingdom. X.AI's European Privacy Policy designates XIUC as the data controller for Netherlands-resident users.

For child sexual abuse material, the court applied Dutch tort law under Article 6:162 of the Dutch Civil Code, finding that facilitating the generation of such material violates unwritten norms of social conduct and unlawfully contributes to a climate of online abuse. Jurisdiction over the US-based entities was established through Article 7 of the Dutch Code of Civil Procedure, which allows joint proceedings where the claims share sufficient connection to justify consolidated treatment.

The court also addressed the standing of Offlimits to bring the action. Under Article 3:305a of the Dutch Civil Code, a foundation can bring representative legal claims where it pursues an idealistic purpose set out in its articles of association. The court applied the "light regime" under paragraph 6 of that article, applicable when the action concerns a matter of public interest. Offlimits does not pursue damages; its claims are structural - compelling operational changes from the defendants.

What the injunction requires

The injunction issued today covers three defendants. X.AI LLC, based in Palo Alto, California, is prohibited from using undressing functionality to generate and distribute sexual imagery of persons without their explicit consent, insofar as those persons reside in the Netherlands. It is separately prohibited from producing, distributing, or displaying material qualifying as child pornography under Dutch law. X.AI is also required to confirm in writing to Offlimits how it has complied with both prohibitions.

X Corp, based in Bastrop, Texas, is prohibited from offering Grok functionality as part of the X platform for as long as Grok acts in violation of the first prohibition. Because X Corp does not directly offer services in the Netherlands - that role falls to XIUC under EEA jurisdiction - the court limited X Corp's injunction to the non-consensual nudity prohibition only, not the child pornography element. The court found that it lacked international jurisdiction to order X Corp to prevent the dissemination of images of fictional persons outside the Netherlands.

XIUC, incorporated in Dublin and responsible for X's EEA operations, is prohibited from offering Grok functionality on the X platform for as long as Grok violates either prohibition. Unlike X Corp, XIUC's injunction covers both the undressing imagery and the child pornography components, because XIUC is directly involved in providing X within the Netherlands.

Each defendant faces a daily fine of €100,000 for non-compliance, with each entity's maximum penalty capped at €10,000,000. The court noted that if xAI and XIUC truly ensure unlawful content cannot be generated or distributed via Grok or X - as they themselves claimed - they would incur no fines.
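The penalty structure implies a hard time horizon before the caps are exhausted. A quick sketch of that arithmetic, using the figures from the ruling:

```python
# Penalty arithmetic from the ruling: €100,000 per day per entity,
# capped at €10,000,000 per entity, across three defendant entities.
DAILY_FINE_EUR = 100_000
CAP_PER_ENTITY_EUR = 10_000_000
ENTITIES = 3

# Days of continuous non-compliance before each entity's cap is reached.
days_to_cap = CAP_PER_ENTITY_EUR // DAILY_FINE_EUR

# Combined maximum exposure if all three entities hit their caps.
max_total_exposure = CAP_PER_ENTITY_EUR * ENTITIES

print(days_to_cap)         # 100 days per entity
print(max_total_exposure)  # €30,000,000 combined maximum
```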

The defendants were jointly and severally ordered to pay Offlimits' litigation costs of €2,226.57, itemised as €125.57 for summons costs, €735 in court filing fees, €1,177 in attorneys' fees, and €189 in additional costs.
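The itemised amounts sum exactly to the total awarded. A minimal check, working in euro cents to avoid floating-point rounding:

```python
# Verify the itemised litigation costs from the ruling sum to the
# €2,226.57 total, using integer euro cents for exact arithmetic.
items_cents = {
    "summons costs": 12_557,
    "court filing fees": 73_500,
    "attorneys' fees": 117_700,
    "additional costs": 18_900,
}

total_cents = sum(items_cents.values())
print(f"€{total_cents / 100:,.2f}")  # €2,226.57
```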

Why this matters for advertisers and platform operators

The ruling arrives at a moment when the relationship between Grok and X's advertising proposition is under particular scrutiny. As covered by PPC Land, X this year circulated a Brand Suitability Playbook to advertisers positioning Grok as the core of its brand safety infrastructure - a pre-bid and adjacency scoring engine claiming 99%+ brand safety scores and 97%+ brand suitability scores. Integral Ad Science extended brand safety measurement to X profiles in January 2026, the same period that the Grok image-editing controversy peaked.

The contradiction between Grok's role as a content moderation tool and its documented generation of prohibited content has been a persistent problem for X's advertiser recovery narrative. X's annual ad revenue fell from $2.43 billion in 2021 to an estimated $1.25 billion in 2025, according to Emarketer projections, following the mass departure of advertisers after Elon Musk's $44 billion acquisition. Brands avoiding X for brand safety reasons have in some cases been purchasing X inventory inadvertently through Google Ads programmatic extensions, according to consultant Jonathan D'Souza-Rauto, who flagged the issue in January 2026.

The Dutch ruling adds a formal legal dimension to what had been primarily a reputational and commercial problem for X. A court has now found, under preliminary relief standards, sufficient grounds to conclude that Grok continues to enable the generation of content that violates both GDPR and Dutch tort law. The judgment does not constitute a final determination of liability - preliminary relief proceedings in the Netherlands are summary in nature - but the injunctions are immediately enforceable.

For platform operators across Europe, the case illustrates how generative AI features integrated into social media platforms can attract simultaneous regulatory scrutiny under multiple frameworks: the GDPR for personal data processing, national tort law for harm facilitation, and the DSA for systemic risk assessment obligations. The European Commission's January 26, 2026 DSA investigation into X specifically assessed whether the platform complied with Articles 34 and 35 of the DSA, which require very large online platforms to conduct systemic risk assessments and implement proportionate mitigation measures before deploying features with significant impact on their risk profile.

Article 42 of the DSA further requires platforms to notify the Commission before deploying functionalities that may have a critical impact on their systemic risk profile. Whether xAI fulfilled those notification and assessment obligations before launching the image-editing feature on December 29, 2025, is a question that remains open in the Commission's proceedings.

Timeline

  • December 9, 2024 - xAI integrates the Aurora image generation model into Grok, expanding beyond text capabilities (PPC Land)
  • December 14, 2024 - Grok becomes available for free to all X users, with usage limits, following a period of premium-only access
  • December 25, 2025 - Grok generates prohibited sexualized images of minors, bypassing safety filters through specific prompts
  • December 29, 2025 - xAI launches image-editing feature on X, allowing users to modify photos using Grok
  • January 4, 2026 - xAI implements additional input filters to prevent generation of sexualized content involving minors
  • January 9, 2026 - Image-generation functionality restricted to paid Grok users following widespread criticism
  • January 14, 2026 - X announces technical measures to prevent the @grok account from editing images to show real people in revealing clothing
  • January 16, 2026 - The Guardian reports that despite January 14 measures, the creation of sexualized videos of real women was still possible
  • January 20, 2026 - xAI strengthens safeguards against nude image generation; IAS extends brand safety measurement to X profiles
  • January 22, 2026 - CCDH publishes analysis estimating 3 million sexualized images generated between December 29, 2025 and January 8, 2026, including an estimated 23,338 depicting children
  • January 26, 2026 - European Commission launches formal DSA investigation into X over Grok deployment
  • February 4, 2026 - Offlimits sends formal demand letter to xAI and X
  • February 19, 2026 - Offlimits files for expedited preliminary relief proceedings with Amsterdam District Court
  • February 24, 2026 - Summonses served on defendants
  • March 6, 2026 - Defendants' attorneys respond, categorically denying that Grok can generate prohibited content
  • March 9, 2026 - Meeting between parties; Offlimits generates prohibited video using Grok the same day
  • March 11, 2026 - Offlimits sends screenshots of March 9 Grok outputs to defendants as evidence
  • March 12, 2026 - Oral hearing at Amsterdam District Court
  • March 26, 2026 - Amsterdam District Court issues injunction prohibiting Grok and X from generating non-consensual nude imagery and child sexual abuse material; €100,000 daily fines imposed, capped at €10,000,000

Summary

Who: Stichting Offlimits, an Amsterdam-based nonprofit dedicated to combating online sexual abuse, filed proceedings against X.AI LLC (Palo Alto, California), X Corp (Bastrop, Texas), and X Internet Unlimited Company (Dublin, Ireland) - the three legal entities operating Grok and the X platform.

What: The Amsterdam District Court today issued a preliminary injunction prohibiting Grok from generating and distributing non-consensual nude imagery and child sexual abuse material in the Netherlands, and prohibiting X and XIUC from offering Grok as part of the X platform while those violations continue. Daily fines of €100,000 per defendant, capped at €10,000,000 each, apply for non-compliance.

When: The underlying events began on December 29, 2025, when xAI launched the image-editing feature. Proceedings were filed February 19, 2026. The hearing took place March 12, 2026. The judgment was issued today, March 26, 2026.

Where: The proceedings were heard at the Amsterdam District Court. The injunction applies to images of persons residing in the Netherlands and to the production and distribution of such images in the Netherlands. X Internet Unlimited Company's injunction, as the EEA operator, covers both prohibition categories; X Corp's is limited to non-consensual undressing imagery under Dutch GDPR jurisdiction.

Why: Offlimits demonstrated that, despite repeated assurances from xAI that robust safeguards were in place, it remained possible on March 9, 2026 - the day of the pre-hearing mediation meeting - to generate a sexualized video of a real person using Grok without any consent verification. The court found sufficient grounds under the GDPR and Dutch tort law to conclude that Grok continued to facilitate non-consensual image generation, and that the defendants' categorical denials were inconsistent with the documented technical reality.
