Irish DPC launches Grok LLM training inquiry
Ireland's data watchdog investigates X's use of EU user data for AI training, examining GDPR compliance in the emerging field of generative AI data sourcing.

The Data Protection Commission (DPC) of Ireland has initiated a formal inquiry into X Internet Unlimited Company (XIUC) regarding its processing of personal data from EU and EEA users' public posts on the X platform for training artificial intelligence models. The announcement, made on April 11, 2025, specifically targets the company's use of such data to train the Grok Large Language Models (LLMs).
According to the DPC's official statement, "The Data Protection Commission has today announced the commencement of an inquiry into the processing of personal data comprised in publicly-accessible posts posted on the 'X' social media platform by EU/EEA users, for the purposes of training generative artificial intelligence models, in particular the Grok Large Language Models (LLMs)."
The inquiry, conducted under Section 110 of the Data Protection Act 2018, will scrutinize XIUC's compliance with key provisions of the General Data Protection Regulation (GDPR), with particular attention to the lawfulness and transparency of the data processing activities. This development marks one of the first major regulatory investigations into the use of social media content for AI training in the European Union.
Grok, developed by xAI, represents a significant AI initiative connected to the X platform. The DPC describes Grok as "the name of a group of AI models developed by xAI," noting that "These Large Language Models are used, among other things, to power a generative AI querying tool/Chatbot, which is available on the X platform." The regulatory body acknowledges that like other modern LLMs, the Grok models have been "developed and trained on a wide variety of data."
The investigation's scope focuses specifically on a subset of training data controlled by XIUC—the personal data contained within publicly accessible posts made by users in the European Union and European Economic Area. The central question at the heart of this inquiry is straightforward yet profound: was this personal data lawfully processed for the purpose of training the Grok LLMs?
Dr. Des Hogan and Dale Sunderland, the Commissioners for Data Protection who authorized the inquiry, officially notified XIUC of the investigation earlier this week. The timing of this inquiry is particularly noteworthy as it comes just weeks after XIUC formally changed its corporate identity. The DPC notes that "XIUC advised the DPC on 25 March 2025 that, from 1 April 2025, the name of the Irish entity and Data Controller for EU users of X would change from Twitter International Unlimited Company ('TIUC') to X Internet Unlimited Company as part of ongoing Twitter to X rebranding efforts."
This investigation highlights the increasing regulatory attention being directed toward the data harvesting practices that underpin generative AI technologies. While companies have rushed to develop and deploy increasingly sophisticated AI models, questions about the provenance and permissible uses of training data have lagged behind technological advancement.
The case raises fundamental questions about the status of public social media posts under data protection law. Though users publish such content openly, European regulations maintain that personal data within these posts retains certain protections. The inquiry will likely establish important precedents regarding what constitutes appropriate notice and legal basis for processing such data for AI training purposes.
For XIUC, the stakes are considerable. GDPR violations can result in significant penalties, with maximum fines reaching €20 million or 4% of total worldwide annual turnover, whichever is higher. Beyond potential financial penalties, the outcome could necessitate changes to how Grok models are trained and potentially even deployed within the European market.
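The "whichever is higher" ceiling works out as a simple maximum of two figures. A minimal sketch, using a placeholder turnover figure rather than X's actual financials:

```python
# Illustrative sketch of the GDPR Article 83(5) fine ceiling: the maximum
# penalty is the greater of EUR 20 million or 4% of total worldwide annual
# turnover. The turnover input below is hypothetical, not X's actual revenue.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the upper bound on a fine for a given worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a hypothetical company with EUR 3 billion in worldwide turnover,
# 4% (EUR 120 million) exceeds the EUR 20 million floor.
print(gdpr_max_fine(3_000_000_000))
```

For smaller companies the flat €20 million floor dominates, which is why the cap is expressed as a maximum of the two rather than a single percentage.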
The investigation also arrives at a critical juncture for the broader AI industry. Major AI developers have faced increasing scrutiny regarding their training data practices, with several class-action lawsuits already filed in various jurisdictions alleging copyright infringement and unauthorized use of personal data. This regulatory action by the DPC may signal a more interventionist approach from European data protection authorities in the AI space.
This case could establish important benchmarks for compliance in AI development. The DPC's findings may clarify whether publicly available social media content can be legally repurposed for commercial AI training without explicit user consent—a question with far-reaching implications for the entire generative AI sector.
Marketing professionals should pay particular attention to this inquiry as it directly impacts how companies can leverage user-generated content for developing AI tools and services. The case could potentially redefine the boundaries of permissible data usage in digital marketing technologies that increasingly incorporate AI capabilities.
The investigation comes amid broader international discussions about appropriate governance frameworks for AI development. The EU's AI Act, which recently came into force, attempts to establish risk-based regulatory categories for different AI applications, though many specific requirements remain to be defined through implementing acts and guidance.
The investigation also underscores the critical importance of data governance in AI initiatives. Companies developing or utilizing generative AI tools must carefully evaluate their data sourcing and processing activities, particularly when incorporating content that may contain personal information.
This case also underscores the expanding definition of personal data under European regulation. While traditional personal identifiers like names and email addresses are clearly protected, the DPC's approach suggests that opinions, perspectives, and other content authored by individuals may similarly fall within regulatory scope when used for AI training.
The inquiry's findings could potentially influence how marketing departments approach data-driven personalization. If stringent consent requirements are imposed for AI training, companies may need to revise their data collection practices and privacy notices to explicitly address possible AI uses of customer content.
Marketing technology vendors face particular exposure to the outcome of this investigation. Those offering AI-enhanced solutions for content generation, customer service, or predictive analytics may need to reexamine their data processing practices to ensure compliance with any standards established through this regulatory action.
The case also highlights regional differences in data protection standards that continue to challenge global marketing operations. While European regulations impose strict requirements for processing personal data, other major markets maintain different approaches—creating complexity for international campaigns utilizing AI technologies.
Competitors in the social media space will closely monitor this investigation for its potential competitive impacts. If XIUC faces restrictions on how it can utilize user data for AI development, this could potentially affect the platform's ability to deliver certain AI-enhanced features in the European market.
The matter further emphasizes the importance of robust privacy impact assessments when deploying new technologies. Marketing departments implementing AI solutions should carefully document their data processing activities and evaluate potential risks to individual rights, particularly when repurposing existing data for machine learning applications.
Beyond the immediate parties involved, this investigation reflects broader societal negotiations about appropriate boundaries for artificial intelligence. As generative AI technologies increasingly produce human-like content and interactions, their development processes face heightened ethical and legal scrutiny.
Further developments in this case are expected in the coming months as the DPC conducts its investigation. The outcome will likely influence not only XIUC's operations but also establish important precedents for how European data protection law applies to generative AI technologies more broadly.
Timeline
- March 25, 2025: XIUC advises the DPC that the Irish entity and Data Controller for EU users of X would change from Twitter International Unlimited Company to X Internet Unlimited Company effective April 1, 2025
- April 1, 2025: Official name change from Twitter International Unlimited Company to X Internet Unlimited Company takes effect
- April 11, 2025: The DPC announces the commencement of an inquiry into XIUC's processing of EU/EEA users' personal data for training Grok LLMs
- April 11, 2025: Dr. Des Hogan and Dale Sunderland, Commissioners for Data Protection, notify XIUC of the inquiry
- April 13, 2025: The inquiry is ongoing, with findings expected in the coming months