The Autoriteit Persoonsgegevens (AP), the Netherlands' national data protection authority, on February 12, 2026, issued a formal warning against the use of OpenClaw and similar open-source AI agent systems, citing what it described as major risks of data breaches, account takeovers, and unauthorised remote access to computer systems. The warning lands at a moment when autonomous AI agents are spreading rapidly across both consumer and enterprise environments - yet regulatory frameworks and security standards struggle to keep pace.
OpenClaw is an open-source platform that provides users with an AI assistant capable of executing tasks autonomously. To enable this, according to the AP, users grant the system full access to their computer and programmes, including email, files and online services. The AI can then perform tasks without requiring explicit prior consent from the user for each action. That architecture - compelling in its convenience - is precisely what makes the system dangerous, the regulator argues.
"The cybersecurity community regards this type of autonomous AI agent as a 'Trojan Horse'," the AP stated in its February 12 announcement, "as it is an attractive target for abuse."
What the security researchers found
The AP's warning draws on findings from security researchers worldwide. The picture they paint is detailed and specific. About one-fifth of all available plug-ins for OpenClaw appear to contain malware designed to steal login credentials or cryptocurrency assets. The platform is separately vulnerable to a class of attack known as indirect prompt injection, in which hidden commands are embedded inside seemingly ordinary websites, emails or instant messages. When the AI system processes those documents, it can be tricked into executing the attacker's instructions rather than the user's.
The consequences, according to the AP, can be severe. A successful indirect prompt injection attack could allow an attacker to acquire accounts from linked services such as Google, Facebook and Apple ID - effectively handing the attacker a master key to a wide range of personal data and connected platforms. The AI system could also be manipulated into reading emails, viewing calendar entries and accessing local files such as personal documents. More technically, attackers can steal API keys from AI model integrations, allowing them to hijack use of those AI services entirely.
Beyond these attack vectors, security researchers have identified critical vulnerabilities that go further still. According to the AP, an attacker can execute malicious commands or code remotely, without requiring any physical access to the target computer. The attacker can take full control of the system via OpenClaw or a similar agent, steal data or install malware. Separately, misconfiguration risks exist: OpenClaw allows users to install incorrect configurations that can expose personal data to public visibility without the user realising it.
The system runs locally on the user's computer. That detail might suggest it is insulated from external threats. The AP is explicit that this assumption is wrong. Running locally, it notes, does not automatically mean the system is secure. Without proper security and risk management, its use can lead to serious security incidents, data breaches and unauthorised access to personal data.
The call to users and organisations
The AP is calling on users and organisations not to use OpenClaw and similar AI agents on systems that hold privacy-sensitive or confidential data. The authority lists examples: access codes, financial records, employee data, private documents and identity documents. The scope of what the AP considers at-risk is broad. It is not just corporate IT departments the authority has in mind.
Parents are specifically asked to check whether their children have installed such systems on home devices. That particular call-out reflects awareness that consumer-facing AI tools have moved well beyond technical users, reaching households where the risks are poorly understood.
For those who continue to use AI agent platforms, the AP advises caution with external plug-ins, strict application of access controls, and the immediate renewal of login credentials and API keys in any case where exposure is suspected. The regulator's framing is unambiguous: organisations and individual users remain responsible for compliance with the General Data Protection Regulation regardless of whether the software they use is open-source or commercially developed. According to the AP, "innovation and open source do not discharge the obligation to limit risks in advance."
GDPR liability remains unchanged
That last point carries legal weight. The GDPR, which applies across all EU member states, imposes obligations on data controllers and processors regardless of the tools they use to process data. Choosing open-source software does not transfer or remove accountability. An organisation that deploys OpenClaw to process employee records or customer information and subsequently suffers a breach would be assessed on whether it took appropriate technical and organisational measures - not on whether the offending software was proprietary or free.
The AP has been active on AI-related privacy issues well before this warning. The authority previously set out GDPR preconditions for generative AI in a May 2025 consultation covering training data requirements, data subject rights, and the need for clear purpose descriptions in AI processing activities. That consultation concluded that "the vast majority of all generative AI models currently fall short in terms of legitimacy" under GDPR.
The February 2026 warning is different in character - it addresses not the development of AI models but their operational deployment as autonomous agents with system-level access. The distinction matters. A generative AI chatbot accessed via a browser poses different risks than an agent that can read email, run code, access local files and interact with third-party services without human approval for each step.
The EU AI Act dimension
At the European regulatory level, the AP is calling for clarification that autonomous AI agents like OpenClaw fall within the scope of the EU AI Act. The AI Act, which began its phased application in 2024 and entered its most stringent obligations in August 2025, sets product safety requirements for AI systems. According to the AP, those requirements should cover systems like OpenClaw so that unsafe applications can be excluded from the market.
That call for scope clarification reflects genuine legal ambiguity. The AI Act's classification of systems by risk level - from minimal risk to unacceptable risk - was designed with identifiable AI applications in mind. Open-source autonomous agents operating locally present classification challenges that the regulation's drafters did not fully anticipate. The AP is not alone in identifying this gap. The European Data Protection Board has been working to align GDPR and AI Act frameworks since late 2024, and the tension between the two regulatory regimes remains unresolved.
Meanwhile, proposals circulating since November 2025 to amend the GDPR through the EU's Digital Omnibus initiativewould, if adopted, establish AI training as a legitimate interest under the regulation - a shift that privacy advocates argue could further complicate accountability for AI-related data incidents. The Netherlands has itself raised concerns about those proposals. A formal Dutch government submission expressed support for genuine simplification while rejecting changes that weaken core protections.
Why this matters for marketing and advertising organisations
The AP's warning, while addressed to users and organisations broadly, has specific implications for marketing and advertising teams. Agentic AI has been moving rapidly into advertising workflows. The advertising industry spent much of late 2025 building infrastructure for AI agents capable of managing campaigns autonomously, with Amazon, Google, Yahoo and others all announcing or expanding autonomous systems. IAB Tech Lab launched monthly in-person Agentic AI Boot Camps beginning February 12, 2026 - the same day the AP issued its warning.
Marketing organisations hold substantial volumes of the exact data the AP identifies as high-risk. CRM systems contain customer contact details. Campaign management tools hold API keys for advertising platforms. Agency environments routinely access client financial records and identity documents as part of media spend management. An AI agent granted full system access in such an environment - and then compromised through a plug-in or a prompt injection attack - could exfiltrate data at a scale difficult to detect before the damage was done.
AI agents have already demonstrated the capacity to masquerade as humans and bypass security systems, with research from December 2025 documenting how AI agents from major platforms rotate IP addresses and spoof user agents to evade detection. The AP's Trojan Horse framing extends that concern inward: the risk is not only agents scraping external data, but agents being turned against the systems they are installed on.
Industry debate about agentic AI protocols has centred on efficiency and standardisation, but security voices have been consistent. David Kohl, an industry veteran commenting on the Ad Context Protocol in October 2025, warned that "the next generation of agentic automation must bake in stronger security and authentication from the start." The AP's warning gives regulatory weight to that concern.
The plug-in ecosystem is a particularly acute problem. OpenClaw's extensibility through third-party plug-ins mirrors the browser extension and app store models that have repeatedly generated malware distribution problems in other contexts. The finding that approximately one-fifth of available plug-ins contain malware is a significant proportion. In a professional environment where an agent might be granted access to financial systems, client records and advertising platform APIs, a single malicious plug-in represents a substantial breach risk.
The open-source accountability gap
Open-source software occupies an unusual position in regulatory frameworks. Its transparency - anyone can inspect the code - is often cited as a security advantage. The AP's warning implicitly rejects the assumption that openness equals safety in the context of AI agents. The risks it identifies are not primarily in the core code but in the ecosystem around it: plug-ins of unknown provenance, configurations that can be set up incorrectly, and a platform architecture that exposes linked services to any attacker who can manipulate the agent's behaviour through prompt injection.
The GDPR's principle of data protection by design and by default requires that privacy protections are built into systems from the outset. An open-source platform that allows users to install arbitrary plug-ins and grant full system access does not, on its face, meet that standard. Whether regulators will bring enforcement action against specific OpenClaw deployments will depend on whether identifiable data controllers can be held responsible for processing decisions made through the platform.
For organisations that have been evaluating AI agent tools as part of broader automation strategies, the AP warning is a signal that regulatory scrutiny of these deployments is beginning in earnest - not at the level of hypothetical risk assessment but as a concrete, named-system advisory from a national data protection authority. Other European regulators may follow.
Timeline
- December 18, 2024 - European Data Protection Board clarifies privacy rules for AI models, establishing that AI models trained on personal data cannot automatically be considered anonymous.
- May 23, 2025 - Dutch Data Protection Authority releases GDPR preconditions consultation for generative AI, finding the majority of current AI models fall short of legal requirements.
- July 21, 2025 - Industry analyst argues agentic AI threatens traditional DSP business models, warning of fundamental disruption to advertising technology infrastructure.
- August 2, 2025 - AI Act's most stringent obligations enter into application, including prohibition on social scoring.
- September 16, 2025 - Former ECB chief Mario Draghi calls for GDPR simplification and AI Act pause at Brussels conference, citing competitive pressures from the US and China.
- October 15, 2025 - Ad Context Protocol launched by six advertising companies, sparking debate about security and authentication standards in agentic advertising.
- November 2025 - European Commission circulates internal draft amendments to GDPR through Digital Omnibus initiative, proposing to establish AI training as legitimate interest.
- November 16, 2025 - Dutch regulator publishes consultation on AI social scoring prohibition under Article 5 of the AI Act.
- December 13, 2025 - Netherlands raises formal concerns about EU Digital Omnibus privacy changes, opposing weakening of core data protection protections.
- January 6, 2026 - Research documents AI agents masquerading as humans to bypass website defenses, including xAI's Grok generating 16 requests from 12 IP addresses with spoofed user agents.
- January 10, 2026 - Agentic AI infrastructure dominates advertising week, with IAB Tech Lab announcing monthly Agentic AI Boot Camps beginning February 12, 2026.
- February 12, 2026 - The Autoriteit Persoonsgegevens issues formal warning against OpenClaw and similar open-source AI agents, citing malware in plug-ins, prompt injection vulnerabilities, remote code execution risks, and GDPR compliance obligations.
Summary
Who: The Autoriteit Persoonsgegevens (AP), the Dutch national data protection authority, issued the warning. The affected parties include individual users, organisations processing personal data, parents of minors, and the broader European regulatory community, including the EU AI Act framework bodies.
What: The AP formally warned against the use of OpenClaw and similar open-source autonomous AI agent systems, identifying four distinct risk categories: malware-infected plug-ins affecting approximately one-fifth of available extensions; indirect prompt injection attacks via websites, emails and messages; critical remote code execution vulnerabilities; and misconfiguration risks that can expose personal data publicly. The regulator also called for these systems to be formally brought within the scope of the EU AI Act.
When: The warning was published on February 12, 2026.
Where: The Netherlands, with European-level implications. The AP called on European regulators to clarify the AI Act's scope to cover autonomous agents, a question relevant to all EU member states. The affected systems run locally on users' computers but interact with cloud-connected services including Google, Facebook and Apple ID.
Why: The rapid growth in popularity of OpenClaw created urgency. Open-source AI agents that grant full system access to autonomous processes - without adequate security standards, without plug-in vetting, and without protections against prompt injection - pose risks disproportionate to the user understanding of those risks. GDPR compliance obligations remain in force regardless of the software's open-source status, and the regulator determined that users and organisations needed explicit guidance before widespread deployments led to reportable data breaches.