Google Chrome today became the subject of a detailed technical investigation revealing that the browser installs a 4 GB artificial intelligence model on users' devices without requesting consent - and reinstalls it automatically if the user deletes it. The disclosure, published on May 4, 2026, by privacy researcher Alexander Hanff on his blog That Privacy Guy, documents the silent delivery of Gemini Nano weight files across an estimated hundreds of millions of devices. Hanff argues the practice breaches European privacy law and carries measurable environmental costs at scale.
For the marketing and advertising community, the findings raise immediate questions about browser-level AI deployment, consent architecture, and the legal framework governing how software installed on user machines can interact with personal data. With Chrome holding more than 64% of the global browser market, according to Hanff's cited StatCounter data, the scope of this deployment dwarfs most enterprise software rollouts.
What was found on disk
The file at the center of the report is named weights.bin. It sits inside a directory called OptGuideOnDeviceModel, located within the Chrome user profile on Windows, macOS, and Linux systems. According to Hanff, this is the weights file for Gemini Nano, Google's on-device large language model. The file measures approximately 4 GB.
Chrome does not surface any consent dialogue before writing this file. No checkbox in Chrome settings warns the user that a multi-gigabyte AI model will be downloaded. The download triggers when Chrome's AI features are active - and those features are active by default in recent Chrome versions. According to the researcher's findings, Chrome treats any machine that meets the minimum hardware requirements to host the model as a delivery target and writes the binary.
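The on-disk artefact described above can be checked directly. A minimal sketch, assuming the typical default Chrome user-data locations for each platform (profile layout varies by channel and profile name, so this is illustrative rather than definitive):

```python
from pathlib import Path

# Assumed default Chrome user-data locations (channel and profile names vary):
#   macOS:   ~/Library/Application Support/Google/Chrome
#   Windows: %LOCALAPPDATA%\Google\Chrome\User Data
#   Linux:   ~/.config/google-chrome

def find_nano_weights(user_data_dir: str) -> list[tuple[str, int]]:
    """Return (path, size in bytes) for each weights.bin found
    under an OptGuideOnDeviceModel directory in the profile tree."""
    hits = []
    base = Path(user_data_dir).expanduser()
    for model_dir in base.rglob("OptGuideOnDeviceModel"):
        for weights in model_dir.rglob("weights.bin"):
            hits.append((str(weights), weights.stat().st_size))
    return hits

# Example (macOS):
# find_nano_weights("~/Library/Application Support/Google/Chrome")
```

A file of roughly 4 GB at such a path matches the report's description; absence simply means the profile was not (yet) deemed eligible or the February 2026 toggle is off.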
A Google spokesperson, responding to questions from Gizmodo, confirmed the model's existence. According to the statement, "We've offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model. It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud." The spokesperson added that in February 2026, Google began rolling out a setting allowing users to turn the model off and remove it directly in Chrome settings, stating: "Once disabled, the model will no longer download or update."
What makes Hanff's report technically distinct from earlier community forum posts about the same file is the forensic methodology. He created a fresh Chrome user-data directory on April 23, 2026, specifically for an automated privacy audit running 100 sites. The audit driver interacted with the browser entirely through Chrome DevTools Protocol. No human keyboard or mouse input touched the profile at any point.
Four-way evidence chain
Using macOS's kernel-level filesystem event log, .fseventsd, Hanff reconstructed the precise sequence in which the file arrived. The log records every file creation, modification, and deletion at the operating system level, independent of any application. Chrome cannot edit it. According to the timestamps Hanff published, the installation unfolded as follows.
On April 24, 2026, at 16:38:54 CEST, Chrome created the OptGuideOnDeviceModel directory inside the audit profile. Eight minutes and 28 seconds later, at 16:47:22 CEST, three concurrent unpacker subprocesses spawned temporary directories and began writing weights.bin, manifest.json, _metadata/verified_contents.json, and on_device_model_execution_config.pb. Chrome batched a security update, a preload data refresh, and the 4 GB AI model into the same idle window. At 16:53:22 CEST, the unpacked weights file moved to its final location at OptGuideOnDeviceModel/2025.8.8.1141/weights.bin. Four additional model targets - numbered 40, 49, 51, and 59 in Chrome's optimization-guide enumeration - registered simultaneously in Chrome's optimization_guide_model_store. None of those targets had existed in the profile before.
Total install time from directory creation to final placement: 14 minutes and 28 seconds. Total human action against the profile during that window: none.
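The elapsed intervals Hanff reports can be verified with straightforward timestamp arithmetic on the published .fseventsd times (a sketch; timestamps as published, CEST):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"
# Timestamps from the published .fseventsd reconstruction.
created   = datetime.strptime("2026-04-24 16:38:54", FMT)  # directory created
unpacking = datetime.strptime("2026-04-24 16:47:22", FMT)  # unpacker subprocesses spawn
finalised = datetime.strptime("2026-04-24 16:53:22", FMT)  # weights.bin in final location

print(unpacking - created)  # 0:08:28
print(finalised - created)  # 0:14:28
```

Both intervals agree with the figures in the report: 8 minutes 28 seconds to the unpack phase, 14 minutes 28 seconds end to end.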
Three further pieces of corroborating evidence sit on the same machine. Chrome's own Local State JSON for the audit profile contains an optimization_guide.on_device block confirming the model ran and recording version 2025.8.8.1141 - matching the path component logged by .fseventsd. The same block reports performance_class: 6 and vram_mb: 36864, meaning Chrome read the GPU and unified memory configuration to determine hardware eligibility before any AI feature was visible to the user. Chrome's ChromeFeatureState file lists OnDeviceModelBackgroundDownload in the enabled features block - this is the flag that triggers the silent download. The settings page for on-device AI is enabled in lockstep with the install, which means the interface through which a user could theoretically refuse the download does not exist until the download has already started. According to Hanff, this is by design, not an oversight. Finally, GoogleUpdater logs record the on-device-model control component - app ID {44fc7fe2-65ce-487c-93f4-edee46eeaaab} - arriving from Google's CDN on April 20, 2026, three days before the audit profile was created.
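Local State is plain JSON, so the eligibility fields Hanff cites can be read with a few lines. The exact nesting and key names below are assumptions for illustration (the real file has many more keys and the version key's name may differ); only the field values match what the report documents:

```python
import json

# Hypothetical shape of the relevant Local State block, based on the fields
# the report names (version, performance_class, vram_mb); real structure may differ.
local_state = json.loads("""
{
  "optimization_guide": {
    "on_device": {
      "model_version": "2025.8.8.1141",
      "performance_class": 6,
      "vram_mb": 36864
    }
  }
}
""")

od = local_state["optimization_guide"]["on_device"]
print(od["model_version"], od["performance_class"], od["vram_mb"])
```

The point of the check is the cross-reference: the version string in this block should match the directory component on disk, and vram_mb records that Chrome profiled the hardware (36864 MB = 36 GB) before any AI surface was shown.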
The reinstall loop
According to Hanff, the cycle of deletion and re-download has been documented across multiple independent reports on Windows installations going back more than a year. The user deletes the file; Chrome re-downloads it. The user deletes it again; Chrome re-downloads it again. The only ways to make the deletion persist are to disable Chrome's AI features through chrome://flags or enterprise policy tooling - tools that ordinary home users do not generally have configured. On macOS the file is technically deletable because it lands with mode 600 owned by the user, but Chrome holds the install state in a file called Local State, and as soon as the variations server next instructs Chrome that the profile is eligible, the download fires again.
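On macOS, the reported mode-600 user ownership can be confirmed with a quick stat of the file. The path below is assumed from the report (profile name and version component will differ across machines):

```python
import os
import stat

# Path assumed from the report; adjust profile and version for your machine.
path = os.path.expanduser(
    "~/Library/Application Support/Google/Chrome/Default/"
    "OptGuideOnDeviceModel/2025.8.8.1141/weights.bin"
)

if os.path.exists(path):
    st = os.stat(path)
    # Permission bits only; 0o600 (owner read/write) is what the report describes.
    print(oct(stat.S_IMODE(st.st_mode)), "uid:", st.st_uid)
else:
    print("weights.bin not present in this profile")
```

Mode 600 is why the file is deletable by the user - and why deletion alone does not persist, since the re-download decision lives in Local State and on the variations server, not in the file's permissions.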
Gizmodo reported this behavior on May 6, 2026, noting that according to Hanff, the Nano model is installed on any device meeting the minimum hardware requirements, without any opt-out prompt. For users who have received Google's February 2026 settings update, the process is: open Chrome, go to Settings, select System, and choose the on-device AI toggle. For users who have not received that setting, disabling relevant flags in chrome://flags - including optimization guide on device, Prompt API for Gemini Nano, Summarizer API, Writer API, Rewriter API, and Proofreader API - and relaunching Chrome before deleting the OptGuideOnDeviceModel folder is the documented workaround. Digital analytics professional Himanshu Sharma shared similar steps on LinkedIn on May 7, 2026, confirming the file's presence on a personal device.
What the AI Mode pill actually does
One of the more pointed sections of Hanff's investigation concerns Chrome 147's omnibox - the address bar - which renders an "AI Mode" pill to the right of the URL field on eligible profiles. A user seeing that element, knowing a 4 GB on-device model exists on their disk, could reasonably infer their queries stay local. That inference is wrong.
According to Hanff, the AI Mode pill in the Chrome 147 omnibox is a cloud-backed Search Generative Experience surface. Every query typed into it is sent over the network to Google's servers for processing by Google's hosted models. The on-device Nano model is not invoked by the AI Mode interface at all. The features that do use the local model - Help Me Write in text areas, tab-group AI suggestions, smart paste, page summary - are accessible through context menus that most users will never reach.
The on-device install therefore imposes a storage and bandwidth cost on the user while delivering no local-processing benefit at the browser surface where users actually see AI. PPC Land reported in April 2026 that AI Mode runs on a custom Gemini model built for Search, not on the on-device binary, and that the technical infrastructure uses what Google calls a query fan-out technique across multiple data sources.
Hanff argues this arrangement engages at least three deceptive design pattern families catalogued in EDPB Guidelines 03/2022: misleading information, because the "AI Mode" label does not indicate that queries go to the cloud; skipping, because the user has no moment to choose between local and cloud AI processing; and hindering, because the controls to disable AI Mode and remove the on-device model are in separate locations that most users would not locate without external guidance.
Legal analysis: ePrivacy, GDPR, and beyond
Hanff's legal analysis focuses first on Article 5(3) of Directive 2002/58/EC, the ePrivacy Directive. That provision prohibits storing information in the terminal equipment of a user without prior, freely-given, specific, informed, and unambiguous consent, except where strictly necessary for an explicitly requested service. According to Hanff, the 4 GB Gemini Nano weights file is information stored in the user's terminal equipment, the user did not consent, and Chrome is functional without the file - meaning the strict necessity exception does not apply. The Article 5(3) breach is, in his assessment, direct.
Article 5(1) GDPR requires processing to be lawful, fair, and transparent. Hanff argues that where the user's hardware is profiled to determine eligibility for the model push - Chrome reads GPU class, CPU class, system RAM, and available VRAM - and where the install events are logged on Google's servers, the user must be told in plain language what is happening. They are not.
Article 25 GDPR's data protection by design obligation requires that, by default, only personal data necessary for each specific purpose are processed. Pre-staging a 4 GB AI model on a user's disk against a contingency that the user might invoke an AI feature in the future is, according to Hanff, the architectural opposite of by-default minimisation. Under the UK GDPR and Privacy and Electronic Communications Regulations 2003 the analysis is identical. Under the California Consumer Privacy Act, the absence of a notice-at-collection covering this category of pre-staged software puts Google's CCPA notice posture in question.
In the comments on his LinkedIn post, Sharma wrote: "It seems Google is going to get the biggest GDPR fine of this century. I don't know what they were thinking. That no one would notice."
This is not an isolated pattern in the industry. Two weeks before publishing the Chrome investigation, Hanff had reported that Anthropic's Claude Desktop application silently registered a Native Messaging bridge in seven Chromium-based browsers on every machine where Claude Desktop was installed, also without user consent and also with automatic re-installation on each Claude Desktop launch. The Anthropic case involved a 350-byte JSON manifest across an estimated 3 million Claude Desktop user devices. The Chrome case involves 4 GB across, by Hanff's mid-band estimate, approximately 500 million devices.
The pattern of forced bundling across trust boundaries, invisible defaults, more difficult removal than installation, and automatic re-installation is, in Hanff's categorisation, identical across both companies.
Environmental cost at scale
The environmental section of Hanff's report is based on a methodology he describes as the same one his WebSentinel platform applies to website environmental analysis. The calculation uses an energy intensity of 0.06 kilowatt-hours per gigabyte for network data transfer, derived from Parssinen et al. (2018) in the journal Science of The Total Environment, and a grid emissions factor of 0.25 kg CO2-equivalent per kilowatt-hour from the EEA/IEA composite EU-27 electricity factor for 2024 reporting.
Per device, per push: 4 GB transferred at 0.06 kWh/GB equals 0.24 kWh of energy, at 0.25 kg CO2e/kWh equals 0.06 kg CO2-equivalent. That is the one-time delivery cost to a single device. Aggregated across the deployment, Hanff presents three bands.
At 100 million devices - a low-band estimate of roughly 3% of Chrome's user base - the total is 400 petabytes of data, 24 gigawatt-hours of energy, and approximately 6,000 tonnes of CO2-equivalent. At 500 million devices, the figures are 2 exabytes, 120 GWh, and 30,000 tonnes of CO2e. At 1 billion devices, the total reaches 4 exabytes, 240 GWh, and 60,000 tonnes CO2e. For reference, Hanff notes that 24 GWh is roughly the annual electricity consumption of approximately 7,000 average UK households, and 30,000 tonnes CO2e is comparable to one return flight from London to Sydney for roughly 8,000 economy passengers.
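The three bands follow directly from the per-device formula. A minimal sketch reproducing Hanff's arithmetic, using the constants as cited in his methodology:

```python
# Constants as cited in Hanff's methodology.
KWH_PER_GB = 0.06        # network transfer intensity, Parssinen et al. (2018)
KG_CO2E_PER_KWH = 0.25   # EU-27 composite grid factor, 2024 reporting
MODEL_GB = 4             # approximate size of the Gemini Nano weights file

def footprint(devices: int) -> tuple[float, float, float]:
    """Return (PB transferred, GWh of energy, tonnes CO2e) for one push per device."""
    gb = devices * MODEL_GB
    kwh = gb * KWH_PER_GB
    tonnes = kwh * KG_CO2E_PER_KWH / 1000   # kg -> tonnes
    return gb / 1e6, kwh / 1e6, tonnes      # GB -> PB, kWh -> GWh

for n in (100_000_000, 500_000_000, 1_000_000_000):
    pb, gwh, t = footprint(n)
    print(f"{n:>13,} devices: {pb:,.0f} PB, {gwh:,.0f} GWh, {t:,.0f} t CO2e")
```

The loop reproduces the published bands (400 PB / 24 GWh / 6,000 t at 100 million devices, scaling linearly), and makes the one-time-delivery assumption explicit: none of the exclusions Hanff lists - re-downloads, updates, inference energy, embodied storage carbon - are modelled.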
These figures do not include re-download cycles triggered by users deleting the file, subsequent model updates, on-device inference energy when Nano is actually used, or the embodied carbon cost of SSD storage. In ESG reporting terms, Hanff categorises the push as a Scope 3 Category 11 emission attributable to Google under the Corporate Sustainability Reporting Directive framework.
For users on metered mobile data connections - common across sub-Saharan Africa, South and Southeast Asia, and Latin America where smartphones are the primary internet device - a 4 GB unrequested download can represent approximately one month's data allowance. Google has not, according to Hanff, published any welfare impact analysis of this on metered-access populations.
Context for the marketing and advertising industry
Chrome's role in advertising infrastructure is substantial. As PPC Land reported in August 2025, Chrome commands significant browser market share and serves as a critical access point for Google's search and advertising business, enabling the company to track user activity and direct traffic to services including Gemini. The browser is the surface through which a large share of programmatic advertising is delivered and measured.
The deployment of Gemini Nano is partly motivated by advertising-adjacent functionality. PPC Land reported in May 2025 that Google integrated Gemini Nano into Chrome's Enhanced Protection mode to detect tech support scams in real time. According to that announcement, Gemini Nano "operates directly on the device, delivering real-time insights into potentially risky websites," and the on-device approach enables detection of threats that exist for less than 10 minutes - too briefly for server-side databases to flag. The security use case is genuine, but the model's delivery without consent remains the issue Hanff documents.
PPC Land covered in September 2025 how Chrome's largest-ever upgrade integrated Gemini AI across ten features, including scam detection and notification management, on September 18, 2025. The integration of a cloud-backed AI surface alongside a silently installed local model represents the dual-layer architecture Hanff describes as potentially misleading to users.
The broader consent question has been building across the industry. As PPC Land reported in June 2025, Google faced criticism from Workspace users when it bundled Gemini AI features into subscriptions without offering opt-out mechanisms. The Chrome case involves no subscription - it affects every eligible desktop Chrome user by default, regardless of whether they have ever interacted with any AI feature.
Regulatory context is tightening further. A German administrative court ruled in March 2025 that Google Tag Manager requires explicit user consent under TTDSG and GDPR before activation, finding that automatic storage of a customised JavaScript file on user devices before consent collection violated both frameworks. The legal logic applied to tag management - that non-essential software written to a user's device without prior consent is unlawful - maps closely to Hanff's analysis of the Nano weights file.
According to Google's own Chrome terms of service, the company's services include downloadable software that sometimes updates automatically. The terms do not, however, according to Hanff, disclose with adequate prominence that using Chrome will result in a 4 GB AI model being written to the user's disk. The documentation exists for administrators who look for it, but it is not surfaced at the moment an ordinary user installs Chrome or at the moment Chrome decides to begin the push.
The question Hanff poses at the end of his report is institutional: when will regulators and public prosecutors begin enforcing laws that have been in place since 2002 against conduct of this kind? That question does not have an answer yet - but the forensic record Hanff has compiled, drawing on macOS kernel logs, Chrome's own per-profile state files, runtime feature flags, and Google's component-updater logs, represents the kind of documented, timestamped evidence that regulatory proceedings typically require.
Timeline
- December 2023: Google introduces Gemini AI as its most capable model, available in Bard, Search, and Pixel devices.
- January 2024: Chrome receives three new AI features including Tab Organizer, custom AI themes, and a Help Me Write function in Chrome M121.
- 2024 (general): Google begins offering Gemini Nano for Chrome, according to the company's own statement to Gizmodo, as an on-device model for security and developer API use.
- August 2024: Chrome introduces AI search features including Google Lens for desktop, Tab compare, and enhanced history search, raising first data privacy considerations about AI in browsers.
- November 2024 onward: Reports of the OptGuideOnDeviceModel directory and weights.bin file begin circulating in community forums across Windows installations, with users asking how to remove the file.
- March 19, 2025: German court rules Google Tag Manager requires explicit consent under TTDSG and GDPR before writing files to user devices, establishing directly applicable legal precedent.
- May 8, 2025: Google announces Gemini Nano integration into Chrome's Enhanced Protection mode for scam detection, as covered by PPC Land.
- June 10, 2025: Google faces Workspace user backlash over forced Gemini bundling without opt-out options.
- September 18, 2025: Chrome's largest-ever AI upgrade deploys ten Gemini-powered features including scam detection and agentic browsing.
- February 2026: Google begins rolling out a Chrome settings toggle allowing users to turn off on-device AI and remove the model, according to the company's statement to Gizmodo.
- April 18, 2026: Researcher Alexander Hanff publishes a prior investigation documenting Anthropic's Claude Desktop silently installing a Native Messaging bridge in seven Chromium-based browsers without consent.
- April 20, 2026: GoogleUpdater logs record the on-device-model control component arriving on Hanff's test machine, three days before his audit profile is created.
- April 23, 2026: Hanff creates a fresh Chrome user-data directory for an automated 100-site privacy audit running entirely via Chrome DevTools Protocol with zero human input.
- April 24, 2026, 16:38:54 CEST: Chrome creates the OptGuideOnDeviceModel directory in the audit profile, as recorded by macOS .fseventsd kernel logs.
- April 24, 2026, 16:47:22 CEST: Three concurrent Chrome unpacker subprocesses write weights.bin and associated files to temporary directories.
- April 24, 2026, 16:53:22 CEST: The unpacked weights.bin moves to its final location at OptGuideOnDeviceModel/2025.8.8.1141/weights.bin. Total install time from directory creation to final placement: 14 minutes and 28 seconds.
- April 29, 2026: Hanff discovers the 4 GB OptGuideOnDeviceModel directory during a routine disk-usage check of the audit profile.
- April 2026: PPC Land reports on AI Mode in Chrome opening publisher links side by side, noting the feature runs on a cloud-backed Gemini model separate from the on-device binary.
- May 4, 2026: Hanff publishes "Google Chrome silently installs a 4 GB AI model on your device" on his That Privacy Guy blog, documenting the forensic evidence chain and legal analysis.
- May 6, 2026: Gizmodo publishes its report citing Hanff's findings, including Google's official response confirming the model and referencing the February 2026 opt-out setting.
- May 7, 2026: Digital analytics professional Himanshu Sharma shares confirmation of the file on LinkedIn, noting that deletion triggers re-download and outlining chrome://flags workaround steps.
Summary
Who: Alexander Hanff, privacy researcher and founder of the WebSentinel forensic audit platform, documented the behaviour. Google Chrome is the software involved. The affected population is an estimated hundreds of millions of Chrome desktop users worldwide whose hardware meets Chrome's eligibility criteria.
What: Google Chrome silently downloads and installs an approximately 4 GB weights file for Gemini Nano - Google's on-device large language model - into a directory called OptGuideOnDeviceModel in the Chrome user profile. The file is installed without a consent prompt, without a visible settings option presented before installation on many devices, and reinstalls automatically if the user deletes it. The AI Mode interface visible in Chrome 147's omnibox does not use this local model; it routes to Google's cloud servers. The on-device model powers background features including scam detection, Help Me Write, and tab-group suggestions.
When: Google has been rolling out Gemini Nano to eligible Chrome devices since 2024, according to the company's own statement. The specific forensic demonstration was conducted between April 23 and April 29, 2026. Hanff published his findings on May 4, 2026. Google began rolling out an opt-out toggle in February 2026, though not all users had received it at the time of publication.
Where: The file is written to the Chrome user profile directory on Windows, macOS, and Linux desktop systems. The issue affects Chrome users globally wherever the browser ships against supported desktop hardware. The legal analysis under ePrivacy Directive Article 5(3) and GDPR applies in the European Economic Area and the UK.
Why: Google's stated purpose is to power on-device AI features including scam detection and developer APIs without sending user data to the cloud. The privacy and regulatory concern is that the download happens without the user's prior consent, that removal is made deliberately difficult, and that the visible AI interface does not use the local model - meaning users bear the storage and bandwidth cost without the privacy benefit the local model would theoretically provide if it powered the user-facing AI surface.