Tom’s Hardware
Zak Killian

Google Chrome 'silently' downloads 4GB AI model to your device without permission, report claims — researcher says practice may violate EU law and waste gigawatt-hours of energy

Security researcher Alexander Hanff, also known as "That Privacy Guy," has published a new analysis claiming that Google Chrome is silently downloading a roughly 4GB on-device AI model to users' machines without notice or consent. According to Hanff, the behavior mirrors a separate issue he recently identified involving Anthropic's desktop software, and together the two cases point to a broader pattern of how large tech companies deploy AI features.

Hanff's earlier report focused on Anthropic's Claude Desktop app, which he says quietly installed a browser integration bridge for multiple Chromium-based browsers, including five he did not even have installed. According to the researcher, this happened without any user prompt or meaningful disclosure, and the integration would reinstall itself if removed. He argues that this kind of silent modification of a user's environment violates both user expectations and, in his view, European privacy law.

That earlier finding serves as context for what Hanff describes as a similar but even larger-scale issue with Chrome. In his latest post, he says Chrome is now writing a file called "weights.bin" to disk, part of the company's on-device AI system based on its lightweight Gemini Nano model. The file is approximately 4GB in size and is downloaded automatically on systems that meet certain hardware requirements. According to Hanff, there is no clear consent flow for this download. He says Chrome does not present a prompt explaining that a multi-gigabyte AI model will be stored locally, nor does it provide a straightforward setting to prevent it. Users who discover and delete the file will find it re-downloaded later unless they disable certain experimental flags or remove Chrome entirely.

To verify what was happening, Hanff conducted a controlled test using a fresh Chrome profile on macOS. He relied on the operating system's filesystem event logs, which record file activity independently of applications. According to his analysis, the browser created the model directory and downloaded the full 4GB payload in the background while no human interaction was taking place. The process completed in just over fourteen minutes, during what appeared to be idle browsing time. He also points to Chrome's own internal state files as corroborating evidence. These show that the browser evaluated the system's hardware capabilities and marked it as eligible for the on-device model before the download occurred. In Hanff's telling, this indicates that Chrome is proactively deciding which users' machines should receive the model, rather than responding to an explicit user action.
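Hanff's own test relied on macOS's native filesystem event logs, which record file activity independently of any application. For readers who want to watch for similar behavior without access to those logs, a rough cross-platform polling sketch might look like the following. This is an illustration, not Hanff's actual tooling, and the directory to monitor is left as a parameter, since the model's storage location varies by platform and Chrome profile.

```python
# Minimal polling sketch for spotting a large background download.
# Not Hanff's method: he used macOS filesystem event logs, which
# capture every write regardless of polling interval. Polling can
# miss short-lived files but is enough to see multi-GB growth.
import os
import time


def dir_bytes(path):
    """Total size of all files under `path` (0 if it doesn't exist)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished between listing and stat
    return total


def watch(path, interval=30, rounds=30):
    """Print size changes under `path`; a silent ~4GB model download
    shows up as steady multi-hundred-MB jumps between samples."""
    last = dir_bytes(path)
    for _ in range(rounds):
        time.sleep(interval)
        now = dir_bytes(path)
        if now != last:
            print(f"{time.strftime('%H:%M:%S')} {now - last:+,} bytes")
            last = now
```

Pointing watch() at a Chrome profile directory during idle browsing would surface the kind of unattended growth Hanff describes, though event-log-based tracing remains the more reliable approach.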

Beyond the technical details, Hanff raises legal concerns. He argues that both the Anthropic case and the Chrome case likely violate provisions of EU law, including the ePrivacy Directive's rules on storing data on user devices and the GDPR's requirements around transparency and lawful processing. These claims have not been tested in court, but they reflect a growing tension between aggressive feature rollout and regulatory expectations, particularly in Europe.

Environmental cost of Gemini Nano deployment in Chrome

Devices receiving the push           Total bytes pushed   Total energy   Total CO2e
100 million (~3% of Chrome users)    400 petabytes        24 GWh         6,000 tons CO2e
500 million (~15% of Chrome users)   2 exabytes           120 GWh        30,000 tons CO2e
1 billion (~30% of Chrome users)     4 exabytes           240 GWh        60,000 tons CO2e

(Data above calculated by Alexander Hanff)

A key focus of Hanff's post is the environmental cost of silently distributing a 4GB AI model at global scale. If deployed across hundreds of millions or billions of devices, Hanff estimates the total emissions impact of simply distributing the file (not even using it) could reach tens of thousands of tons of CO2 equivalent, an amount similar to the annual output of tens of thousands of cars. That estimate depends heavily on debatable assumptions about scale and energy mix, but his broader point, that pushing large binaries to user devices is not free and that the cost is externalized, holds regardless of the exact figures.
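The scaling in Hanff's table is linear, so its rows can be reproduced from two constants implied by the figures themselves: an energy intensity of roughly 0.06 kWh per GB transferred (24 GWh / 400 PB) and a grid carbon intensity of roughly 0.25 kg CO2e per kWh (6,000 t / 24 GWh). These are back-derived assumptions, not published constants, but they make the arithmetic easy to check:

```python
# Reproduce the scaling in Hanff's environmental-cost table.
# KWH_PER_GB and KG_CO2E_PER_KWH are assumptions back-derived
# from the table's own figures, not official measurements.
MODEL_GB = 4            # weights.bin is ~4 GB
KWH_PER_GB = 0.06       # implied by 24 GWh / 400 PB
KG_CO2E_PER_KWH = 0.25  # implied by 6,000 t / 24 GWh


def footprint(devices):
    """Return (GB transferred, kWh consumed, tons CO2e emitted)."""
    gb = devices * MODEL_GB
    kwh = gb * KWH_PER_GB
    tons = kwh * KG_CO2E_PER_KWH / 1000  # kg -> metric tons
    return gb, kwh, tons


for devices in (100e6, 500e6, 1e9):
    gb, kwh, tons = footprint(devices)
    print(f"{devices:>13,.0f} devices: {gb / 1e9:.1f} EB, "
          f"{kwh / 1e6:,.0f} GWh, {tons:,.0f} t CO2e")
```

Running this reproduces the table's three rows (0.4 EB / 24 GWh / 6,000 t through 4 EB / 240 GWh / 60,000 t), which confirms the internal consistency of Hanff's numbers even if the underlying intensity assumptions remain open to debate.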

For many users, the more immediate concern is bandwidth. A 4GB download is trivial on an unlimited fiber connection, but such connections are not the global norm, nor are they universal even in the United States. For users whose data is capped, metered, or expensive, including much of the developing world, silently transferring gigabytes of data can have real financial consequences. Even in developed markets, users on mobile hotspots or rural connections may feel the impact acutely. Hanff argues that downloading files of this size without clear notice or consent crosses a clear line, regardless of the feature being delivered.

Taken together, the two cases reinforce a familiar criticism of large technology platforms. According to Hanff, both Anthropic and Google acted first and left users to discover the consequences later. Whether it is silently registering deep system integrations (in the case of Claude Desktop) or downloading multi-gigabyte AI models in the background, the pattern is the same: the user's device is being treated as a deployment target rather than something the user actively controls. That framing may sound harsh, but it aligns with long-standing complaints about "dark patterns" in software design. Features that benefit the platform at the user's cost are enabled by default, buried behind obscure settings, or implemented in ways that make them difficult to remove. Hanff's reporting suggests that the shift toward on-device AI is not changing that dynamic, and in fact may be accelerating it.

Google has not publicly responded in detail to Hanff's findings at the time of writing, and the company may argue that these downloads are tied to legitimate product features and improve privacy by keeping AI processing local. Even so, the core question remains unresolved. If a browser is going to download gigabytes of data onto a user's machine, should that require an explicit opt-in? Hanff's answer is clearly yes. Whether regulators or users ultimately agree may determine how far companies can push this kind of behavior in the future.
