Google's Documentation Says API Keys Are Secrets and Also Not Secrets. 2,863 Verified Keys Are Already Exposed.
Google's Firebase security checklist reads: "You do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code." Google's Gemini API key documentation reads: "Treat your Gemini API key like a password." Both pages are live right now, on the same company's documentation, governing the same AIza... key format.
That contradiction is not a typo. It is the surface-level symptom of an architectural flaw that has left 2,863 verified API keys on public websites silently authenticating to Gemini endpoints, 35,000 Google API keys hardcoded in Android apps exposed to the same risk, and at least one solo developer facing $82,314.44 in unauthorized charges accumulated in 48 hours.
On February 25, 2026, security researchers at Truffle Security published the disclosure that tied it all together. Google had spent 90 days on the report. The root-cause fix was still not deployed when the disclosure window closed. Google's initial response to the vulnerability report: "Intended Behavior."
Truffle Security Found 2,863 Live Keys on the Open Internet
Joe Leon of Truffle Security published the disclosure on February 25, 2026, in a blog post titled "Google API Keys Weren't Secrets. But then Gemini Changed the Rules." The research scanned the November 2025 Common Crawl dataset, a roughly 700 TiB archive of 2.29 billion publicly scraped web pages containing HTML, JavaScript, and CSS, and identified 2,863 live Google API keys in the AIza... format that were verified as vulnerable.
The verification method was straightforward. Each key was tested against the Gemini API's model-listing endpoint (https://generativelanguage.googleapis.com/v1beta/models). Instead of returning 403 Forbidden, vulnerable keys returned 200 OK with a list of available Gemini models. With a valid key, three sensitive endpoint categories were accessible:
- /files/: uploaded datasets, documents, PDFs, and images stored through the Gemini API
- /cachedContents/: cached context data, including system prompts, large documents, and proprietary business logic stored for repeated use
- /models: a listing of all available Gemini models in the project, providing reconnaissance information
Beyond data exfiltration, any valid key also grants full inference capability, meaning the ability to make billable Gemini API calls charged to the victim's account.
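Truffle Security's verification step amounts to a single HTTP request. The sketch below illustrates that check; it is not their tooling, and the status-code mapping is an assumption based on the behavior the disclosure describes (200 for vulnerable keys, 4xx for rejected or restricted ones). The key shown is a placeholder.

```python
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def classify_status(status: int) -> str:
    """Map an HTTP status from the /models endpoint to a verdict."""
    if status == 200:
        return "vulnerable: key authenticates to the Gemini API"
    if status in (400, 401, 403):
        return "not vulnerable: key rejected or Gemini not enabled"
    return f"inconclusive: HTTP {status}"

def check_key(api_key: str) -> str:
    """Probe the public model-listing endpoint with a candidate key."""
    url = f"{GEMINI_MODELS_URL}?key={api_key}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)

# Usage (makes a network call, so commented out here):
# print(check_key("AIza" + "X" * 35))  # placeholder; audit only keys you own
```

Run against a key you own: a 200 response means the Generative Language API is enabled on the key's project and the key is not restricted away from it.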
Truffle Security assigned two CWE classifications: CWE-1188 (Insecure Default Initialization of Resource) and CWE-269 (Improper Privilege Management). The affected organizations included major financial institutions, security companies, global recruiting firms, and Google itself.
The Proof of Concept Used Google's Own Infrastructure
The most striking demonstration involved a key embedded in the page source of a Google product's public-facing website. Using the Internet Archive, Truffle Security confirmed this key had been publicly deployed since at least February 2023, well before the Gemini API existed. There was no client-side logic on the page attempting to access any generative AI endpoints. The key was used solely as a public project identifier, which was standard practice for Google services. When tested against the Gemini API's /models endpoint, it returned 200 OK.
As Leon wrote: a key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.
How the Privilege Escalation Works
The core mechanism is a retroactive, silent privilege expansion. Google Cloud uses a single API key format (AIza...) for two fundamentally different purposes: public identification (Maps, Firebase) and sensitive authentication (Gemini).
Here is the sequence:
Step 1. A developer creates an API key and embeds it in a website for Google Maps. At this point, the key functions as a harmless billing identifier. Google's own documentation says so.
Step 2. A different team member, or the same developer months later, enables the Generative Language API (Gemini) on the same Google Cloud project. All existing API keys in that project silently gain access to Gemini endpoints, with no warning, no confirmation dialog, and no email notification.
Step 3. The developer who deployed the Maps key is never informed that the key's privileges changed.
Additionally, new keys created through the Google Cloud Console default to "Unrestricted," meaning they are valid for every enabled API in the project, including Gemini. This makes the escalation a platform-level design flaw, not a developer misconfiguration.
Google Took 90 Days and Still Had Not Deployed a Root-Cause Fix
The disclosure timeline, documented in Truffle Security's blog post, proceeded as follows:
| Date | Event |
|---|---|
| November 21, 2025 | Truffle Security submitted the report to Google's Vulnerability Disclosure Program |
| November 25, 2025 | Google initially classified the behavior as "Intended Behavior" and dismissed the report |
| December 1, 2025 | Truffle Security pushed back with concrete examples from Google's own infrastructure |
| December 2, 2025 | Google reclassified from "Customer Issue" to "Bug," upgraded severity, confirmed product team evaluating a fix, and requested the full list of 2,863 exposed keys (which Truffle provided) |
| December 12, 2025 | Google shared a remediation plan: internal pipeline to discover leaked keys, began restricting exposed keys from accessing Gemini, committed to root cause fix before disclosure date |
| January 13, 2026 | Google classified the vulnerability as "Single-Service Privilege Escalation, READ" (Tier 1) |
| February 2, 2026 | Google confirmed the team was still working on the root-cause fix |
| February 19, 2026 | 90-day disclosure window expired with the root-cause fix still not deployed |
| February 25, 2026 | Truffle Security published its findings |
Truffle's assessment of the process: the initial triage was frustrating, as the report was dismissed as "Intended Behavior." But after Truffle provided concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously. They also noted: "Building software at Google's scale is extraordinarily difficult, and the Gemini API inherited a key management architecture built for a different era."
Google's Official Statement
Google provided a nearly identical statement to multiple outlets, including BleepingComputer (published February 26, 2026), Cybernews, and The Stack: "We are aware of this report and have worked with the researchers to address the issue. Protecting our users' data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API."
Google's Remediation Roadmap Has Three Pillars and Three Critical Gaps
Google's stated remediation, referenced at ai.google.dev/gemini-api/docs/troubleshooting, consists of three measures.
What IS in the roadmap:
- Scoped defaults. New API keys created through AI Studio will default to Gemini-only access, preventing unintended cross-service usage.
- Leaked key blocking. Google is blocking API keys that are identified as leaked from accessing the Gemini API.
- Proactive notifications. Google plans to communicate proactively when they identify leaked keys, prompting developer action.
What is NOT in the roadmap:
- No retroactive audit of existing keys. Truffle Security noted: "We'd love to see Google go further and retroactively audit existing impacted keys and notify project owners who may be unknowingly exposed, but honestly, that is a monumental task."
- No confirmed individual notifications to all 2,863 affected project owners. As Awesome Agents reported, Google has not confirmed a plan to notify each affected project owner individually.
- No architectural key separation. The fundamental design flaw, a single AIza... key format serving both public identification and sensitive authentication, remains unaddressed. Truffle Security posed the open question of whether Gemini will eventually adopt a different authentication architecture.
A Developer in Vietnam Faces Bankruptcy Over $82,314.44 in Stolen Charges
A solo developer in Vietnam posted to r/googlecloud on approximately February 26, 2026, describing how a stolen Google Cloud API key resulted in $82,314.44 in unauthorized Gemini API charges accumulated between February 11 and 12, 2026, roughly 48 hours. The developer's normal monthly spend was approximately $180/month.
The developer had billing alerts configured, but as Boing Boing reported on February 27, 2026, billing alerts are notifications, not circuit breakers. By the time the developer read the email, the meter had been running for hours. Google has not forgiven the charges, and the developer reported facing bankruptcy; $82,000 is a life-altering sum in Vietnam.
Boing Boing drew a pointed comparison: if someone steals your credit card and racks up fraudulent charges, your bank will reverse them. If someone steals your cloud API key and racks up $82,000 in compute charges over a weekend, you owe Google the money.
Other Reported Victims
The Reddit thread was described by Boing Boing as "filled with similar stories." Additional documented cases include:
- A student billed $55,444 after exposing a GCP API key on GitHub in June. Google later waived those charges, per CIO.
- A user on the Google AI Developers Forum who reported unauthorized Gemini 2.5 Pro charges that escalated from €18 to €346 over four days.
- Another user reporting a $6,909 bill for over a billion GenerativeLanguage.GenerateContent requests (99% of which returned 429 errors) from a key that was created during a live Google training session and, per the user, never shared.
Google's Own Documentation Contradicts Itself on Whether Keys Are Secrets
As of March 2, 2026, Google's documentation delivers directly contradictory guidance depending on which page a developer reads.
Gemini API Keys page (ai.google.dev/gemini-api/docs/api-key, last updated January 13, 2026): This page explicitly states: "Treat your Gemini API key like a password. If compromised, others can use your project's quota, incur charges (if billing is enabled), and access your private data, such as files." It further instructs: "Never expose API keys on the client-side" and "Never commit API keys to source control." This page predates the Truffle Security disclosure by six weeks, meaning Google already had security-aware guidance for Gemini-specific keys before the vulnerability was publicly reported.
Firebase Security Checklist (firebase.google.com/support/guides/security-checklist, last updated January 21, 2026): The page still contains a section headed "API keys for Firebase services are not secret" with the exact text: "Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data, which is done using Firebase Security Rules. For this reason, you do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code."
Firebase API Keys page (firebase.google.com/docs/projects/api-keys, last updated January 19, 2026): States "API keys for Firebase services are OK to include in code or checked-in config files." This page does contain a caveat recommending separate keys for the Google AI Gemini API, but frames it as a general best practice rather than a security vulnerability warning. It does not warn that enabling Gemini on a project retroactively grants existing public keys Gemini access.
Google Maps JavaScript API documentation (developers.google.com/maps/documentation/javascript/get-api-key, last updated February 27, 2026, two days after disclosure): Still instructs developers to embed API keys directly in client-side HTML/JavaScript via a `<script>` tag containing `key: "YOUR_API_KEY"`. The page does not reference the Gemini access risk or include any new security warnings despite the post-disclosure update.
Default key behavior: Google Cloud's own documentation at cloud.google.com/docs/authentication/api-keys confirms that by default, the "API restrictions" section has "Don't restrict key" selected, meaning an API key can be used to access any API that is enabled for the project. A notable contradiction exists: Google Cloud's API keys best practices page states "Do not embed API keys directly in code," while Firebase documentation simultaneously says embedding keys in code is safe.
Since May 2024, Firebase auto-created keys do receive API restrictions limited to Firebase-related APIs. However, keys created manually through the GCP Console still default to unrestricted, and any key explicitly set to "Don't restrict key" can access all enabled APIs including Gemini.
The result is that a developer reading the Gemini API key page would conclude their key is a sensitive credential that must never be exposed publicly. A developer reading the Firebase security checklist, which governs the exact same AIza... key format, would conclude the opposite. Both pages are authoritative Google documentation. Neither page references or cross-links the other's contradictory guidance. A developer who followed Firebase's instructions to embed keys in client code and later enabled Gemini on the same project would have no reason to revisit their key handling practices, because the Firebase documentation they originally followed has not changed.
Quokka Found 35,000 Keys Hardcoded in Android Apps
Quokka (formerly Kryptowire, rebranded September 12, 2022), a mobile security firm headquartered in McLean, Virginia, published complementary research on February 27, 2026, authored by Nikolaos Kiourtis. The blog post, titled "Hundreds of Thousands of Mobile Apps May Now Be Exposing AI Access," reported results from scanning 250,000 Android apps in Quokka's database.
The scan found that 39.5% of apps contained hardcoded Google API keys, yielding over 35,000 unique Google API keys in total. These keys are trivially extractable. An attacker can decompile an Android APK file and regex-match for the AIza prefix. The attack chain is then a single curl command to the Gemini /files/ endpoint returning 200 OK.
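The extraction step Quokka describes amounts to a regex sweep over decompiled sources. A minimal sketch follows; the character class is the commonly used approximation for the AIza key format, not a published Google specification, and the embedded key is a fabricated placeholder.

```python
import re

# Google API keys are "AIza" followed by 35 URL-safe characters;
# this widely cited pattern is an approximation, not a Google spec.
AIZA_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def extract_keys(source: str) -> list[str]:
    """Return unique AIza-format strings found in decompiled source text."""
    return sorted(set(AIZA_PATTERN.findall(source)))

# Example: a string constant as it might appear in smali/Java output.
fake_key = "AIza" + "B" * 35  # placeholder, not a real key
decompiled = f'const-string v0, "{fake_key}"'
print(extract_keys(decompiled))
```

Run over every file produced by an APK decompiler, this is the entire "attack tooling" on the extraction side, which is why Quokka characterizes the keys as trivially extractable.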
Quokka's analysis: "As AI capabilities get bolted onto existing platforms at speed, legacy credential architectures are being asked to do jobs they were never designed for. Keys that were safe under one set of platform capabilities become sensitive under another, and the gap between a platform's security posture and developers' understanding of it can widen faster than anyone notices."
Combined with Truffle Security's 2,863 web-exposed keys, the total known exposed key surface exceeds 37,000 unique keys. An important distinction: Truffle Security's 2,863 keys were individually verified as vulnerable by confirming 200 OK responses from the Gemini API. Quokka's 35,000 keys were extracted from apps but not individually tested against Gemini endpoints. A key is only vulnerable if the Generative Language API is enabled on its associated Google Cloud project. The confirmed-vulnerable count is 2,863; the 35,000 represents the potential but unverified attack surface from mobile apps alone.
The Chat & Ask AI Breach: 300 Million Messages Exposed Through the Same Pattern
The Firebase misconfiguration epidemic provides direct precedent for Google's "insecure by default" design philosophy.
In January 2026, independent security researcher Harry (@Harrris0n on X, of CovertLabs) discovered that Chat & Ask AI, an AI chatbot app by Turkish developer Codeway with over 50 million downloads, had left its Firebase Security Rules fully public (`allow read: if true;`). The breach exposed approximately 300 million messages from 25 million users, including complete chat histories with AI models, email addresses, phone numbers, and deeply personal conversations covering mental health, illegal activities, and financial details. The story was broken by 404 Media journalist Emanuel Maiberg on approximately January 29, 2026.
Harry built Firehound, an open-source automated scanning tool available on PyPI as firehound-scanner, which downloads iOS apps, extracts Firebase configurations, and systematically tests for misconfigurations. His scan of 200 popular iOS apps found that 103 (51.5%) had the same Firebase misconfiguration: public read/write access without authentication. The vx-underground security collective posted about the findings on January 19, 2026, calling it "the slopocalypse."
The connection to the API key vulnerability is structural. Both issues share insecure defaults (new API keys default to unrestricted; Firebase test mode leaves databases open), retroactive risk escalation (keys safe for years become dangerous when Gemini is enabled; Firebase misconfigurations that once exposed trivial data now expose AI conversations), and documentation that creates a false sense of security.
What It Costs to Exploit a Stolen Key
Current Gemini API pricing (from ai.google.dev/gemini-api/docs/pricing, last updated February 26, 2026) shows the most expensive model, Gemini 3.1 Pro Preview, charges $2.00 per million input tokens (up to 200K context) and $12.00 per million output tokens, rising to $4.00/$18.00 for prompts exceeding 200K tokens. Gemini 2.5 Pro charges $1.25/$10.00 per million input/output tokens. Image generation via Nano Banana Pro costs $120 per million output tokens. Grounding with Google Search adds $14 to $35 per 1,000 queries on top of token costs.
There is no default hard spending cap on Google Cloud that automatically kills the service at a threshold. Billing alerts are notifications only. The $82,314.44 case demonstrates that roughly $41,000 per day is achievable in practice with a single stolen key.
Rate limits actually increase as cumulative spending grows. Paid Tier 1 (billing enabled) offers 300+ requests per minute and 1M+ tokens per minute. Tier 2 ($250 cumulative) and Tier 3 ($1,000 cumulative) offer progressively higher limits, creating a perverse feedback loop where more abuse enables faster abuse.
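The billing arithmetic is simple enough to sanity-check. A sketch using the per-million-token rates quoted above; real invoices also include other line items (grounding, caching, image generation), so this is a lower bound.

```python
def gemini_cost_usd(input_tokens: int, output_tokens: int,
                    in_rate: float, out_rate: float) -> float:
    """Token cost in USD, given per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Gemini 3.1 Pro Preview at <=200K context: $2.00 in / $12.00 out.
print(gemini_cost_usd(1_000_000, 1_000_000, 2.00, 12.00))  # 14.0

# Output tokens alone needed to reach the $82,314.44 bill at $12/M:
tokens = 82_314.44 / 12.00 * 1e6
print(f"{tokens / 1e9:.2f} billion output tokens")  # about 6.86 billion
```

At that rate, the documented pace of roughly $41,000 per day corresponds to about 3.4 billion output tokens per day on the most expensive model, well within reach of the higher paid-tier rate limits.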
All Gemini models are accessible through a standard AIza... API key, including Gemini 3.1 Pro, Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, and their variants.
Enterprise Customers on Vertex AI Are Largely Unaffected. Individual Developers Are Not.
This vulnerability does not affect all Gemini users equally. There are two distinct access paths to Gemini: the Gemini Developer API (hosted at generativelanguage.googleapis.com) and Vertex AI (hosted at aiplatform.googleapis.com). They use different authentication mechanisms.
Enterprise customers accessing Gemini through Vertex AI typically authenticate via OAuth2 access tokens, Application Default Credentials (ADC), or service account JSON keys. The Vertex AI endpoint has historically rejected AIza... API keys entirely, returning 401 UNAUTHENTICATED with the message: "API keys are not supported by this API. Expected OAuth2 access token or other authentication credentials that assert a principal." Google's own Vertex AI documentation recommends API keys only for testing and ADC for production. This means Fortune 500 companies running Gemini through Vertex AI in production are largely insulated from the AIza... key exposure.
The Gemini Developer API, by contrast, authenticates entirely through AIza... keys. This is the access path used by individual developers, small teams, startups, students, and anyone who obtained a key through Google AI Studio. It is also the access path where Firebase and Maps keys can silently gain Gemini privileges.
This distinction explains the victim profile. The $82,314.44 case was a solo developer in Vietnam. The $55,444 case was a student. The €346 case was an individual user. These are not enterprise security failures. They are the consequences of an architecture that exposes individual developers and small teams to the full financial risk of a stolen credential, while enterprise customers on Vertex AI operate under a fundamentally different authentication model. Google's key management architecture, in practice, provides stronger security defaults to its largest customers and weaker defaults to its most vulnerable ones.
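At the HTTP level, the two access paths look quite different. The sketch below is illustrative only: the project, region, and model names are placeholders, and the exact Vertex path shape is an assumption based on Google's public endpoint conventions.

```python
# Gemini Developer API: the AIza key itself is the credential,
# passed as a query parameter (or an x-goog-api-key header).
developer_api = (
    "https://generativelanguage.googleapis.com/v1beta/models"
    "?key=AIza..."  # the key format at the center of this disclosure
)

# Vertex AI: an OAuth2 bearer token asserts a principal; sending an
# AIza key here has historically returned 401 UNAUTHENTICATED.
vertex_endpoint = (
    "https://us-central1-aiplatform.googleapis.com/v1/"
    "projects/PROJECT_ID/locations/us-central1/"
    "publishers/google/models/gemini-2.5-pro:generateContent"
)
vertex_headers = {"Authorization": "Bearer <oauth2-access-token>"}
```

The asymmetry is the point: on the Developer API the bearer-of-the-string is the principal, so any leak is a full compromise; on Vertex AI the credential is short-lived and bound to an identity.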
Gemini's Scale Magnifies the Attack Surface
Alphabet CEO Sundar Pichai disclosed Gemini's scale during the Q4 2025 earnings call on February 4, 2026, in prepared remarks published at blog.google and filed with the SEC.
Two verified numbers: Gemini processes over 10 billion tokens per minute via direct API use by customers (up from 7 billion in Q3 2025), and the Gemini App has grown to over 750 million monthly active users (up from 650 million in Q3 2025). In December 2025 alone, nearly 350 cloud customers each processed more than 100 billion tokens. Revenue from products built on generative AI models grew nearly 400% year-over-year in Q4, and Google Cloud revenue reached $17.7 billion (up 48% YoY). Alphabet guided 2026 capital expenditure of $175 to $185 billion.
This scale means even a small percentage of compromised keys translates into massive potential exposure.
John Hammond's Video Reached 82,000+ Views in 72 Hours
Security researcher John Hammond (approximately 2.12 million YouTube subscribers) published a video covering Truffle Security's findings on February 26, 2026, the day after the disclosure. Hammond's tweet promoting it stated: "Google API keys didn't use to be considered 'secret,' so they're all over the web, but now they are an open door to Gemini."
The tweet received 9,103 views, 91 likes, 43 reposts, and 23 replies. The video reached 82,000+ views within 72 hours. Hammond demonstrated the attack chain: visit a website, open dev tools, copy the AIza key from a Maps embed, execute a single curl command, receive 200 OK from the Gemini API. A commenter on the video cited the $82,000 Reddit case.
Hacker News Called It Google's Worst Security Vulnerability Ever
The primary Hacker News discussion, "Google API keys weren't secrets, but then Gemini changed the rules" (news.ycombinator.com/item?id=47156925), reached 416 points and 92+ comments.
The most upvoted comment thread, by user 827a, called it potentially "the worst security vulnerability Google has ever pushed to prod," warning: "This is going to break so many applications. No wonder they don't want to admit this is a problem."
User brookst wrote: "How did this get past any kind of security review at all? It's like using usernames as passwords."
User devsda noted the fundamental contradiction: "There are no 'leaked' keys if Google hasn't been calling them a secret."
A former Google employee (refulgentis) expressed disbelief: "I can't wrap my mind around what is an API key... This must be some sort of overlooked fuckup."
Notable counterpoint from user decimalenough: the Gemini API is not enabled by default and must be explicitly enabled per project. The problem occurs when developers enable Gemini on the same project as public-facing keys. Their recommendation: "GCP projects are free and provide strong security boundaries, so use them liberally and never reuse them for anything public-facing."
Additional coverage came from Simon Willison (February 26, 2026), who framed the core issue: "What makes this a privilege escalation rather than a misconfiguration is the sequence of events." Malwarebytes compared it to password reuse but noted: "The difference with this instance is that this time it's been effectively baked in by design rather than chosen by users." Security expert Tim Erlin of Wallarm called it "a great example of how risk is dynamic, and how APIs can be over-permissioned after the fact."
Broad media coverage included BleepingComputer, The Hacker News, Cybernews, The Stack, CSO Online, CIO, Malwarebytes, Bitdefender, SC Media, Cyber Security News, Security Boulevard, PPC Land, Digital Watch Observatory, and DEV Community, all published between February 26 and March 1, 2026.
Conclusion: A Systemic Design Failure With No Complete Fix in Sight
This vulnerability is not a bug in the traditional sense. It is the predictable consequence of bolting sensitive AI authentication onto a key management architecture designed for public billing identifiers.
Three facts define the severity. First, 2,863 verified vulnerable keys on the web plus up to 35,000 unverified keys in mobile apps constitute a known attack surface ranging from confirmed to potential. Second, Google's own documentation actively contradicts itself as of March 2, 2026, with the Gemini API key page calling keys passwords while Firebase documentation calls them non-secrets. Third, the absence of hard spending caps means a single compromised key can generate tens of thousands of dollars in charges before anyone notices.
Google's remediation addresses future key creation but does not retroactively fix the millions of keys already deployed in public code following Google's own guidance. The 90-day disclosure window closed without a root-cause fix. The Gemini API key page warns developers to treat keys like passwords, but the Firebase and Maps documentation that originally instructed developers to expose those same keys publicly remains substantively unchanged one week after disclosure. And the architecture that makes individual developers and small teams bear the full financial risk of this design flaw, while enterprise Vertex AI customers operate under a stronger authentication model, remains intact.
Frequently Asked Questions
What exactly happened?
Truffle Security discovered on February 25, 2026, that 2,863 Google API keys publicly embedded in websites (for services like Maps and Firebase) now silently authenticate to Google's Gemini AI API. This occurred because Google Cloud uses a single key format (AIza...) across all services. When the Gemini API is enabled on a Google Cloud project, every existing API key in that project gains Gemini access automatically, with no notification to the developer who originally deployed the key.
Is my Google API key affected?
Your key is affected if two conditions are true: (1) the Generative Language API (Gemini) is enabled on the same Google Cloud project that your API key belongs to, and (2) your API key is either unrestricted or explicitly includes the Generative Language API in its allowed services. You can check this in the Google Cloud Console under APIs & Services > Credentials for each of your projects.
How can I tell if the Generative Language API is enabled on my project?
Navigate to the Google Cloud Console, select your project, and go to APIs & Services > Enabled APIs & Services. Search for "Generative Language API." If it appears in the list, your project has Gemini enabled and all unrestricted API keys in that project can access Gemini endpoints.
What should I do immediately if I am affected?
First, rotate any exposed API keys immediately. Second, apply API restrictions to every key, scoping each to only the specific APIs it needs. Third, set up billing alerts and budget caps in the Google Cloud Console. Fourth, check whether any of your keys are embedded in public-facing code (websites, public repositories, mobile apps). Fifth, consider creating separate Google Cloud projects for public-facing services (Maps, Firebase) and private services (Gemini) to ensure strong security boundaries.
What data can an attacker access with a compromised key?
A valid key grants access to three endpoint categories: /files/ (uploaded datasets, documents, and images stored through the Gemini API), /cachedContents/ (cached context data including system prompts, large documents, and proprietary business logic), and /models (a listing of all available Gemini models). The attacker can also make billable Gemini API calls charged to the victim's account.
Can an attacker rack up charges on my account?
Yes. There is no default hard spending cap on Google Cloud. Billing alerts are notifications only, meaning they do not stop usage. The documented $82,314.44 case demonstrates that roughly $41,000 per day is achievable with a single stolen key. Rate limits increase as cumulative spending grows, creating a feedback loop where more abuse enables faster abuse.
Did Google fix the vulnerability?
Google has implemented partial mitigations: new keys created through AI Studio default to Gemini-only scope, leaked keys are being blocked from Gemini access, and proactive notifications are being deployed. However, the root-cause fix (an architectural separation between public-identifier keys and authentication-credential keys) was not deployed when the 90-day disclosure window expired on February 19, 2026. Google's Gemini-specific API key page (ai.google.dev/gemini-api/docs/api-key) does warn developers to treat keys like passwords and never expose them client-side, but the Firebase security checklist and Maps documentation still instruct developers to embed keys in client code as of March 2, 2026, creating contradictory guidance within Google's own documentation.
Why did Google initially dismiss this as "Intended Behavior"?
Google's initial triage on November 25, 2025, classified the report as "Intended Behavior," effectively dismissing it. Truffle Security pushed back with concrete evidence from Google's own infrastructure, including a key on a Google product website that authenticated to Gemini. Google reclassified to "Bug" on December 2, 2025, upgraded the severity, and began remediation.
How many keys are affected in total?
Truffle Security identified and individually verified 2,863 live vulnerable keys on public websites via the November 2025 Common Crawl dataset. Quokka identified over 35,000 unique Google API keys hardcoded in 250,000 Android apps (39.5% of all apps scanned), but these were extracted rather than individually verified against Gemini endpoints. A key is only vulnerable if the Generative Language API is enabled on its associated project, so the confirmed-vulnerable count is 2,863 and the potential-but-unverified surface from mobile apps is 35,000. The actual number of vulnerable keys is likely higher than 2,863, as these scans represent only a subset of all deployed keys.
Are enterprise customers on Vertex AI affected?
Enterprise customers accessing Gemini through Vertex AI (aiplatform.googleapis.com) typically authenticate via OAuth2 access tokens, Application Default Credentials, or service account JSON keys, not AIza... API keys. Google's Vertex AI documentation recommends API keys only for testing and ADC for production. The vulnerability primarily impacts developers using the Gemini Developer API (generativelanguage.googleapis.com), which authenticates through AIza... keys. This includes individual developers, small teams, startups, and students who obtained keys through Google AI Studio.
Does this affect Firebase keys specifically?
Yes. Firebase API keys use the same AIza... format as all other Google Cloud API keys. If a Firebase key belongs to a Google Cloud project where the Generative Language API is enabled, and if the key is unrestricted (which is the default for manually created keys), then that Firebase key can access Gemini endpoints. Since May 2024, Firebase auto-created keys receive restrictions limited to Firebase APIs, but keys created before that date, or created manually through the GCP Console, may still be unrestricted.
Is this the same issue as the Chat & Ask AI data breach?
They are separate incidents but share the same root cause: Google's "insecure by default" design philosophy. The Chat & Ask AI breach (January 2026) exposed 300 million messages from 25 million users due to a Firebase misconfiguration (public read/write access without authentication). The API key vulnerability is a privilege escalation where keys that were safe for years became dangerous when Gemini was enabled. Both issues stem from insecure defaults, retroactive risk escalation, and documentation that creates a false sense of security.
Has Google waived the $82,314.44 charges for the affected developer?
As of the reporting by Boing Boing on February 27, 2026, Google has not forgiven the charges. The developer in Vietnam, whose normal monthly spend was $180, reported facing bankruptcy. A separate case involving a student billed $55,444 after exposing a key on GitHub was waived by Google, per CIO, but that appears to be an exception rather than a standard policy.
Where can I read Truffle Security's original disclosure?
The full disclosure is published at trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules, dated February 25, 2026.
Sources
- Truffle Security, "Google API Keys Weren't Secrets. But then Gemini Changed the Rules." February 25, 2026.
- BleepingComputer, "Previously harmless Google API keys now expose Gemini AI data." February 26, 2026.
- The Hacker News, "Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement." February 28, 2026.
- Boing Boing, "Stolen Gemini API key racks up $82,000 in 48 hours for solo dev." February 27, 2026.
- Quokka, "Google API Keys Now A Security Risk Thanks to Gemini." February 27, 2026.
- Cybernews, "Thousands of exposed Google API keys are now a ticking bomb for attackers exploiting Gemini." February 2026.
- The Stack, "Google created a Gemini vulnerability via API keys: report." February 2026.
- CSO Online / CIO, "'Silent' Google API key change exposed Gemini AI data." February 2026.
- Malwarebytes, "Public Google API keys can be used to expose Gemini AI data." February 27, 2026.
- Malwarebytes, "AI chat app leak exposes 300 million messages tied to 25 million users." February 2026.
- Google Firebase Security Checklist, firebase.google.com/support/guides/security-checklist
- Google Firebase API Keys, firebase.google.com/docs/projects/api-keys
- Google Gemini API Keys, ai.google.dev/gemini-api/docs/api-key
- Google Gemini API Troubleshooting, ai.google.dev/gemini-api/docs/troubleshooting
- Google Vertex AI Authentication, docs.cloud.google.com/vertex-ai/generative-ai/docs/migrate/openai/auth-and-credentials
- Google Vertex AI API Keys, docs.cloud.google.com/vertex-ai/generative-ai/docs/start/api-keys
- Google Gemini API Pricing, ai.google.dev/gemini-api/docs/pricing
- Alphabet Q4 2025 Earnings, blog.google/company-news/inside-google/message-ceo/alphabet-earnings-q4-2025/
- John Hammond on X, x.com/_JohnHammond/status/2027069413137002899
- Hacker News Discussion, news.ycombinator.com/item?id=47156925
- Simon Willison, simonwillison.net/2026/Feb/26/google-api-keys/