
Google Told Developers to Paste API Keys in Public for a Decade. Gemini Just Made That Catastrophic.


For over ten years, Google's own documentation told developers that API keys starting with AIza were safe to include in public-facing code. Firebase's official security checklist said it explicitly: "API keys are not secrets." Google Maps guides instructed developers to paste their keys directly into HTML. Millions of websites followed that guidance and embedded those keys in JavaScript that anyone could read by right-clicking "View Page Source."

Then Google launched Gemini. And without telling anyone, every one of those public keys quietly became a backdoor into private AI endpoints.

Security researchers at Truffle Security published their findings on February 26, revealing that nearly 3,000 live Google API keys (deployed publicly following Google's own instructions) now authenticate to Gemini's most sensitive endpoints. An attacker needs no special skills or infrastructure to exploit this. They visit your website, copy the key from your page source, and start running up your bill.

One Reddit user woke up to find $82,314.44 in charges on their Google Cloud account after a key was stolen and abused between February 11 and 12. Their usual monthly spend was $180.

What Actually Happened

The problem is architectural, and understanding it matters for anyone who has ever built anything on Google Cloud.

Google API keys (formatted as AIza... strings) are project-scoped, not service-scoped. When you create a key for Google Maps, that key belongs to your Google Cloud project. It does not inherently have access to any specific service; it simply belongs to the project. For years, this was fine. Google Maps keys were effectively billing identifiers: if someone scraped your key from your website, the worst they could do was consume your Maps quota, which was largely free anyway.

Then the Gemini API arrived. When the Gemini API (formally called the Generative Language API) is enabled on a Google Cloud project, every existing API key in that project silently inherits access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.

This means a developer who embedded a Maps key in their website in 2021, following Google's documented instructions, may have unknowingly created a live Gemini credential the moment a teammate enabled the Generative Language API for a prototype last year. The key never changed. The code never changed. But the permissions attached to that key changed completely, and nobody was told.
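As a rough mental model of the inheritance described above (the dataclasses and service names here are illustrative, not Google's actual internals), the scoping behaviour looks like this: an unrestricted key's effective access is simply whatever the project has enabled.

```python
# Toy model: keys belong to projects, and an unrestricted key's access is
# whatever services the project has enabled. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ApiKey:
    name: str
    api_restrictions: set = field(default_factory=set)  # empty = unrestricted


@dataclass
class Project:
    enabled_services: set = field(default_factory=set)
    keys: list = field(default_factory=list)


def effective_access(project: Project, key: ApiKey) -> set:
    """Services the key can reach: project services, narrowed by restrictions."""
    if key.api_restrictions:
        return project.enabled_services & key.api_restrictions
    return set(project.enabled_services)
```

Enable "generativelanguage.googleapis.com" on the project and every unrestricted key in it gains Gemini access in the same moment, with no change to the key itself. That is the whole vulnerability in miniature.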

The attack itself is trivial. An attacker visits your website, views the page source, copies your AIza... key, and runs a basic API call to https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY. Instead of a 403 Forbidden, they get a 200 OK.

From there they can access private uploaded files and cached Gemini content, exhaust your API quotas and shut down your legitimate services, and run up AI usage bills worth thousands of dollars per day against your billing account.
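If you want to run the same check against your own key before an attacker does, it is small enough to script. A minimal Python sketch using only the standard library (the endpoint is the one from the attack above; treating anything other than 200 or 403 as inconclusive is a simplifying assumption):

```python
# Check whether an AIza key authenticates to the Gemini files endpoint.
import urllib.error
import urllib.parse
import urllib.request

GEMINI_FILES_ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/files"


def check_key(api_key: str) -> int:
    """Call the Gemini files endpoint with the key; return the HTTP status."""
    url = f"{GEMINI_FILES_ENDPOINT}?{urllib.parse.urlencode({'key': api_key})}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code


def interpret_status(status: int) -> str:
    """Translate the HTTP status into a verdict on the key."""
    if status == 200:
        return "VULNERABLE: key authenticates to private Gemini endpoints"
    if status == 403:
        return "OK: key is blocked from the Generative Language API"
    return f"INCONCLUSIVE: unexpected status {status}"
```

A 403 is the safe outcome for a public Maps or Firebase key; a 200 means the key is live against Gemini and should be restricted or rotated immediately.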

The Scale of the Exposure

Truffle Security did not find these keys by hacking anything. They scanned the November 2025 Common Crawl dataset (a roughly 700 terabyte archive of publicly scraped web content) and identified 2,863 live Google API keys vulnerable to this vector.

Common Crawl is a publicly available dataset that anyone can download. Security researchers use it. So do the threat actors hunting for exposed credentials at scale.

The 2,863 keys were not all small side projects. Truffle's scan turned up keys belonging to major financial institutions, security companies, global recruiting firms, and — in a detail that borders on parody — Google itself.

For Kenyan developers and organisations, the exposure surface is real. Any project that uses Google Maps, Firebase Authentication, Cloud SQL, or any other Google service that historically treated AIza keys as public identifiers, and that has since added Gemini to the same Google Cloud project, is potentially affected. Firebase Authentication in particular is extremely common in Kenyan-built apps; it is the default auth provider for many startups precisely because Google marketed it as safe and simple.

Google's Response And Its Limits

Truffle Security disclosed the vulnerability to Google on November 21, 2025. Google initially classified the behaviour as "Intended Behavior", meaning they considered it working as designed, not a bug. After researchers provided concrete evidence from Google's own infrastructure, the security team took it more seriously.

On January 13, 2026, Google classified it as "Single-Service Privilege Escalation, READ", a Tier 1 vulnerability. By February 2, 2026, Google confirmed the team was still working on the root-cause fix. The 90-day disclosure window closed on February 19, 2026, with the root-cause fix still in progress.

When Truffle published their findings publicly on February 26, Google issued a statement acknowledging the report and confirming they had implemented measures to detect and block leaked API keys attempting to access the Gemini API. Google has also committed to three forward-looking changes: new AI Studio keys will default to Gemini-only scope, leaked keys will be proactively blocked from Gemini access, and notifications will go out when leaks are detected.

What remains unconfirmed is whether Google has individually notified all 2,863 identified affected project owners. Given that many of those developers may not follow security news closely (they built a Maps embed years ago and moved on), direct notification is arguably the most important step. Google has not confirmed it has happened.

The root-cause fix, preventing Gemini API activation from silently upgrading existing project keys, is still not complete as of publication.

How to Check If You Are Affected

This is the section that matters most if you have built anything on Google Cloud or Firebase.

Step 1: Check if Gemini is enabled on your project

Go to the Google Cloud Console at console.cloud.google.com. Select your project from the top navigation. Go to APIs & Services → Enabled APIs & Services. Look for "Generative Language API" in the list. If it is there and you did not intentionally enable it, or you enabled it for a brief test and forgot about it, you are potentially affected.
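If you manage several projects, clicking through the console does not scale. A hedged Python sketch of the same check, assuming the gcloud CLI is installed and authenticated on your machine (generativelanguage.googleapis.com is the service identifier for the Generative Language API):

```python
# Scripted version of Step 1: list a project's enabled services via gcloud
# and check for the Generative Language API. Assumes an authenticated gcloud.
import json
import subprocess

GEMINI_SERVICE = "generativelanguage.googleapis.com"


def enabled_services(project_id: str) -> list:
    """Return enabled service names for a project via `gcloud services list`."""
    out = subprocess.run(
        ["gcloud", "services", "list", "--enabled",
         f"--project={project_id}", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [svc["config"]["name"] for svc in json.loads(out)]


def gemini_enabled(services: list) -> bool:
    """Check whether the Generative Language API appears among the services."""
    return GEMINI_SERVICE in services
```

Run `gemini_enabled(enabled_services("your-project-id"))` for each project you own; any True result means every unrestricted key in that project needs the audit in the next step.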

Step 2: Audit your API keys

Go to APIs & Services → Credentials. Every key listed there has potentially inherited Gemini access if the Generative Language API is enabled on the project. Look at each key and check its restrictions.

Step 3: Apply restrictions immediately

Click on each key. Under "API restrictions," switch from "Don't restrict key" to "Restrict key." Select only the APIs that key actually needs to function — Google Maps JavaScript API for a Maps key, Firebase-related APIs for a Firebase key. Do not include the Generative Language API unless that specific key is genuinely intended for Gemini access.
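The same restriction can be applied programmatically through Google's API Keys API (v2), where per-API limits live under restrictions.apiTargets. This sketch only constructs the restrictions body; actually sending the PATCH requires an authenticated client, and the service name shown is illustrative:

```python
# Build an API Keys v2 restrictions body limiting a key to specific APIs.
# Sending it (via PATCH on projects.locations.keys, or the equivalent
# `gcloud services api-keys update` command) requires authentication.
def restriction_patch_body(allowed_services: list) -> dict:
    """Return a restrictions body allowing only the named services."""
    return {
        "restrictions": {
            "apiTargets": [{"service": svc} for svc in allowed_services]
        }
    }


# Example: a key used only for the Maps JavaScript API (service name assumed).
maps_key_body = restriction_patch_body(["maps-backend.googleapis.com"])
```

The important property is what the body omits: as long as generativelanguage.googleapis.com is not in the allowed list, the key cannot be used against Gemini even if the project has the API enabled.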


If you have a key that is dedicated to Gemini usage, keep it separate, restricted to the Generative Language API only, and never embed it in client-side code. Gemini API keys should only ever exist in server-side environments where they cannot be read from page source.

Step 4: Rotate any key that may have been publicly exposed

If a key has ever appeared in a public GitHub repository, in client-side JavaScript on a public website, or in any code that was accessible without authentication, rotate it. Generating a new key in the console invalidates the old one. Then audit your codebase to replace the old key everywhere it appears before the new one goes live.
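A small standard-library sketch of that audit step, for finding every file that still references the old key (the file extensions are placeholder assumptions; adjust for your stack):

```python
# Find files under a directory tree that still contain the old key string,
# so it can be replaced before the rotated key goes live.
from pathlib import Path


def files_containing(root: str, secret: str,
                     exts=(".js", ".html", ".py", ".env")) -> list:
    """Return paths under root whose text contains the given key string."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix in exts and path.is_file():
            try:
                if secret in path.read_text(encoding="utf-8", errors="ignore"):
                    hits.append(str(path))
            except OSError:
                continue  # unreadable file: skip rather than crash the audit
    return sorted(hits)
```

Run it with the old key's value before deploying the replacement; an empty result is the goal.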

Step 5: Scan your codebase

Tools like TruffleHog can scan codebases and CI/CD pipelines to identify live Google API keys with Gemini access: rather than just pattern-matching the key format, they verify whether those keys are actually live and exploitable. Run a scan across your repositories before assuming you are clean.
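If you cannot run TruffleHog immediately, a first-pass candidate scan is a few lines of Python. This only pattern-matches the documented AIza key shape; unlike TruffleHog, it does not verify whether a match is live:

```python
# First-pass scan for AIza-format key candidates in source text.
# Standard Google API key shape: "AIza" followed by 35 URL-safe characters.
import re

AIZA_PATTERN = re.compile(r"AIza[0-9A-Za-z\-_]{35}")


def find_candidate_keys(text: str) -> list:
    """Return unique AIza-format strings found in the given source text."""
    return sorted(set(AIZA_PATTERN.findall(text)))
```

Feed it the contents of your bundles, templates, and config files; any hit in client-side code is a key worth checking and restricting.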

The Broader Lesson This Should Teach

The specific vulnerability will be patched. Google will finish the root-cause fix, new keys will default to scoped access, and the 2,863 exposed keys have already had their Gemini access revoked by Google. For those specific keys, the immediate risk is addressed.

But the pattern this exposes is not going away. A credential that was safe by design in 2022 may be a serious liability in 2026, not because anyone made a mistake, but because the rules quietly changed.

This is what happens when AI capabilities are bolted onto existing platforms at speed. The Gemini API inherited a key management architecture built for a billing-identifier model. Nobody sat down and asked: "If we enable this new service on all existing projects, what permissions do existing keys inherit?" Or if someone did ask, the answer did not make it into the release process. The result is that following Google's explicitly documented, decade-old guidance created a security vulnerability that developers had no way to anticipate.

For Kenyan developers building on Google Cloud and Firebase (which is a significant portion of the startup ecosystem here) the actionable response is not to stop using Google services. It is to treat credential hygiene as a first-class concern rather than something you address when something goes wrong. The $82,314 overnight bill, and the separately reported $55,444 charge to a student, are not edge cases. They are previews of what happens when API key discipline is treated as optional.

Restrict your keys to what they actually need. Keep Gemini credentials server-side only. Audit your projects quarterly. And the next time a platform you build on launches a major new service, check what permissions your existing credentials just quietly inherited.
