A serious privilege-escalation flaw affecting Google Cloud API keys means that legacy public-facing keys can now silently grant unauthorized access to Google’s Gemini AI endpoints, exposing private files, stored data, and billable AI usage to attackers.
For more than a decade, Google explicitly advised developers to embed API keys (strings beginning with AIza...) directly into client-side HTML and JavaScript.
Firebase’s official security checklist stated that API keys are not secrets, and Google Maps documentation instructed developers to paste keys directly into public web pages. These keys functioned as project identifiers for billing rather than as authentication tokens. That guidance is now dangerously obsolete.
When the Gemini API (Generative Language API) is enabled on a Google Cloud project, every existing API key in that project silently inherits access to sensitive Gemini endpoints, with no alert, confirmation dialog, or email notification to the developer.

A key issued three years ago for a Maps integration can become, overnight, a live credential with access to uploaded AI files, cached context, and billable inference services.
Researchers at Truffle Security have detailed why this scenario constitutes privilege escalation rather than a mere misconfiguration. The crucial factor is the sequence of events. A developer followed Google’s guidance and embedded a Maps API key in public JavaScript. Later, another team member enabled the Gemini API on the same Cloud project.
The public key instantly gained access to sensitive Gemini endpoints, and the original developer was never notified.
The vulnerability stems from two well-known weakness classes: CWE-1188 (Initialization of a Resource with an Insecure Default) and CWE-269 (Improper Privilege Management). By default, newly created API keys in Google Cloud are “Unrestricted,” meaning they can call every enabled API in the project from the moment they are created, including Gemini.
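The insecure default can be avoided by scoping a key to specific APIs at creation time, so it never inherits access to APIs enabled later. A minimal sketch using the gcloud CLI (the project ID and display name are hypothetical placeholders; `generativelanguage.googleapis.com` is the service name seen in the Gemini endpoint URLs):

```shell
# Create a key restricted to the Generative Language API only.
# A key scoped this way does NOT silently gain access to other APIs
# that are enabled on the project later (and vice versa: a key scoped
# to Maps would never inherit Gemini access).
# PROJECT_ID and the display name are placeholders.
gcloud services api-keys create \
  --project=PROJECT_ID \
  --display-name="gemini-only-key" \
  --api-target=service=generativelanguage.googleapis.com
```

The same `--api-target` restriction can be applied to existing keys with `gcloud services api-keys update`.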
What Attackers Can Accomplish
The attack requires no access to the victim’s infrastructure. An attacker visits a public web page, extracts the AIza... key from the source code, and queries the Gemini API directly:
```shell
curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
```
A successful request returns a 200 OK, granting access to:
- Private files and cached data — The /files/ and /cachedContents/ endpoints can expose uploaded datasets, documents, and stored AI context.
- Financial damage — Attackers can drain Gemini API quotas or run up thousands of dollars in daily charges against the victim’s billing account.
- Service disruption — Exhausted quotas can take down legitimate Gemini-powered services.
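The whole attack flow described above fits in two small shell functions, sketched here under the assumptions in the text (the regex is the documented AIza prefix followed by 35 URL-safe characters; this is an illustration of the exposure check, not a working exploit):

```shell
# extract_keys: list unique candidate Google API keys (the literal
# prefix "AIza" followed by 35 URL-safe characters) found in a saved
# page or source file.
extract_keys() {
  grep -Eo 'AIza[0-9A-Za-z_-]{35}' "$1" | sort -u
}

# probe_key: query the Gemini files endpoint with a candidate key and
# print the HTTP status code; 200 indicates the key has Gemini access.
probe_key() {
  curl -s -o /dev/null -w '%{http_code}' \
    "https://generativelanguage.googleapis.com/v1beta/files?key=$1"
}
```

An attacker would save a target page (`curl -s "$URL" -o page.html`), run `extract_keys page.html`, and call `probe_key` on each hit. Defenders can run the same two steps against their own sites to confirm exposure.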
Truffle Security scanned the November 2025 Common Crawl dataset, a ~700 TiB corpus of publicly crawled web content, and identified 2,863 live Google API keys vulnerable to this attack path.
Affected organizations included major financial institutions, security companies, global recruiting firms, and Google itself. The researchers confirmed that at least one key, embedded on a Google product website since February 2023 (before Gemini existed), had silently gained full access to Gemini’s model endpoints.

Google has outlined a remediation plan that includes scoped defaults for AI Studio keys (Gemini-only access by default), automatic blocking of leaked keys found in the wild, and proactive notifications to developers when exposed keys are discovered.
However, the researchers note that as of the disclosure date, the root-cause fix was still in progress and no comprehensive architectural change had been confirmed.
What Developers Should Implement Now
Organizations using any Google Cloud service (Maps, Firebase, the YouTube Data API, and others) should act promptly:
- Audit all GCP projects — In APIs & Services > Enabled APIs, check each project for the “Generative Language API.”
- Review API key configurations — Flag any unrestricted keys, and any keys that explicitly allow the Generative Language API.
- Confirm no keys are public — Search client-side JavaScript, public repositories, and CI/CD pipelines for exposed AIza... strings.
- Rotate exposed keys immediately — Prioritize older keys deployed under the earlier “keys are safe to share” guidance.
- Scan with TruffleHog — Run trufflehog filesystem /path/to/your/code --only-verified to identify live, verified Gemini-accessible keys in codebases.
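The first two audit steps can be partially scripted with the gcloud CLI. A sketch, assuming a reasonably current gcloud; the `--filter` and `--format` field names are assumptions about the service and key resource schemas and may need adjusting:

```shell
# For each project: report whether the Generative Language API is
# enabled, then list API keys with their restrictions so unrestricted
# keys (empty restrictions) can be flagged for review.
for project in $(gcloud projects list --format='value(projectId)'); do
  echo "== $project =="
  gcloud services list --enabled --project="$project" \
    --filter='config.name=generativelanguage.googleapis.com' \
    --format='value(config.name)'
  gcloud services api-keys list --project="$project" \
    --format='table(displayName,restrictions)'
done
```

Any project that prints `generativelanguage.googleapis.com` alongside keys with no API restrictions deserves immediate attention.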
The pattern exposed here, in which public identifiers silently acquire sensitive AI privileges, is unlikely to be unique to Google. As AI capabilities are bolted onto existing platforms across the industry, the attack surface of legacy credentials will keep expanding in unexpected ways.