Google Gemini AI Vulnerabilities Expose Users to Prompt Injection and Cloud Exploits

Date: September 30, 2025Category: Artificial Intelligence / Cybersecurity

Cybersecurity researchers have recently disclosed a series of critical vulnerabilities in Google's Gemini AI assistant that could have put users at significant risk of privacy breaches and data theft. Dubbed the “Gemini Trifecta,” these flaws impacted three distinct components of the Gemini AI suite and have now been patched.

The Gemini Trifecta Vulnerabilities

According to Tenable security researcher Liv Matan, the vulnerabilities consisted of:

  • Prompt Injection Flaw in Gemini Cloud Assist: Attackers could exploit the tool’s log summarization capabilities to compromise cloud-based resources. By embedding malicious prompts in HTTP request headers, such as the User-Agent, attackers could target services including Cloud Functions, Cloud Run, App Engine, Compute Engine, Cloud Endpoints, and various APIs like Cloud Asset and Cloud Monitoring. This flaw potentially allowed attackers to exfiltrate sensitive data or query assets without user awareness.
  • Search-Injection Flaw in Gemini Search Personalization: This vulnerability could enable attackers to manipulate a user’s search history and inject malicious prompts into the AI chatbot’s interactions. By leveraging JavaScript to poison Chrome search histories, attackers could trick Gemini into revealing saved information and location data, bypassing the model’s inability to distinguish between legitimate user queries and injected instructions.
  • Indirect Prompt Injection Flaw in Gemini Browsing Tool: Attackers could exploit Gemini’s web page summarization feature to exfiltrate sensitive information to external servers. By crafting prompts within web content, threat actors could instruct Gemini to transmit private data without requiring any rendered links or images.

How These Attacks Could Work

For example, the Cloud Assist flaw could be abused to instruct Gemini to query public assets or check for IAM misconfigurations, embedding sensitive data in hyperlinks for the attacker. The search-injection flaw relied on poisoning a user’s browsing history with malicious prompts, which would then be executed when interacting with Gemini.

Tenable notes that these vulnerabilities underscore a critical shift: AI tools themselves can become the attack vehicle, not just the target. Organizations adopting AI must consider these risks and implement strict policies and monitoring to secure AI deployments.

Google’s Response

Following responsible disclosure, Google has taken measures to prevent exploitation:

  • Hyperlinks are no longer rendered in log summarization responses.
  • Additional hardening measures have been applied to safeguard against prompt injections.

Broader Implications

This revelation follows a similar incident reported by CodeIntegrity, which demonstrated that Notion's AI agents could be tricked into exfiltrating confidential data using hidden instructions in a PDF. The report highlights a growing concern in AI security: agents with broad workspace access can perform chained, multi-step tasks across documents, databases, and external connectors in ways traditional access controls never anticipated.