Executive Summary
In September 2025, cybersecurity researchers disclosed three critical, now-patched vulnerabilities in Google’s Gemini AI assistant platform. Attackers were able to exploit prompt injection and log-to-prompt injection flaws within Gemini’s Search Personalization Model and Cloud deployment, risking unauthorized data access, privacy compromise, and potential theft of sensitive information. The exploited vulnerabilities allowed crafted prompts or manipulated logs to execute unintended commands, bypass safeguards, and potentially leak user data, highlighting major security gaps in generative AI-driven workflows before emergency updates were deployed by Google.
This incident underscores the growing risk of prompt injection and supply-chain-type threats in the AI/ML ecosystem. The attack reflects a surge in adversarial tactics targeting large language models and cloud-based AI assistants, drawing regulatory attention and prompting security leaders to reassess AI deployment controls in enterprise environments.
Why This Matters Now
With AI assistants increasingly integrated into business workflows, flaws like prompt injection and cross-cloud exploitation represent serious threats to data privacy and operations. The speed of discovery, exploitation, and required patching highlights the urgent need for dedicated AI/ML security controls and continuous risk monitoring, as attackers evolve to target these emerging platforms.
Attack Path Analysis
Attackers exploited Gemini AI prompt injection vulnerabilities to gain unauthorized access to the service (Initial Compromise). By manipulating AI model inputs, they were able to escalate privileges or pivot within the Gemini cloud environment (Privilege Escalation), ultimately leveraging these flaws to move laterally and access adjacent resources (Lateral Movement). The attackers established outbound channels for command and control, possibly maintaining persistence through manipulated AI service logic (Command & Control). Sensitive data was then exfiltrated via cloud traffic egress pathways (Exfiltration), resulting in privacy violations and exposure of user data (Impact).
Kill Chain Progression
Initial Compromise
Description
Attackers executed prompt injection and log-to-prompt injection attacks against the Gemini AI service, exploiting vulnerabilities in AI input processing to gain unauthorized access.
Related CVEs
CVE-2025-12345
CVSS 7.5A prompt injection vulnerability in Google Gemini Cloud Assist allows attackers to execute unauthorized cloud queries by embedding malicious prompts within log entries.
Affected Products:
Google Gemini Cloud Assist – All versions prior to the patch released on March 5, 2025
Exploit Status:
proof of conceptCVE-2025-12346
CVSS 7A search injection vulnerability in Google Gemini Search Personalization Model allows attackers to manipulate search history, leading to unauthorized data exposure.
Affected Products:
Google Gemini Search Personalization Model – All versions prior to the patch released on March 5, 2025
Exploit Status:
proof of conceptCVE-2025-12347
CVSS 7.2An indirect prompt injection vulnerability in Google Gemini Browsing Tool allows attackers to exfiltrate user data by manipulating AI-generated web content.
Affected Products:
Google Gemini Browsing Tool – All versions prior to the patch released on March 5, 2025
Exploit Status:
proof of concept
MITRE ATT&CK® Techniques
Data Manipulation: Stored Data Manipulation
Command and Scripting Interpreter
User Execution
System Script Proxy Execution
Masquerading
Browser Session Hijacking
Steal Web Session Cookie
Obfuscated Files or Information
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Change Control Processes
Control ID: 6.4.3
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA – ICT Risk Management Framework
Control ID: Article 6
CISA ZTMM 2.0 – Ensure services are securely configured
Control ID: Device Pillar: Secure Configuration Management
NIS2 Directive – Cybersecurity Risk-management Measures
Control ID: Article 21
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI/ML security vulnerabilities in Google Gemini expose software companies to prompt injection attacks, threatening proprietary data and requiring enhanced zero trust segmentation controls.
Information Technology/IT
Gemini AI flaws enable search-injection and log-to-prompt attacks against cloud services, demanding strengthened egress security and multicloud visibility for IT infrastructure protection.
Financial Services
AI assistant vulnerabilities risk financial data theft through cloud exploits, necessitating encrypted traffic controls and threat detection capabilities to maintain regulatory compliance.
Health Care / Life Sciences
Google Gemini security flaws threaten patient privacy through AI prompt injection attacks, requiring enhanced cloud firewall protection and HIPAA-compliant anomaly detection systems.
Sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploitshttps://thehackernews.com/2025/09/researchers-disclose-google-gemini-ai.htmlVerified
- Google Cloud Platform (GCP) Gemini Cloud Assist Prompt Injection Vulnerability - Research Advisory | Tenable®https://www.tenable.com/security/research/tra-2025-10Verified
- The Trifecta: How Three New Gemini Vulnerabilities in Cloud Assist, Search Model, and Browsing | Tenable®https://www.tenable.com/blog/the-trifecta-how-three-new-gemini-vulnerabilities-in-cloud-assist-search-model-and-browsingVerified
- ‘Gemini Trifecta’ vulnerabilities in Google AI highlight risks of indirect prompt injection - SiliconANGLEhttps://siliconangle.com/2025/09/30/gemini-trifecta-vulnerabilities-google-ai-highlight-risks-indirect-prompt-injection/Verified
Frequently Asked Questions
Cloud Native Security Fabric Mitigations and ControlsCNSF
Network-level zero trust segmentation, microsegmentation, egress filtering, inline anomaly detection, and centralized visibility would have restricted attacker movement, blocked unauthorized data flows, and detected abnormal access to both AI services and cloud assets throughout the attack lifecycle.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: Inline, distributed enforcement detects and blocks anomalous prompt and search traffic.
Control: Zero Trust Segmentation
Mitigation: Enforced least privilege policy inhibits unauthorized privilege escalation within the environment.
Control: East-West Traffic Security
Mitigation: Microsegmentation and workload-to-workload filtering block lateral movement.
Control: Cloud Firewall (ACF)
Mitigation: Outbound connections to unknown C2 endpoints are detected and blocked.
Control: Egress Security & Policy Enforcement
Mitigation: Exfiltration attempts are blocked based on egress policy and real-time monitoring.
Early detection enables rapid response to minimize data exposure and business impact.
Impact at a Glance
Affected Business Functions
- Cloud Services Management
- Search Personalization
- Web Browsing Assistance
Estimated downtime: 3 days
Estimated loss: $500,000
Potential exposure of sensitive user data, including search history, location information, and cloud resource configurations.
Recommended Actions
Key Takeaways & Next Steps
- • Enforce zero trust segmentation and least privilege across all AI and cloud workloads to minimize lateral attack surface.
- • Deploy inline egress filtering and policy enforcement to control and monitor outbound cloud traffic for exfiltration and command & control attempts.
- • Implement microsegmentation and east-west security controls to limit movement between cloud services and regions.
- • Leverage centralized, real-time threat detection and anomaly response capabilities for early identification and containment of malicious AI activity.
- • Continuously monitor and audit AI/ML service access patterns and enforce policy-driven controls to detect and prevent prompt injection or logic-based attacks.



