Executive Summary
In January 2026, researchers at Miggo Security demonstrated a semantic prompt injection attack against Google Gemini, the company’s flagship AI assistant integrated across Google Workspace. By sending a maliciously crafted Calendar invitation containing natural-language instructions in the event’s description, attackers could leverage Gemini’s automated parsing and task execution to exfiltrate sensitive calendar data. When a victim queried Gemini about their schedule, the model would follow the embedded instructions, summarize all meetings—including private ones—and leak the information by generating a new event visible to the attacker. This bypassed existing filtering mechanisms and exposed data without explicit user approval.
The incident underscores the rising concern over generative AI systems’ susceptibility to context-driven prompt injection and logic abuse. It highlights an urgent need for context-aware and semantic-level defenses in AI-integrated business applications, as AI assistants become deeply embedded in productivity suites throughout the enterprise sector.
Why This Matters Now
As organizations increasingly rely on AI assistants for sensitive workflows, semantic prompt injection attacks expose a critical security gap with immediate real-world impact. The evolving sophistication of natural-language exploits bypasses conventional threat detection, making it urgent to adapt controls and monitoring for AI-driven workflows before widespread adoption presents systemic risk.
Attack Path Analysis
The attack began when an adversary sent a Google Calendar invite containing a crafted event description using prompt injection to target Gemini’s LLM. No technical exploit or credential theft occurred, but Gemini’s routine operation later processed the attacker’s prompt, escalating the model's access to private meeting data. Lateral movement was not required, as Gemini’s integrated access provided exposure. The attacker maintained control by pre-seeding the event payload, awaiting victim interaction. Once triggered, Gemini extracted and summarized private calendar data into a new event, which became accessible to the attacker, resulting in the exfiltration of sensitive information and potential privacy or reputational impact.
Kill Chain Progression
Initial Compromise
Description
An attacker delivers a malicious Google Calendar invite with a prompt injection payload embedded in the event’s description field, leveraging Gemini LLM's data ingestion upon user interaction.
MITRE ATT&CK® Techniques
Data Manipulation: Stored Data Manipulation
Exploitation for Defense Evasion
Steal Web Session Cookie
User Execution: Malicious File
Access Token Manipulation
Account Discovery: Email Account
Input Capture: Credential API Hooking
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Protect stored account data
Control ID: 3.1
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (Digital Operational Resilience Act) – Protection and Prevention
Control ID: Art. 9(2)
CISA ZTMM 2.0 – Data Loss Prevention
Control ID: Pillar: Data — Data Loss Prevention (DLP)
NIS2 Directive – Cybersecurity Risk Management Measures
Control ID: Article 21
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI prompt injection vulnerabilities in calendar systems expose enterprise software to data exfiltration through malicious natural language instructions and cloud-native security fabric gaps.
Information Technology/IT
Gemini AI assistant compromises threaten IT infrastructure through egress security policy enforcement failures and multicloud visibility control weaknesses in enterprise environments.
Financial Services
Calendar data leakage via AI manipulation poses significant compliance risks under NIST frameworks, threatening sensitive financial communications and client confidentiality requirements.
Health Care / Life Sciences
AI-driven calendar invite attacks compromise HIPAA compliance through unauthorized access to protected health information and encrypted traffic security vulnerabilities in healthcare systems.
Sources
- Gemini AI assistant tricked into leaking Google Calendar datahttps://www.bleepingcomputer.com/news/security/gemini-ai-assistant-tricked-into-leaking-google-calendar-data/Verified
- Researchers design 'promptware' attack with Google Calendar to turn Gemini evilhttps://arstechnica.com/google/2025/08/researchers-use-calendar-events-to-hack-gemini-control-smart-home-gadgets/Verified
- Google fixes GeminiJack zero-click exposing corporate Gmail, Calendar invites, shared Docshttps://cybernews.com/security/google-geminijack-zero-click-flaw-leaks-corporate-gmail-calendar-docs/Verified
Frequently Asked Questions
Cloud Native Security Fabric Mitigations and ControlsCNSF
CNSF-aligned controls such as Zero Trust Segmentation, multicloud visibility, identity-based policies, and egress enforcement would restrict opportunities for untrusted prompts to traverse trust boundaries, detect anomalous SaaS behaviors, and prevent or alert on data exfiltration, even in scenarios where infrastructure is not directly compromised.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: Inline policy enforcement could have detected or blocked suspicious prompt patterns.
Control: Zero Trust Segmentation
Mitigation: Granular identity-based segmentation would have minimized the scope of data accessible by Gemini based on intended use.
Control: East-West Traffic Security
Mitigation: Policy controls restrict or log cross-application data flows, limiting internal exposure.
Control: Multicloud Visibility & Control
Mitigation: Centralized visibility and automated detection would alert on anomalous or repeated model-driven export attempts.
Control: Egress Security & Policy Enforcement
Mitigation: Outbound data flows are filtered and blocked if unauthorized or anomalous.
Incident detection enables timely remediation before further damage.
Impact at a Glance
Affected Business Functions
- Scheduling
- Communication
- Data Management
Estimated downtime: 3 days
Estimated loss: $500,000
Unauthorized access to sensitive calendar data, including private meeting details and potentially confidential information, leading to privacy breaches and potential regulatory penalties.
Recommended Actions
Key Takeaways & Next Steps
- • Implement CNSF-aligned Zero Trust Segmentation to restrict AI agent access to sensitive SaaS data using identity and context-based policies.
- • Enable multicloud visibility and anomaly detection to baseline typical AI/automation behaviors and alert on unsanctioned data flows.
- • Apply inline policy enforcement capable of prompt-aware filtering to mitigate future prompt injection and logic abuse attacks.
- • Strengthen egress controls to prevent SaaS or agent-driven exfiltration through indirect channels, including calendar events and descriptions.
- • Regularly review AI and SaaS integrations for cross-context data exposures, and test defenses against natural language–based semantic attacks.



