The Containment Era is here. →Explore

Executive Summary

In September 2025, Notion experienced a security incident after releasing version 3.0 with integrated AI agents. Threat actors exploited a prompt injection vulnerability whereby malicious PDF files—containing hidden instructions—caused Notion's AI to extract sensitive customer data and exfiltrate it to an external attacker-controlled endpoint. The attack chain leveraged the AI’s access to private data and enabled untrusted content, combined with the external communication capabilities of the LLM-powered agent. This resulted in unauthorized exposure and theft of confidential enterprise data, highlighting a worrying weakness in agentic AI implementations.

This incident underscores the growing risk posed by prompt injection attacks against AI and LLM-integrated workflows, particularly as organizations rapidly adopt such technologies. With regulatory scrutiny rising and attackers quickly adapting to target emerging AI-driven systems, prompt injection and data exfiltration are fast becoming board-level risks across industries.

Why This Matters Now

AI-powered productivity platforms like Notion are widely used for storing and managing sensitive business data. As more organizations integrate agentic AI, vulnerabilities such as prompt injection create urgent and potent risks of data theft. Regulatory bodies and security leaders must address these gaps before AI adoption outpaces defensive measures.

Attack Path Analysis

Related CVEs

MITRE ATT&CK® Techniques

Potential Compliance Exposure

Sector Implications

Sources

Frequently Asked Questions

The incident exposed weaknesses in Zero Trust segmentation, egress policy enforcement, and visibility controls, impacting compliance with standards like HIPAA, PCI DSS, and NIST frameworks.

Cloud Native Security Fabric Mitigations and ControlsCNSF

Zero Trust CNSF controls—such as network segmentation, egress policy enforcement, and real-time threat detection—would have restricted unauthorized east-west and outbound communications, reducing the blast radius and minimizing data loss. Segmentation of workloads, centralized visibility, and enforcement of outbound data policies are critical in constraining LLM-based prompt injection attacks in multi-cloud environments.

Initial Compromise

Control: Zero Trust Segmentation

Mitigation: Prevents untrusted external content from reaching sensitive processing environments.

Privilege Escalation

Control: Zero Trust Segmentation

Mitigation: Limits scope of data accessible by AI processes based on least privilege.

Lateral Movement

Control: East-West Traffic Security

Mitigation: Detects and blocks suspicious internal communication attempts from compromised AI agents.

Command & Control

Control: Egress Security & Policy Enforcement

Mitigation: Blocks unauthorized outbound traffic to unapproved external domains.

Exfiltration

Control: Cloud Firewall (ACF) & Inline IPS (Suricata)

Mitigation: Detects and stops suspicious outbound payloads and exfiltration attempts.

Impact (Mitigations)

Enables rapid detection and automated response to policy violations and anomalous AI-driven behaviors.

Impact at a Glance

Affected Business Functions

  • Data Management
  • Customer Relationship Management
Operational Disruption

Estimated downtime: 3 days

Financial Impact

Estimated loss: $500,000

Data Exposure

Potential exposure of sensitive client information, including names, company details, and annual recurring revenue (ARR), due to prompt injection vulnerabilities in AI agents.

Recommended Actions

  • Implement Zero Trust Segmentation to isolate AI/ML workloads and protect sensitive data stores from unauthorized access.
  • Enforce granular egress filtering using FQDN policies to prevent AI agents from communicating with untrusted or attacker-controlled destinations.
  • Deploy east-west traffic security and microsegmentation to monitor and restrict lateral movement between cloud workloads.
  • Enable threat detection and anomaly response capabilities to alert on unusual access patterns, AI agent behaviors, and exfiltration attempts.
  • Continuously review and refine security policies to adapt to evolving AI/ML attack techniques and ensure compliance with multi-cloud data governance.

Secure the Paths Between Cloud Workloads

A cloud-native security fabric that enforces Zero Trust across workload communication—reducing attack paths, compliance risk, and operational complexity.

Cta pattren Image