Executive Summary
In January 2026, two widely used Chrome extensions marketed as AI workflow assistants were discovered stealing ChatGPT and DeepSeek chat data from over 900,000 users. Threat actors leveraged the popularity of generative AI tools, distributing the malicious extensions through official and third-party repositories. Once installed, these extensions exfiltrated sensitive conversations and user data by intercepting traffic and bypassing standard browser security controls. The incident revealed significant vulnerabilities in the supply chain of browser add-ons and highlighted the ease with which infostealers can abuse trust in AI-powered productivity tools. Organizations and individuals relying on browser-based AI helpers were left exposed, with the compromised data raising regulatory and reputational concerns.
This breach underscores a growing trend where attackers target the workflows and integrations surrounding AI, rather than the AI models themselves. The rise in infostealers embedded within productivity tools calls for urgent improvements to extension vetting, zero trust segmentation, and East-West traffic security.
Why This Matters Now
As organizations rush to deploy AI tools and browser extensions into employee workflows, threat actors are exploiting these decentralized, poorly governed integrations to harvest sensitive data at scale. Immediate focus must shift from model security to the broader attack surface of workflow integrations, where zero trust and granular policy enforcement are often lacking.
Attack Path Analysis
Attackers initiated compromise by distributing malicious Chrome extensions masquerading as AI helpers, which were installed by workflow users. Escalation occurred as these extensions gained unauthorized access to browser session data. The adversary possibly leveraged east-west access or internal APIs to move laterally and target more sensitive cloud workflow assets. Command and control was maintained through regular outbound communication from the extensions to remote attacker infrastructure. The infostealer exfiltrated sensitive chat data and potentially other cloud-accessed information. Ultimately, the impact was the large-scale theft of organizational AI workflow data, risking data loss and privacy breaches.
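The regular outbound communication that sustained command and control is detectable as low-jitter beaconing. A minimal sketch of that heuristic follows; the timestamps, jitter threshold, and minimum event count are illustrative assumptions, not observed incident values:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1, min_events=5):
    """Flag a connection series as beacon-like when inter-arrival
    times are nearly constant (low jitter relative to the mean gap)."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg < max_jitter_ratio

# A compromised extension phoning home roughly every 60 s (synthetic data):
beacon = [0, 60.2, 119.9, 180.1, 240.0, 300.3]
# Ordinary browsing traffic with irregular gaps:
browsing = [0, 5, 47, 52, 300, 310]
print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

Real detection pipelines weigh additional context (destination reputation, payload size, process lineage), but low-variance timing alone separates the two series above.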
Kill Chain Progression
Initial Compromise
Description
Malicious Chrome extensions posing as AI helpers were distributed and installed by users, providing attackers with a foothold in workflow environments.
MITRE ATT&CK® Techniques
Drive-by Compromise (T1189)
User Execution: Malicious File (T1204.002)
Browser Extensions (T1176)
Input Capture: Keylogging (T1056.001)
Email Collection: Email Forwarding Rule (T1114.003)
Data from Local System (T1005)
Exfiltration Over C2 Channel (T1041)
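Several of these techniques, notably Browser Extensions (T1176) and Input Capture, depend on the broad permissions an extension declares in its manifest, which makes manifest review a practical vetting step. A sketch of such a check; the permission sets and sample manifest below are illustrative assumptions, not Chrome Web Store review criteria:

```python
# Manifest V3 permission combinations that let an extension intercept
# page content or traffic (illustrative vetting heuristic only).
RISKY_HOSTS = {"<all_urls>", "*://*/*", "https://*/*"}
RISKY_PERMS = {"webRequest", "scripting", "tabs", "cookies"}

def risky_permissions(manifest: dict) -> set:
    """Return the subset of declared permissions worth manual review."""
    found = set(manifest.get("permissions", [])) & RISKY_PERMS
    if RISKY_HOSTS & set(manifest.get("host_permissions", [])):
        found.add("broad host access")
    return found

# Hypothetical manifest resembling the malicious "AI helper" pattern:
manifest = {
    "manifest_version": 3,
    "name": "AI Helper",
    "permissions": ["scripting", "storage", "cookies"],
    "host_permissions": ["<all_urls>"],
}
print(sorted(risky_permissions(manifest)))
# ['broad host access', 'cookies', 'scripting']
```

An extension that can script every page a user visits can read ChatGPT and DeepSeek conversations directly from the DOM, which is why broad host access deserves the most scrutiny.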
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Logging and Monitoring of Access
Control ID: 10.2.1
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (Digital Operational Resilience Act) – ICT Risk Management
Control ID: Art. 10(1)
CISA ZTMM 2.0 – Asset Inventory and Visibility
Control ID: Asset Management: 1.1
NIS2 Directive – Cybersecurity Risk Management and Security Measures
Control ID: Art. 21(2)(d)
Sector Implications
Industry-specific impact of the incident, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI workflow infostealers targeting Chrome extensions expose sensitive development data, requiring enhanced egress security and zero trust segmentation for AI copilot integrations.
Financial Services
Malicious AI extensions stealing ChatGPT conversations threaten confidential financial workflows, necessitating encrypted traffic monitoring and anomaly detection for AI tool usage.
Health Care / Life Sciences
Infostealer attacks on AI assistants risk HIPAA violations through stolen patient data conversations, demanding secure hybrid connectivity and threat detection capabilities.
Legal Services
AI copilot data theft compromises attorney-client privileged communications in browser workflows, requiring multicloud visibility and policy enforcement for shadow AI usage.
Sources
- Model Security Is the Wrong Frame – The Real Risk Is Workflow Security — https://thehackernews.com/2026/01/model-security-is-wrong-frame-real-risk.html (Verified)
- Chrome malware steals ChatGPT and DeepSeek chats from 900k — https://cybernews.com/security/chrome-extensions-steal-chatgpt-data/ (Verified)
- 900K Users Compromised: Malicious AI Chrome Extensions Steal ChatGPT and DeepSeek Conversations — https://astrix.security/learn/blog/900k-users-compromised-malicious-ai-chrome-extensions-steal-chatgpt-and-deepseek-conversations/ (Verified)
- Malicious Chrome extensions with 900,000 users steal AI chats — https://cyberinsider.com/malicious-chrome-extensions-with-900000-users-steal-ai-chats/ (Verified)
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Applying CNSF and Zero Trust controls such as east-west segmentation, egress filtering, inline threat detection, and encrypted traffic enforcement would have disrupted key attack steps: blocking lateral spread, detecting malicious C2 traffic, and stopping unapproved exfiltration of sensitive workflow data.
Control: Threat Detection & Anomaly Response
Mitigation: High-confidence alerting on anomalous extension behavior.
Control: Zero Trust Segmentation
Mitigation: Limits the extension's network reach and data access privileges.
Control: East-West Traffic Security
Mitigation: Blocks unauthorized internal lateral communication attempts.
Control: Cloud Firewall (ACF) & Inline IPS (Suricata)
Mitigation: Detects and blocks suspicious outbound connections and known C2 channels.
Control: Egress Security & Policy Enforcement
Mitigation: Prevents unapproved data exfiltration to external hosts.
Control: Encryption in Transit
Mitigation: Minimizes exposure and utility of stolen data by enforcing encryption in transit.
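As an illustration of the inline IPS control, a Suricata-style rule can flag TLS connections whose SNI matches a known C2 host. The domain and SID below are placeholders, not indicators from this incident:

```
# Illustrative Suricata rule; c2.example.net and sid 1000001 are
# placeholders, not observed indicators of compromise.
alert tls $HOME_NET any -> $EXTERNAL_NET any (msg:"Possible AI-extension C2 via TLS SNI"; tls.sni; content:"c2.example.net"; nocase; classtype:trojan-activity; sid:1000001; rev:1;)
```

In practice such signatures are paired with threat-intelligence feeds so the matched domains stay current.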
Impact at a Glance
Affected Business Functions
- Research and Development
- Customer Support
- Product Management
Estimated downtime: 5 days
Estimated loss: $500,000
The malicious Chrome extensions exfiltrated sensitive AI chat conversations and browsing data, potentially exposing proprietary code, business strategies, and personal information. This data could be exploited for corporate espionage, identity theft, or targeted phishing campaigns.
Recommended Actions
Key Takeaways & Next Steps
- Deploy Zero Trust Segmentation to isolate user, SaaS, and workload resources, limiting lateral access from compromised endpoints.
- Enforce comprehensive egress filtering and FQDN controls to immediately block illicit data exfiltration attempts from browser or workload channels.
- Integrate inline IPS and cloud-native firewall solutions for deep packet inspection and real-time threat signature matching on both north-south and east-west cloud traffic.
- Mandate high-performance encryption for all data in transit between users, workloads, and cloud/SaaS services to mitigate exposure if interception occurs.
- Implement centralized visibility and anomaly detection for rapid detection and response to abnormal process and network behaviors linked to rogue browser extensions or infostealers.
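The egress-filtering and FQDN-control recommendations above reduce, at their core, to a default-deny allowlist check. A minimal sketch; the allowlist contents are illustrative, and `update.company.example` is a hypothetical internal domain:

```python
# Default-deny FQDN egress check: a destination is allowed only if it
# matches an approved domain or one of its subdomains. Real policy
# engines evaluate far richer context (identity, workload, time, risk).
ALLOWED_FQDNS = {"api.openai.com", "chat.deepseek.com", "update.company.example"}

def egress_allowed(fqdn: str) -> bool:
    """Normalize the name, then match it against the allowlist."""
    fqdn = fqdn.lower().rstrip(".")
    return any(fqdn == d or fqdn.endswith("." + d) for d in ALLOWED_FQDNS)

print(egress_allowed("api.openai.com"))          # True: approved destination
print(egress_allowed("exfil.attacker.example"))  # False: denied by default
```

Note the `endswith("." + d)` guard: matching on a bare suffix would let `evilapi.openai.com.attacker.net`-style lookalike names slip through.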



