Executive Summary
In January 2026, cybersecurity researchers uncovered two malicious Chrome extensions—'Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI' and 'AI Sidebar with Deepseek, ChatGPT, Claude, and more.'—that secretly exfiltrated ChatGPT and DeepSeek conversations, along with extensive browsing data, from over 900,000 users. These extensions masqueraded as legitimate browser tools but harvested sensitive data by scraping web pages and Chrome tabs, transmitting this information to attacker-controlled command-and-control servers every 30 minutes. This breach potentially exposed confidential business information, intellectual property, and user identities, underscoring the heightened risks posed by seemingly innocuous browser add-ons in enterprise environments.
The incident reflects a broader uptick in malicious (and even some ostensibly legitimate) browser extensions turning to 'prompt poaching': stealing users' interactions with AI chatbots. As AI adoption accelerates, organizations face new data exposure risks, demanding updated monitoring, awareness, and policy enforcement around browser extensions.
Why This Matters Now
Prompt poaching via browser extensions is rapidly becoming a common data exfiltration method targeting sensitive AI chat content. The widespread use of browser-based AI tools in corporate settings creates urgent risk, especially as attackers exploit trusted extension marketplaces. Organizations should review extension permissions and enforce controls to mitigate exposure.
Attack Path Analysis
Attackers initially compromised user systems through malicious Chrome extensions masquerading as legitimate AI helpers, tricking users into granting broad permissions. The extensions required no further privilege escalation, operating entirely within the granted browser context. Lateral movement was limited to the browser environment, where the extensions scraped conversations and tab data. The malware maintained periodic command-and-control (C2) connectivity with remote servers to receive configuration updates and exfiltration instructions, and sensitive AI chat content and browsing data were exfiltrated to attacker-controlled infrastructure every 30 minutes. The stolen data posed risks of corporate espionage, identity theft, and downstream phishing campaigns.
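The 30-minute exfiltration cadence described above is exactly the kind of fixed-interval "beaconing" that network-level anomaly detection can surface in proxy or flow logs. The following is a minimal illustrative sketch (not a production detector, and the thresholds are assumptions) that flags a series of outbound connections to a single host whose inter-arrival times are nearly constant:

```python
from statistics import mean, pstdev

def is_beaconing(timestamps, min_events=4, max_jitter_ratio=0.1):
    """Flag a connection series whose inter-arrival times are nearly
    constant, e.g. an extension phoning home every 30 minutes.
    `timestamps` are epoch seconds of outbound connections to one host;
    thresholds here are illustrative assumptions, not tuned values."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    # Low relative jitter across gaps suggests automated, timed callbacks
    # rather than human-driven browsing.
    return pstdev(gaps) / avg <= max_jitter_ratio

# Connections every ~1800 s (30 min) with small jitter vs. human-like traffic
beacon = [0, 1795, 3602, 5399, 7201]
human = [0, 120, 4000, 4100, 9000]
print(is_beaconing(beacon))  # True
print(is_beaconing(human))   # False
```

In practice a detector would also bucket by destination and ignore known-good update domains, but the core signal, near-zero variance in callback intervals, is what distinguishes timed C2 traffic from ordinary browsing.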
Kill Chain Progression
Initial Compromise
Description
Users were tricked into installing malicious Chrome extensions that posed as productivity or AI tools, then granted them broad permissions at install time.
MITRE ATT&CK® Techniques
Techniques are mapped for prompt-poaching infostealer browser extensions. This may be expanded with full STIX/TAXII enrichment later.
Browser Extensions (T1176)
Audio Capture (T1123)
Input Capture: Web Portal Capture (T1056.003)
Automated Collection (T1119)
Exfiltration Over Web Service: Exfiltration to Cloud Storage (T1567.002)
Command and Scripting Interpreter: JavaScript (T1059.007)
System Script Proxy Execution (T1216)
Indicator Removal on Host: Timestomp (T1070.006)
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS v4.0 – Sensitive Data Protection in Transmission
Control ID: 3.2.1
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (EU Digital Operational Resilience Act) – ICT Risk Management Obligations
Control ID: Article 9(1)
CISA Zero Trust Maturity Model 2.0 – Control Over End User Device Software
Control ID: Devices Pillar – Device Security
NIS2 Directive – Security of Supply Chain and System Use
Control ID: Article 21(2)(e)
GDPR – Security of Processing
Control ID: Article 32
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
Chrome extension infostealers targeting 900,000 users pose critical risks to AI conversations, intellectual property, and development workflows, and call for egress security controls.
Financial Services
Browser-based data exfiltration threatens client conversations, trading strategies, and confidential financial data shared through AI chatbots, and calls for enhanced endpoint protection.
Health Care / Life Sciences
Malicious extensions capturing AI conversations risk exposing patient data, research findings, and HIPAA-regulated communications, and call for zero trust segmentation and encrypted-traffic controls.
Legal Services
Attorney-client privileged communications conducted through AI tools face exposure via prompt poaching attacks, demanding strict egress filtering and anomaly detection capabilities.
Sources
- Two Chrome Extensions Caught Stealing ChatGPT and DeepSeek Chats from 900,000 Users – https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
- This new malware campaign is stealing chat logs via Chrome extensions – https://www.techradar.com/pro/security/this-new-malware-campaign-is-stealing-chat-logs-via-chrome-extensions
- Chrome extension malware steals ChatGPT and DeepSeek chats from 900k – https://cybernews.com/security/chrome-extensions-steal-chatgpt-data/
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Zero Trust segmentation, strict egress policy enforcement, and network-level anomaly detection could have contained or prevented the malicious browser extensions' ability to reach attacker infrastructure and exfiltrate sensitive data. CNSF controls such as egress filtering, east-west segmentation, and real-time threat detection are critical to limiting the exposure of sensitive corporate or AI-derived data to unauthorized external destinations.
Control: Multicloud Visibility & Control
Mitigation: Centralized visibility enables rapid detection of anomalous browser extension installations or shadow IT usage.
Control: Zero Trust Segmentation
Mitigation: Isolation of critical workloads or sensitive data zones restricts lateral access by compromised endpoints.
Control: East-West Traffic Security
Mitigation: Monitors and restricts unauthorized internal data flows between browser-accessible resources and sensitive services.
Control: Threat Detection & Anomaly Response
Mitigation: Detects suspicious outbound connection patterns and raises alerts for unauthorized external traffic.
Control: Egress Security & Policy Enforcement
Mitigation: Outbound data transfers to unapproved domains are blocked, containing data loss.
CNSF also provides autonomous inline enforcement and security automation to quickly quarantine affected resources.
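The egress-filtering control above amounts to an allow-list check on outbound destinations: traffic to anything outside an approved set of domains is blocked, which would have cut off the extensions' C2 callbacks. A minimal sketch of that check (the domain names and policy shape are illustrative assumptions, not a real product configuration):

```python
# Hypothetical egress allow-list: only explicitly approved domains
# (and their subdomains) may receive outbound traffic from endpoints.
APPROVED = {"chat.openai.com", "api.deepseek.com", "update.googleapis.com"}

def egress_allowed(fqdn: str, allowlist=APPROVED) -> bool:
    """Return True if `fqdn` matches an approved domain or a subdomain
    of one; anything else (e.g. an unknown C2 host) is blocked."""
    fqdn = fqdn.lower().rstrip(".")
    return any(fqdn == d or fqdn.endswith("." + d) for d in allowlist)

print(egress_allowed("chat.openai.com"))        # True
print(egress_allowed("exfil.attacker-c2.net"))  # False
```

Note the suffix match is anchored on a leading dot, so a lookalike such as `chat.openai.com.attacker.net` does not slip through; real deployments would layer this with threat-intelligence feeds and TLS inspection.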
Impact at a Glance
Affected Business Functions
- Research and Development
- Customer Support
- Internal Communications
Estimated downtime: 3 days
Estimated loss: $500,000
The malicious extensions exfiltrated sensitive data from ChatGPT and DeepSeek conversations, including proprietary code, business strategies, and personal information. This data exposure poses risks of corporate espionage, identity theft, and targeted phishing attacks.
Recommended Actions
Key Takeaways & Next Steps
- Enforce centralized visibility and monitoring of browser extension installations and SaaS integrations across all endpoints.
- Apply zero trust network segmentation policies to isolate sensitive AI interactions and business data from broad browser or user access.
- Implement strict egress filtering and URL/FQDN policies to block browser extensions from communicating with unauthorized or known malicious external servers.
- Deploy real-time threat detection and anomaly response for both cloud and endpoint communications to swiftly identify unusual outbound data flows.
- Regularly audit and retrain staff on the risks of browser-based social engineering, and require governance over third-party browser tools within corporate environments.
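As a starting point for the extension-audit step above, each installed Chrome extension declares its capabilities in a `manifest.json`; flagging the permissions that grant page-content and all-tab access (the capabilities the reported infostealers abused) is a simple triage pass. A minimal sketch, where the risky-permission set and sample manifest are illustrative assumptions:

```python
import json

# Permissions that let an extension read page content or all tabs --
# the kind of access the reported infostealers abused. This set is an
# illustrative starting point, not an exhaustive risk list.
RISKY = {"tabs", "scripting", "webRequest", "cookies", "<all_urls>"}

def risky_permissions(manifest_json: str) -> set:
    """Return the high-risk permissions/host patterns declared in a
    Chrome extension's manifest.json."""
    m = json.loads(manifest_json)
    declared = set(m.get("permissions", [])) | set(m.get("host_permissions", []))
    return declared & RISKY

# Hypothetical manifest resembling the reported sidebar-style extensions
sample = json.dumps({
    "name": "AI Sidebar",
    "permissions": ["tabs", "storage"],
    "host_permissions": ["<all_urls>"],
})
print(sorted(risky_permissions(sample)))  # ['<all_urls>', 'tabs']
```

In a fleet audit this check would run against every unpacked extension directory collected by endpoint management, with any non-empty result routed to review rather than auto-blocked, since many legitimate tools also request broad permissions.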



