Executive Summary
In December 2025, security researchers exposed that the popular Urban VPN Proxy browser extension—marketed for privacy—was actively harvesting and exfiltrating sensitive conversation data from over eight million users interacting with leading AI chatbot platforms such as ChatGPT, Claude, Gemini, and Copilot. The malicious behavior was introduced in versions released after July 2025, with the extension injecting scripts into browser sessions to intercept, package, and transmit users’ chatbot prompts, responses, and session metadata to servers operated by Urban VPN’s parent, BiScience, a known data broker. Users were not offered any meaningful way to disable this data collection besides uninstalling the extension, and the privacy disclosure was deeply buried within the setup process, leaving the majority unaware.
This incident underscores the growing risk posed by privacy-violating browser extensions, especially those with elevated reputations and millions of installations. As AI assistants become repositories for sensitive personal and corporate data, the implications of such data leaks—from regulatory compliance to business confidentiality—are amplified, driving urgent reassessment of browser extension governance and AI data security controls.
Why This Matters Now
The Urban VPN incident highlights the urgent need for organizations to scrutinize browser extension behavior and enforce stricter controls, since even highly rated or 'featured' tools can introduce major privacy risks. With AI chatbots increasingly embedded in business workflows, data harvested by bad actors or data brokers can result in sensitive data leakage, regulatory non-compliance, and reputational damage.
Attack Path Analysis
The attack began when users installed a seemingly legitimate browser extension (Urban VPN Proxy), granting it permissions to access browser data. The extension then used those elevated permissions to inject scripts and intercept AI chatbot traffic, escalating its capabilities. While this was not classic cloud lateral movement, the extension maintained a persistent presence across multiple browsers and devices. Collected AI conversation data was packaged and sent via outbound network requests to remote servers controlled by the threat actor (command and control), then exfiltrated en masse to external analytics domains. The ultimate impact was a massive privacy violation and the commoditization of sensitive business and personal information.
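The intercept-package-transmit pattern described above also suggests a network-side detection heuristic. A minimal sketch is given below; all domain names, payload field names, and thresholds are illustrative assumptions for this sketch, not indicators taken from the incident. The idea is to flag outbound POSTs whose destination is not an approved chatbot endpoint but whose payload carries conversation-shaped fields.

```python
# Sketch: flag outbound requests that look like AI-conversation exfiltration.
# All domains, markers, and field names here are illustrative assumptions.

# Legitimate chatbot endpoints; traffic here is the conversation itself.
APPROVED_EGRESS = {"chat.openai.com", "claude.ai",
                   "gemini.google.com", "copilot.microsoft.com"}

# Hypothetical payload keys that suggest a replayed conversation.
CONVERSATION_MARKERS = ("prompt", "response", "conversation_id")

def is_suspect_exfil(request: dict) -> bool:
    """Return True if an outbound POST to a non-approved domain
    carries multiple fields that look like chatbot conversation data."""
    if request.get("method") != "POST":
        return False
    if request.get("host") in APPROVED_EGRESS:
        return False  # traffic to the chatbot itself, not exfiltration
    payload = request.get("body", {})
    hits = sum(1 for key in CONVERSATION_MARKERS if key in payload)
    return hits >= 2  # require multiple markers to limit false positives

# Example: a POST replaying a captured conversation to a broker-style domain.
suspect = {
    "method": "POST",
    "host": "analytics.example-broker.net",  # hypothetical destination
    "body": {"prompt": "...", "response": "...", "conversation_id": "abc123"},
}
benign = {"method": "POST", "host": "claude.ai", "body": {"prompt": "..."}}
print(is_suspect_exfil(suspect), is_suspect_exfil(benign))  # True False
```

In practice this logic would live in a secure web gateway or egress proxy with TLS inspection, not in application code; the sketch only shows the decision rule.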
Kill Chain Progression
Initial Compromise
Description
Users unknowingly installed a malicious browser extension advertised as a VPN, which was approved in trusted app stores and given wide browser permissions.
MITRE ATT&CK® Techniques
Browser Extensions
JavaScript
Modify Registry
Adversary-in-the-Middle: ARP Cache Poisoning
Account Discovery: Domain Account
Automated Exfiltration
Exfiltration Over C2 Channel
Data from Information Repositories: Web-based Email
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
GDPR (General Data Protection Regulation) – Security of Processing
Control ID: Art. 32
PCI DSS 4.0 – Risk Assessment Processes
Control ID: 12.2.1
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
NIS2 Directive – Cybersecurity Risk Management Measures
Control ID: Art. 21
CISA Zero Trust Maturity Model 2.0 – Continuous Assessment of Device Integrity
Control ID: Devices Pillar – Device Management
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
Browser extension data harvesting exposes 8M users' AI conversations, compromising proprietary code discussions and development secrets through undetected background collection mechanisms.
Health Care / Life Sciences
Medical professionals sharing patient concerns with AI assistants face HIPAA violations as extension harvests sensitive healthcare conversations without proper consent disclosure.
Financial Services
Financial advisors discussing client details through AI chatbots risk regulatory breaches as harvested data includes financial information processed by data brokers.
Legal Services
Attorney-client privilege compromised when lawyers use AI assistants for case research while malicious extensions capture confidential legal communications and strategies.
Sources
- Browser Extension Harvests 8M Users' AI Chatbot Data: https://www.darkreading.com/endpoint-security/chrome-extension-harvests-ai-chatbot-data (Verified)
- This new malware campaign is stealing chat logs via Chrome extensions: https://www.techradar.com/pro/security/this-new-malware-campaign-is-stealing-chat-logs-via-chrome-extensions (Verified)
- Browser extensions with 8 million users collect extended AI conversations: https://arstechnica.com/security/2025/12/browser-extensions-with-8-million-users-collect-extended-ai-conversations/ (Verified)
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Zero Trust segmentation, robust egress policy enforcement, continuous traffic visibility, and anomaly detection at the network layer would have identified or blocked the malicious extension's outbound exfiltration and limited its impact. Applying least-privilege, egress filtering, and distributed CNSF controls enables rapid detection and restriction of unauthorized data flows, even when supply chain or app store trust is abused.
Control: Multicloud Visibility & Control
Mitigation: Enterprise-wide visibility would detect policy deviations such as unauthorized extensions.
Control: Zero Trust Segmentation
Mitigation: Isolation and segmentation of critical web application flows would mitigate the reach of malicious code.
Control: East-West Traffic Security
Mitigation: Restricts internal (east-west) movement of malicious or unauthorized traffic.
Control: Egress Security & Policy Enforcement
Mitigation: Outbound connections to unapproved domains would be blocked or closely monitored.
Control: Threat Detection & Anomaly Response
Mitigation: Anomalous high-volume or unusual outbound data transfers are detected in real time.
Distributed, inline enforcement autonomously mitigates threats and reduces systemic risk.
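As a concrete illustration of the "Egress Security & Policy Enforcement" control above, the following is a minimal default-deny FQDN allowlist check. The domain names are hypothetical, and real enforcement would sit in a firewall or secure web gateway rather than application code; this sketch only demonstrates the matching logic.

```python
from fnmatch import fnmatch  # shell-style wildcard matching from the stdlib

# Hypothetical egress policy: only these destinations may receive outbound traffic.
EGRESS_ALLOWLIST = ["*.corp.example.com", "chat.openai.com", "claude.ai"]

def egress_decision(fqdn: str) -> str:
    """Return 'allow' if the destination matches the allowlist, else 'block'."""
    fqdn = fqdn.lower().rstrip(".")
    if any(fnmatch(fqdn, pattern) for pattern in EGRESS_ALLOWLIST):
        return "allow"
    return "block"  # default-deny: unapproved domains are blocked and logged

print(egress_decision("claude.ai"))                 # allow
print(egress_decision("stats.databroker.example"))  # block
```

The default-deny posture is what matters here: an extension exfiltrating to a previously unseen analytics domain fails the allowlist check even if the domain has never appeared on a blocklist.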
Impact at a Glance
Affected Business Functions
- Data Privacy Compliance
- User Trust Management
- Legal and Regulatory Affairs
Estimated downtime: N/A
Estimated loss: $5,000,000
Unauthorized collection and potential sale of sensitive AI chatbot conversations, including personal and proprietary information, affecting approximately 8 million users.
Recommended Actions
Key Takeaways & Next Steps
- Implement network-layer egress filtering and FQDN/URL restrictions to prevent unauthorized outbound data flows from endpoints and SaaS sessions.
- Leverage centralized, multicloud visibility to audit browser extension usage, blocking unapproved or risky add-ons organization-wide.
- Enforce zero trust segmentation and least-privilege policies to restrict application and extension access to sensitive SaaS and AI services.
- Deploy real-time anomaly detection and baselining to rapidly identify unusual patterns of outbound traffic suggestive of data harvesting or exfiltration.
- Integrate cloud-native security fabric (CNSF) controls for distributed, automated enforcement, closing control gaps that exist beyond basic cloud perimeter measures.
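The baselining step in the recommendations above can be sketched with a simple statistical rule. The volumes and threshold are illustrative assumptions; production systems would use richer features than per-day byte counts, but the core idea of comparing each observation against a baseline built from earlier traffic is the same.

```python
from statistics import mean, stdev

def outbound_anomalies(daily_mb: list[float], z_threshold: float = 3.0) -> list[int]:
    """Flag days whose outbound volume deviates more than z_threshold standard
    deviations from a baseline built over all *earlier* days, so that a large
    exfiltration spike cannot inflate its own baseline."""
    flagged = []
    for i in range(2, len(daily_mb)):  # need at least 2 points for a stdev
        history = daily_mb[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(daily_mb[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical per-endpoint outbound volume (MB/day); the last day spikes
# the way bulk conversation exfiltration would.
volumes = [120, 110, 130, 125, 115, 118, 2400]
print(outbound_anomalies(volumes))  # [6]
```

Building the baseline only from prior observations avoids the masking problem where an extreme outlier inflates the standard deviation enough to hide itself.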



