Executive Summary
In February 2026, Anthropic, a U.S.-based AI company, reported that three Chinese AI firms—DeepSeek, Moonshot AI, and MiniMax—conducted large-scale distillation attacks to extract capabilities from its Claude AI model. These companies generated over 16 million interactions using approximately 24,000 fraudulent accounts, violating Anthropic's terms of service and regional access restrictions. The attacks focused on Claude's advanced features, including reasoning, coding, and tool use, aiming to enhance their own AI models without proper authorization.
This incident underscores the escalating risks of intellectual property theft in the AI sector, particularly through distillation techniques. Such activities not only compromise proprietary technologies but also pose significant national security concerns, as models developed through illicit means may lack essential safety measures, potentially facilitating malicious applications.
Why This Matters Now
The rapid advancement and deployment of AI technologies have heightened the urgency to protect intellectual property and ensure the integrity of AI models. Unauthorized distillation attacks, like those conducted by DeepSeek, Moonshot AI, and MiniMax, highlight the need for robust security measures and international cooperation to prevent the misuse of AI capabilities and safeguard against potential national security threats.
Attack Path Analysis
Adversaries initiated the attack by creating fraudulent accounts to access the Claude AI model, bypassing regional restrictions. They then abused these accounts to interact extensively with the model, systematically eliciting its advanced capabilities. To evade detection, they distributed activity across thousands of coordinated accounts and proxy services, which also served as the command-and-control layer for sustained interaction with Claude. Exfiltration occurred as the adversaries captured and stored Claude's outputs to train their own AI models. The impact was the unauthorized acquisition of Claude's capabilities, creating a competitive disadvantage for Anthropic and broader security risks.
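The coordinated-account pattern described above is detectable in API telemetry. A minimal sketch of one such heuristic follows; the thresholds, field names, and logic are illustrative assumptions, not Anthropic's actual detection pipeline:

```python
from collections import defaultdict

def flag_suspicious_accounts(events, volume_threshold=10_000, shared_ip_min=5):
    """Flag accounts that both (a) generate unusually high query volume and
    (b) egress through an IP shared by many other accounts, a pattern
    consistent with a proxy fronting a coordinated account farm.

    events: iterable of (account_id, source_ip) tuples, one per API call.
    """
    calls_per_account = defaultdict(int)
    accounts_per_ip = defaultdict(set)
    for account_id, source_ip in events:
        calls_per_account[account_id] += 1
        accounts_per_ip[source_ip].add(account_id)

    # Accounts funneled through an IP shared by many other accounts
    # suggest proxy-based coordination rather than organic use.
    proxy_accounts = set()
    for ip, accounts in accounts_per_ip.items():
        if len(accounts) >= shared_ip_min:
            proxy_accounts |= accounts

    high_volume = {a for a, n in calls_per_account.items() if n >= volume_threshold}
    return high_volume & proxy_accounts
```

In practice either signal alone produces false positives (NAT gateways, legitimate heavy users); requiring both narrows the result to accounts that fit the distillation profile.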
Kill Chain Progression
Initial Compromise
Description
Adversaries created approximately 24,000 fraudulent accounts to access the Claude AI model, circumventing regional access restrictions.
MITRE ATT&CK® Techniques
Valid Accounts (T1078)
Proxy (T1090)
Brute Force (T1110)
Automated Exfiltration (T1020)
Data Manipulation (T1565)
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Prevent unauthorized access to system components and data
Control ID: 6.4.3
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA – ICT Risk Management Framework
Control ID: Article 5
CISA ZTMM 2.0 – Implement strong authentication mechanisms
Control ID: Identity and Access Management
NIS2 Directive – Security of Network and Information Systems
Control ID: Article 21
Sector Implications
Industry-specific impact of the incident, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI model theft threatens core intellectual property and competitive advantages, requiring enhanced egress security and zero trust segmentation to prevent capability extraction.
Computer/Network Security
Industrial-scale distillation attacks expose vulnerabilities in AI service protection, demanding advanced threat detection and multicloud visibility for model integrity preservation.
Defense/Space
Unprotected AI capabilities weaponized by foreign adversaries create national security risks, necessitating encrypted traffic controls and comprehensive anomaly response systems.
Government Administration
Chinese AI firms' capability extraction poses surveillance and offensive cyber operation threats, requiring policy enforcement and secure hybrid connectivity implementations.
Sources
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model (https://thehackernews.com/2026/02/anthropic-says-chinese-ai-firms-used-16.html)
- Anthropic accuses DeepSeek, other Chinese AI developers of 'industrial-scale' copying (https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropic-accuses-deepseek-other-chinese-ai-developers-of-industrial-scale-copying-claims-distillation-included-24-000-fraudulent-accounts-and-16-million-exchanges-to-train-smaller-models)
- Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports (https://techcrunch.com/2026/02/23/anthropic-accuses-chinese-ai-labs-of-mining-claude-as-us-debates-ai-chip-exports/)
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Aviatrix Zero Trust CNSF is pertinent to this incident as it could have limited the adversaries' ability to exploit the Claude AI model by enforcing strict segmentation and identity-aware policies, thereby reducing the attack surface and potential impact.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: The adversaries' ability to create and utilize fraudulent accounts may have been constrained, limiting unauthorized access to the Claude AI model.
Control: Zero Trust Segmentation
Mitigation: The adversaries' ability to escalate privileges and extract sensitive capabilities could have been limited, reducing the scope of unauthorized interactions.
Control: East-West Traffic Security
Mitigation: The adversaries' lateral movement within the network may have been constrained, limiting their ability to coordinate activities across multiple accounts.
Control: Multicloud Visibility & Control
Mitigation: The adversaries' ability to maintain command and control channels could have been limited, reducing the persistence of their operations.
Control: Egress Security & Policy Enforcement
Mitigation: The adversaries' ability to exfiltrate sensitive data may have been constrained, limiting the unauthorized transfer of Claude's outputs.
The overall impact of the attack could have been limited, reducing potential competitive disadvantages and security risks.
Impact at a Glance
Affected Business Functions
- AI Model Development
- Intellectual Property Management
- Regulatory Compliance
Estimated downtime: N/A
Estimated loss: N/A
Potential exposure of proprietary AI model outputs and capabilities.
Recommended Actions
Key Takeaways & Next Steps
- Implement Zero Trust Segmentation to restrict access based on identity and context, preventing unauthorized account creation.
- Enhance Threat Detection & Anomaly Response systems to identify and respond to unusual account behaviors indicative of fraudulent activities.
- Utilize Egress Security & Policy Enforcement to monitor and control outbound data flows, preventing unauthorized data exfiltration.
- Deploy Multicloud Visibility & Control to gain comprehensive insights into network traffic and detect coordinated malicious activities.
- Apply Inline IPS (Suricata) to inspect and block malicious traffic patterns associated with unauthorized data extraction attempts.
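For the Suricata item above, a rate-based rule is one way to surface distillation-scale query volumes inline. The sketch below is a hypothetical example: the URI, threshold count, variable names, and SID are illustrative assumptions, not a tested production rule, and any real deployment would tune these against observed baselines:

```
# Hypothetical Suricata rule: alert when a single source sends more than
# 500 POSTs to a model-completion endpoint within 60 seconds.
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible model distillation - excessive API query rate"; flow:to_server,established; http.method; content:"POST"; http.uri; content:"/v1/messages"; threshold:type threshold, track by_src, count 500, seconds 60; classtype:policy-violation; sid:1000001; rev:1;)
```

Rate thresholds alone will not catch traffic spread across 24,000 accounts and proxy exits, so rules like this complement, rather than replace, the identity-aware and egress controls listed above.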



