Executive Summary
In March 2026, Cisco researchers identified a critical vulnerability in Anthropic's Claude Code AI coding assistant, where compromised memory files allowed attackers to persistently infect projects and sessions. This flaw enabled the insertion of hard-coded secrets into production code, selection of insecure packages, and propagation of these changes to other development team members. Anthropic has since addressed the issue, but the incident underscores the inherent risks associated with AI memory files and context data.
The exploitation of AI memory files highlights a growing trend where attackers target the persistent state of AI systems to manipulate outputs and maintain unauthorized access. This incident serves as a cautionary tale for organizations integrating AI tools, emphasizing the need for robust security measures to protect against such vulnerabilities.
Why This Matters Now
The exploitation of AI memory files highlights a growing trend where attackers target the persistent state of AI systems to manipulate outputs and maintain unauthorized access. This incident serves as a cautionary tale for organizations integrating AI tools, emphasizing the need for robust security measures to protect against such vulnerabilities.
Attack Path Analysis
An attacker exploited a vulnerability in Claude Code's memory handling to inject malicious instructions into the MEMORY.md file, achieving persistent control over the AI assistant. This allowed the attacker to escalate privileges by manipulating the AI's behavior to execute unauthorized commands. The compromised AI facilitated lateral movement by propagating insecure practices across multiple projects and sessions. The attacker established command and control by maintaining persistent influence over the AI's responses and actions. Sensitive data was exfiltrated through the AI's manipulated outputs and interactions. The impact included the introduction of hardcoded secrets into production code and the systematic weakening of security patterns across the codebase.
Kill Chain Progression
Initial Compromise
Description
The attacker exploited a vulnerability in Claude Code's memory handling to inject malicious instructions into the MEMORY.md file, achieving persistent control over the AI assistant.
Related CVEs
CVE-2026-25723
CVSS 6.5An authentication bypass flaw in Anthropic Claude Code allows attackers to write to restricted directories.
Affected Products:
Anthropic Claude Code – < 2.0.55
Exploit Status:
no public exploitCVE-2025-59536
CVSS 8.8A code injection flaw in Anthropic Claude Code allows remote code execution before trust confirmation.
Affected Products:
Anthropic Claude Code – < 1.0.111
Exploit Status:
no public exploitCVE-2025-55284
CVSS 7.5A vulnerability in Claude Code allows unauthorized file reading and transmission over the network without user consent.
Affected Products:
Anthropic Claude Code – < 1.0.4
Exploit Status:
no public exploitCVE-2025-54794
CVSS 9.1A path traversal vulnerability in Anthropic Claude Code allows attackers to access files outside the current working directory.
Affected Products:
Anthropic Claude Code – < 0.2.111
Exploit Status:
no public exploitCVE-2025-64755
CVSS 9.8A file write vulnerability in Claude Code allows attackers to bypass read-only validation and write to arbitrary files.
Affected Products:
Anthropic Claude Code – < 2.0.31
Exploit Status:
no public exploitCVE-2026-25722
CVSS 9.1A path traversal vulnerability in Anthropic Claude Code allows attackers to bypass write protection in sensitive directories.
Affected Products:
Anthropic Claude Code – < 2.0.57
Exploit Status:
no public exploitCVE-2026-35603
CVSS 7.3On Windows, Claude Code loads system-wide default configuration without validating directory ownership, allowing low-privileged users to load malicious configurations.
Affected Products:
Anthropic Claude Code – < 2.1.75
Exploit Status:
no public exploitCVE-2026-21852
CVSS 7.5An information disclosure vulnerability in Claude Code allows malicious repositories to exfiltrate API keys before users confirm trust.
Affected Products:
Anthropic Claude Code – < 2.0.65
Exploit Status:
no public exploitCVE-2025-54795
CVSS 9.8A remote code execution vulnerability in Anthropic Claude Code allows attackers to bypass confirmation prompts and execute untrusted commands.
Affected Products:
Anthropic Claude Code – < 1.0.20
Exploit Status:
no public exploit
MITRE ATT&CK® Techniques
Stored Data Manipulation
Runtime Data Manipulation
User Execution: Malicious Link
LLM Prompt Injection
AI Agent Context Poisoning: Memory
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Secure Software Development Practices
Control ID: 6.4.3
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA – ICT Risk Management Framework
Control ID: Article 5
CISA ZTMM 2.0 – Data Security
Control ID: 3.1
NIS2 Directive – Cybersecurity Risk Management Measures
Control ID: Article 21
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI memory poisoning vulnerabilities in development tools like Claude Code enable persistent compromise, malicious code injection, and supply chain attacks affecting software development workflows.
Information Technology/IT
Memory file manipulation in AI systems creates persistent attack vectors requiring enhanced monitoring, zero-trust controls, and specialized scanning tools for AI infrastructure protection.
Financial Services
AI agent memory vulnerabilities pose significant risks to automated trading, fraud detection systems, and customer service applications requiring strict compliance and data integrity controls.
Health Care / Life Sciences
Compromised AI memory files in healthcare applications could manipulate diagnostic recommendations, treatment protocols, and patient data processing while violating HIPAA compliance requirements.
Sources
- Bad Memories Still Haunt AI Agentshttps://www.darkreading.com/vulnerabilities-threats/bad-memories-haunt-ai-agentsVerified
- Anthropic's Model Context Protocol includes a critical remote code execution vulnerabilityhttps://www.tomshardware.com/tech-industry/artificial-intelligence/anthropics-model-context-protocol-has-critical-security-flaw-exposedVerified
- Anthropic confirms it leaked 512,000 lines of Claude Code source code - spilling some of its biggest secretshttps://www.techradar.com/pro/security/anthropic-confirms-it-leaked-512-000-lines-of-claude-code-source-code-spilling-some-of-its-biggest-secretsVerified
- CVE-2026-25723: Anthropic Claude Code Auth Bypass Flawhttps://www.sentinelone.com/vulnerability-database/cve-2026-25723/Verified
Frequently Asked Questions
Cloud Native Security Fabric Mitigations and ControlsCNSF
Aviatrix Zero Trust CNSF is pertinent to this incident as it could have limited the attacker's ability to escalate privileges, move laterally, and exfiltrate data by enforcing strict segmentation and identity-aware policies.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: The attacker's ability to exploit vulnerabilities in the AI assistant's memory handling could have been constrained, reducing the likelihood of achieving persistent control.
Control: Zero Trust Segmentation
Mitigation: The attacker's ability to escalate privileges and execute unauthorized commands could have been limited, reducing the scope of unauthorized actions.
Control: East-West Traffic Security
Mitigation: The attacker's ability to move laterally across multiple projects and sessions could have been constrained, reducing the spread of insecure practices.
Control: Multicloud Visibility & Control
Mitigation: The attacker's ability to maintain persistent influence over the AI's responses and actions could have been limited, reducing the effectiveness of command and control.
Control: Egress Security & Policy Enforcement
Mitigation: The attacker's ability to exfiltrate sensitive data through the AI's manipulated outputs and interactions could have been constrained, reducing data loss.
The attacker's ability to introduce hardcoded secrets and weaken security patterns could have been limited, reducing the overall impact on the codebase.
Impact at a Glance
Affected Business Functions
- Software Development
- Code Review
- Continuous Integration/Continuous Deployment (CI/CD)
Estimated downtime: 7 days
Estimated loss: $500,000
Potential exposure of proprietary source code and internal development tools.
Recommended Actions
Key Takeaways & Next Steps
- • Implement Zero Trust Segmentation to restrict AI assistants' access to critical systems and data.
- • Enhance Threat Detection & Anomaly Response capabilities to identify and respond to unusual AI behaviors.
- • Apply Inline IPS (Suricata) to detect and prevent exploitation attempts targeting AI systems.
- • Utilize Multicloud Visibility & Control to monitor AI interactions across different environments.
- • Regularly audit and sanitize AI memory files to prevent persistent compromises.



