Executive Summary
In early 2024, cybersecurity researchers from Unit 42 uncovered a series of novel prompt injection attack vectors targeting applications built on the Model Context Protocol (MCP), an emerging technology that connects large language models (LLMs) to external data sources and tools. Threat actors exploited weaknesses in MCP sampling to inject malicious prompts, enabling sensitive data exfiltration, command execution, and unauthorized access to downstream APIs. The compromised LLM applications posed significant risks across industries utilizing MCP to enhance automation and efficiency, ultimately raising concerns over AI/ML-powered business processes. The attack highlighted urgent visibility, policy enforcement, and segmentation shortfalls in cloud-native environments.
The MCP prompt injection incident underscores a surge in AI-driven threats, particularly as generative AI is rapidly being integrated into enterprise workflows. Regulatory bodies and CISOs now place a premium on robust framework adherence and continuous monitoring as generative AI vulnerabilities and supply-chain risks multiply.
Why This Matters Now
As organizations accelerate adoption of large language models and the Model Context Protocol, attackers are quickly evolving techniques to exploit prompt injection and weak context handling. With AI applications mediating critical connectivity, urgent attention to AI/ML security controls and compliance is essential to prevent supply-chain risks, data leakage, and operational disruptions.
Attack Path Analysis
Attackers exploited the Model Context Protocol (MCP) integration to introduce malicious prompt injections, granting unauthorized model access. Through these malicious inputs, they gained elevated privileges or access tokens to manipulate or expand application permissions. Once inside, adversaries leveraged weak east-west controls to pivot laterally within the connected cloud environment or ML infrastructure. External command and control was established over permitted application traffic flows, using covert channels or egress paths. Sensitive data and model artifacts were then exfiltrated through unmonitored outbound channels. Finally, attackers impacted the environment by manipulating model outputs, poisoning data, or disrupting business operations through the compromised LLM supply chain.
Kill Chain Progression
Initial Compromise
Description
Attackers delivered crafted prompt injections via the Model Context Protocol (MCP), exploiting unvalidated inputs to gain initial access to LLM-integrated cloud applications.
Related CVEs
CVE-2025-54073
CVSS 9.8A command injection vulnerability in mcp-package-docs allows remote code execution via unsanitized input parameters.
Affected Products:
sammcj mcp-package-docs – < 0.1.27
Exploit Status:
no public exploitCVE-2025-64109
CVSS 9A vulnerability in Cursor CLI Beta allows remote code execution through malicious MCP configurations.
Affected Products:
cursor Cursor CLI – <= 2025.09.17-25b418f
Exploit Status:
no public exploitCVE-2025-66401
CVSS 9.8Command injection in MCP Watch's cloneRepo method allows arbitrary command execution via unsanitized user input.
Affected Products:
kapilduraphe mcp-watch – <= 0.1.2
Exploit Status:
no public exploitCVE-2025-68143
CVSS 6.5Path traversal vulnerability in mcp-server-git allows unauthorized Git repository creation in arbitrary directories.
Affected Products:
modelcontextprotocol mcp-server-git – < 2025.9.25
Exploit Status:
no public exploitCVE-2024-10950
CVSS 9.8Prompt injection in gpt_academic's CodeInterpreter plugin allows remote code execution via untrusted code execution.
Affected Products:
binary-husky gpt_academic – <= 3.83
Exploit Status:
no public exploitCVE-2024-5565
CVSS 9.8Prompt injection in Vanna library's 'ask' method allows remote code execution via arbitrary Python code execution.
Affected Products:
Vanna Vanna library – <= 1.0.0
Exploit Status:
no public exploit
MITRE ATT&CK® Techniques
Data Manipulation: Stored Data Manipulation
User Execution: Malicious File
Command and Scripting Interpreter
Modify Authentication Process: Network Device Authentication
Access Token Manipulation
Exploit Public-Facing Application
Brute Force
Traffic Signaling
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Change and Vulnerability Management Processes
Control ID: 6.4.2
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (Digital Operational Resilience Act) – ICT Risk Management Framework
Control ID: Art. 9
NIS2 Directive – Security of Network and Information Systems
Control ID: Art. 21(2)
CISA Zero Trust Maturity Model 2.0 – Identity Verification and Access Controls
Control ID: Identity Pillar: Policy Enforcement and Continuous Monitoring
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI/ML security vulnerabilities in Model Context Protocol create critical prompt injection risks for LLM applications, requiring enhanced zero trust segmentation and threat detection capabilities.
Information Technology/IT
MCP sampling attack vectors threaten IT infrastructure through compromised external data connections, necessitating strengthened egress security policies and multicloud visibility controls.
Financial Services
Prompt injection attacks against LLM systems pose severe compliance risks under PCI and regulatory frameworks, demanding robust encrypted traffic protection and anomaly detection.
Health Care / Life Sciences
AI security vulnerabilities in healthcare LLM applications threaten HIPAA compliance and patient data integrity, requiring comprehensive east-west traffic security and policy enforcement.
Sources
- New Prompt Injection Attack Vectors Through MCP Samplinghttps://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/Verified
- CVE-2025-54073 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2025-54073Verified
- CVE-2025-64109 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2025-64109Verified
- CVE-2025-66401 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2025-66401Verified
- CVE-2025-68143 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2025-68143Verified
- CVE-2024-10950 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2024-10950Verified
- CVE-2024-5565 Detailhttps://nvd.nist.gov/vuln/detail/CVE-2024-5565Verified
Frequently Asked Questions
Cloud Native Security Fabric Mitigations and ControlsCNSF
Applying Zero Trust segmentation, egress security policies, traffic visibility, and anomaly detection across cloud and AI/ML workloads would have restricted unauthorized access, blocked prompt-induced lateral movement, and prevented covert exfiltration attempts. CNSF-aligned controls detect abnormal behaviors in AI-driven environments and enforce least privilege at the network and workload level.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: Inline inspection and distributed policy controls detect and block malicious prompt-induced API calls.
Control: Zero Trust Segmentation
Mitigation: Microsegmentation and identity-based policies restrict privilege boundaries.
Control: East-West Traffic Security
Mitigation: Internal network segmentation and traffic filtering block unauthorized east-west movement.
Control: Threat Detection & Anomaly Response
Mitigation: Continuous behavioral baselining detects and alerts on C2 traffic anomalies.
Control: Egress Security & Policy Enforcement
Mitigation: Policy-based egress filtering and FQDN/application controls prevent unauthorized data exfiltration.
Pod-level segmentation and namespace enforcement limit the blast radius of successful impact.
Impact at a Glance
Affected Business Functions
- Software Development
- AI/ML Operations
Estimated downtime: 5 days
Estimated loss: $500,000
Potential exposure of sensitive code repositories and intellectual property due to unauthorized access.
Recommended Actions
Key Takeaways & Next Steps
- • Implement Zero Trust Segmentation and microsegmentation for all AI/ML and cloud workloads to prevent intra-cloud lateral movement.
- • Enforce granular egress controls and FQDN filtering to block unauthorized data exfiltration from LLM-integrated environments.
- • Deploy Threat Detection & Anomaly Response with baselining to identify prompt injection misuse and covert C2 activity.
- • Require workload identity enforcement and namespace segmentation within Kubernetes clusters to minimize impact scope.
- • Establish continuous traffic visibility, policy automation, and real-time enforcement via CNSF-aligned controls across all cloud networks.



