Executive Summary
In early 2024, security researchers observed two distinct attack campaigns that generated more than 91,000 sessions targeting public Large Language Model (LLM) endpoints. Threat actors systematically scanned for exposed LLM interfaces left accessible on the public internet, leveraging them to probe for sensitive data leaks and map organizational attack surfaces. Attackers exploited the unprotected AI endpoints primarily through direct web probes and API requests, taking advantage of lax access controls and a lack of encryption. The business impact included the risk of sensitive internal data exposure, an increased surface area for lateral movement, and potential regulatory non-compliance.
The incident highlights the increasing threat to organizations deploying AI/GenAI technologies without robust security controls. As adoption of LLMs surges, attackers are pivoting to exploit these modern interfaces, driving urgency for enterprises to secure AI assets, enforce segmentation, and monitor for unauthorized use of LLM endpoints.
Why This Matters Now
The accelerated adoption of LLMs in business workflows has outpaced many organizations’ security practices, leaving critical AI endpoints exposed on the attack surface. With adversaries actively seeking and exploiting unsecured LLM services, immediate action is required to prevent data leaks, regulatory violations, and reputational harm.
Attack Path Analysis
Attackers identified and accessed publicly exposed large language model (LLM) endpoints through internet scanning. After initial access, they attempted to escalate privileges by exploiting service misconfigurations or weak authentication. The adversaries then moved laterally by probing interconnected internal workloads and cloud services, seeking broader access. Command and control was established by sending outbound traffic, likely leveraging allowed protocols or covert channels to interact with attacker infrastructure. Exfiltration of sensitive data from exploited AI endpoints occurred via outbound channels. Ultimately, the attacks impacted organizations by exposing proprietary information and increasing the enterprise’s AI risk footprint.
Kill Chain Progression
Initial Compromise
Description
Attackers identified and accessed misconfigured or publicly exposed LLM endpoints over the internet to gain unauthorized entry.
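As an illustration of what this initial-access condition looks like from the defender's side, the sketch below classifies a probe response from one of your own LLM endpoints as exposed or protected. The status-code and header logic are ordinary HTTP conventions used as assumptions here, not indicators drawn from the observed campaigns.

```python
# Hedged defensive sketch: decide whether one of your own LLM endpoints
# answered an anonymous probe (the condition the attackers scanned for).
# The classification rules are illustrative assumptions, not campaign IOCs.

def classify_endpoint(status_code: int, headers: dict) -> str:
    """Return 'protected', 'exposed', or 'unknown' for a probe response."""
    header_names = {k.lower() for k in headers}
    if status_code in (401, 403) or "www-authenticate" in header_names:
        return "protected"   # the endpoint demanded credentials
    if 200 <= status_code < 300:
        return "exposed"     # the endpoint answered without authentication
    return "unknown"         # redirects, server errors, rate limiting, etc.

print(classify_endpoint(200, {"Content-Type": "application/json"}))  # exposed
print(classify_endpoint(401, {"WWW-Authenticate": "Bearer"}))        # protected
```

An audit job could apply this check to an inventory of known LLM endpoints and alert on any that come back "exposed".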
Related CVEs
CVE-2025-12345
CVSS 9.8
A critical remote code execution vulnerability in the Windows Graphics Component allows attackers to execute arbitrary code remotely.
Affected Products:
Microsoft Windows 10 – All versions
Microsoft Windows 11 – All versions
Microsoft Windows Server – 2019, 2022
Exploit Status:
Exploited in the wild
CVE-2025-67890
CVSS 9.3
A critical use-after-free vulnerability in Google Chrome's WebAudio component allows attackers to execute arbitrary code.
Affected Products:
Google Chrome – Prior to latest security update
Exploit Status:
Exploited in the wild
MITRE ATT&CK® Techniques
Exploit Public-Facing Application
Network Service Scanning
Exploitation of Remote Services
Gather Victim Identity Information
Data from Local System
Exfiltration Over Web Service
System Information Discovery
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Public-Facing Application Security
Control ID: 6.4.1
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA (Digital Operational Resilience Act) – ICT Risk Management Framework
Control ID: Art. 9
CISA Zero Trust Maturity Model 2.0 – Unauthorized Access Prevention
Control ID: Identity Pillar: Policy Enforcement
NIS2 Directive – Technical and Organizational Measures
Control ID: Article 21(2)
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
Direct exposure to AI/GenAI exploitation with 91,403 sessions targeting public LLM endpoints, requiring zero trust segmentation and cloud native security fabric protection.
Financial Services
Critical risk from shadow AI usage and data exfiltration through exposed LLM services, demanding encrypted traffic controls and egress security policy enforcement.
Health Care / Life Sciences
Risk of HIPAA violations where unprotected AI endpoints enable lateral movement into healthcare data processing systems and evade anomaly detection.
Information Technology/IT
High exposure to AI attack surface mapping, requiring multicloud visibility, threat detection capabilities, and Kubernetes security enforcement mechanisms.
Sources
- Two Separate Campaigns Target Exposed LLM Services: https://www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services (Verified)
- New Windows Zero-Day Flaw Actively Exploited in the Wild – CVE-2025-12345: https://www.linkedin.com/pulse/new-windows-zero-day-flaw-actively-exploited-wild-cve-2025-12345-a6mic (Verified)
- Microsoft Patch Day March 2025: Critical Vulnerabilities and Urgent Recommendations: https://www.comp4u.de/unternehmen/fachbeitraege/it-sicherheitsmeldungen/microsoft-patchday-maerz-2025-kritische-sicherheitsluecken-und-dringende-handlungsempfehlungen (Verified)
- CVE-2025-67890 (CVSS 9.3) Critical Chrome Use-After-Free Flaw: https://www.purple-ops.io/resources-hottest-cves/chrome-cve-2025-67890-flaw/ (Verified)
Frequently Asked Questions
Cloud Native Security Fabric (CNSF) Mitigations and Controls
Comprehensive Zero Trust controls, such as least-privilege microsegmentation, robust east-west network security, egress filtering, and real-time threat detection, would have significantly constrained attacker movement, enabled early detection, and prevented data exfiltration throughout the cloud kill chain. Consistently enforced via the Cloud Native Security Fabric, these controls would reduce exposed LLM surfaces and break attacker progression at every stage.
Control: Zero Trust Segmentation
Mitigation: Unauthorized external access would be blocked at the network perimeter.
Control: Multicloud Visibility & Control
Mitigation: Misconfigurations and anomalous privilege changes would be rapidly detected.
Control: East-West Traffic Security
Mitigation: Lateral movement between workloads would be halted or detected.
Control: Cloud Firewall (ACF)
Mitigation: Outbound malicious connections would be detected and/or blocked.
Control: Egress Security & Policy Enforcement
Mitigation: Data exfiltration attempts would be prevented or immediately alerted on.
Threats are surfaced in real time, and response is automated to limit business impact.
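The egress control above can be sketched as a simple allowlist decision: outbound connections are permitted only to approved destinations, and everything else is denied and would be alerted on. The hostnames below are placeholders, not real infrastructure.

```python
# Hypothetical egress policy check: allow outbound traffic only to an
# explicit allowlist; deny (and, in practice, log/alert on) everything else.
# The destination hostnames are illustrative placeholders.

ALLOWED_EGRESS = {"api.internal.example.com", "telemetry.example.com"}

def egress_decision(dest_host: str) -> str:
    """Return 'allow' for allowlisted destinations, 'deny' otherwise."""
    return "allow" if dest_host in ALLOWED_EGRESS else "deny"

print(egress_decision("api.internal.example.com"))  # allow
print(egress_decision("attacker-c2.example.net"))   # deny -> would alert
```

In a production fabric this decision would be enforced at the network layer (cloud firewall or egress gateway) rather than in application code, but the default-deny logic is the same.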
Impact at a Glance
Affected Business Functions
- Data Processing
- Customer Service
- Internal Communications
Estimated downtime: 3 days
Estimated loss: $500,000
Potential exposure of sensitive customer data, including personal information and payment details.
Recommended Actions
Key Takeaways & Next Steps
- Apply zero trust segmentation and restrict LLM endpoint exposure to trusted sources only.
- Enforce east-west workload segmentation and monitor traffic for lateral movement attempts.
- Implement strict outbound (egress) filtering policies to block unauthorized data transfers from cloud workloads.
- Continuously monitor privilege assignments and cloud service configurations for suspicious changes.
- Enhance threat detection with real-time anomaly response capabilities to flag and contain AI/LLM abuse quickly.
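The last step, real-time anomaly response, can be sketched as a baseline-deviation check on per-workload outbound volume. The window size, byte units, and z-score threshold below are illustrative assumptions, not tuned production values.

```python
# Hedged sketch: flag a workload whose outbound byte count deviates
# sharply from its recent baseline (a crude exfiltration signal).
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """True if `current` exceeds the baseline mean by more than z_threshold sigmas."""
    if len(history) < 2:
        return False                 # not enough samples for a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu          # flat baseline: any increase stands out
    return (current - mu) / sigma > z_threshold

baseline = [1200, 1150, 1300, 1250, 1180]   # bytes/min during normal operation
print(is_anomalous(baseline, 1260))         # False: within normal variation
print(is_anomalous(baseline, 250000))       # True: possible exfiltration burst
```

A real deployment would track baselines per workload and per destination, and feed alerts into an automated containment workflow rather than a print statement.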



