Executive Summary
In early June 2024, security researchers revealed an active campaign—dubbed the Bizarre Bazaar operation—where threat actors systematically scanned for and exploited publicly exposed Large Language Model (LLM) service endpoints. Attackers hijacked these AI/ML endpoints by bypassing inadequate API controls and leveraging unsecured cloud configurations, enabling unauthorized access to advanced AI resources. Compromised infrastructure became part of an underground market offering illicit AI compute power, leading to business risks ranging from intellectual property leakage to tool misuse and service disruption for impacted organizations.
This incident spotlights the growing exploitation of AI infrastructure, with attackers rapidly adopting novel tactics as organizations rush to deploy LLMs. Weak segmentation, lack of egress controls, and poor visibility have left many organizations vulnerable to sophisticated abuse, elevating urgency for robust enterprise AI security and compliance measures.
Why This Matters Now
The rapid adoption of LLMs in enterprise environments has outpaced security readiness, making exposed AI endpoints a prime target for cybercriminals. With attackers increasingly commercializing hijacked AI resources and regulatory scrutiny intensifying, organizations must urgently assess and secure their AI/ML infrastructure to mitigate evolving risks.
Attack Path Analysis
Attackers discovered and scanned for exposed LLM endpoints lacking sufficient access controls, exploiting misconfigurations to gain initial access. Upon entry, they leveraged weak isolation and identity management to escalate privileges within cloud AI infrastructure. The attackers attempted lateral movement between services and workloads, seeking to expand their access footprint. Subsequently, they established C2 channels using cloud-native protocols to maintain persistence and coordinate further actions. Sensitive data and AI assets were exfiltrated over unauthorized outbound channels. Ultimately, the attackers monetized access or disrupted AI services, potentially degrading business operations.
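To make the initial-access step concrete, the sketch below shows how a defender might audit an inventory of their own LLM service URLs for endpoints that answer without credentials. This is a minimal illustration, not a definitive implementation: the endpoint list is hypothetical, and the `/v1/models` probe path is an assumption based on OpenAI-compatible servers (such as vLLM); other stacks expose different paths.

```python
"""Minimal sketch: flag LLM endpoints that respond without authentication.

Assumptions (adjust for your environment):
- ENDPOINTS is your own, hypothetical inventory of LLM service base URLs.
- /v1/models is used as a probe path because many OpenAI-compatible
  servers (e.g., vLLM) expose it; other stacks may use different paths.
"""
import requests

ENDPOINTS = [
    "https://llm-gateway.internal.example.com",  # hypothetical inventory entries
    "https://ai-dev.example.com",
]

PROBE_PATH = "/v1/models"

def is_unauthenticated(base_url: str) -> bool:
    """Return True if the endpoint answers the probe with no credentials supplied."""
    try:
        resp = requests.get(base_url.rstrip("/") + PROBE_PATH, timeout=5)
    except requests.RequestException:
        return False  # unreachable endpoints would be reported separately in practice
    # A 200 response with no auth header supplied suggests the endpoint is exposed;
    # 401/403 indicates some access control is in place.
    return resp.status_code == 200

if __name__ == "__main__":
    for url in ENDPOINTS:
        state = "EXPOSED (no auth required)" if is_unauthenticated(url) else "requires auth or unreachable"
        print(f"{url}: {state}")
```

Running a check like this against a maintained asset inventory, rather than waiting for external scanners to find the gap first, directly addresses the discovery stage described above.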
Kill Chain Progression
Initial Compromise
Description
Attackers scanned for and targeted publicly exposed LLM service endpoints lacking network or identity-based restrictions to gain unauthorized access.
Related CVEs
CVE-2025-59146
CVSS: 8.5
An authenticated Server-Side Request Forgery (SSRF) vulnerability in New API allows attackers to coerce the server into sending requests to arbitrary internal or external services.
Affected Products: New API – < 0.9.0.5
Exploit Status: proof of concept

CVE-2025-63390
CVSS: 5.3
An authentication bypass vulnerability in AnythingLLM v1.8.5 allows unauthenticated remote attackers to enumerate and retrieve detailed information about all configured workspaces.
Affected Products: AnythingLLM – 1.8.5
Exploit Status: no public exploit

CVE-2025-62426
CVSS: 6.5
A vulnerability in vLLM allows authenticated attackers to cause a denial of service by manipulating the chat_template_kwargs parameter in chat completion and tokenization endpoints.
Affected Products: vLLM – 0.5.5 to < 0.11.1
Exploit Status: no public exploit

CVE-2026-21445
CVSS: 9.1
Critical API endpoints in Langflow lack authentication checks, allowing unauthenticated attackers to fully compromise the instance, steal API keys, and exfiltrate server files.
Affected Products: Langflow – < 1.3.0
Exploit Status: exploited in the wild

CVE-2024-8251
CVSS: 5.3
A vulnerability in AnythingLLM allows for Prisma injection in the API endpoint "/embed/:embedId/stream-chat", enabling attackers to expose all data from the table.
Affected Products: AnythingLLM – < 1.2.2
Exploit Status: proof of concept

CVE-2024-12779
CVSS: 7.5
A Server-Side Request Forgery (SSRF) vulnerability in Ragflow allows attackers to specify arbitrary URLs, potentially accessing and reading contents from internal systems.
Affected Products: Ragflow – 0.12.0
Exploit Status: proof of concept

CVE-2024-3152
CVSS: 8.8
Multiple vulnerabilities in AnythingLLM allow attackers to escalate privileges, read and delete arbitrary files, and perform SSRF attacks due to improper input validation.
Affected Products: AnythingLLM – < 1.0.0
Exploit Status: no public exploit

CVE-2024-3149
CVSS: 8.8
A Server-Side Request Forgery (SSRF) vulnerability in AnythingLLM's upload link feature allows attackers to perform actions such as internal port scanning and accessing internal web applications.
Affected Products: AnythingLLM – < 1.0.0
Exploit Status: no public exploit
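Several of the CVEs above (CVE-2025-59146, CVE-2024-12779, CVE-2024-3149) are SSRF issues in which a user-supplied URL is fetched server-side. As a hedged illustration of the underlying mitigation, the sketch below validates an outbound URL against an allowlist of hosts and rejects targets that resolve to private or link-local addresses. It is a simplified example under assumed hostnames, not the patch shipped by any of the affected projects.

```python
"""Minimal SSRF guard sketch: allowlist hosts and reject internal targets.

Illustrative only; the allowlisted hosts are hypothetical, and real deployments
also need to handle redirects, DNS rebinding, and IPv6 edge cases.
"""
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example-partner.com", "files.example.com"}  # assumption

def is_safe_outbound_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    # Resolve the host and reject anything that lands on an internal address.
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

# A URL-fetching handler would call this check before issuing any request.
print(is_safe_outbound_url("http://169.254.169.254/latest/meta-data/"))  # False: not allowlisted
print(is_safe_outbound_url("https://api.example-partner.com/v1/data"))   # True if it resolves publicly
```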
MITRE ATT&CK® Techniques
Exploit Public-Facing Application
External Remote Services
Valid Accounts
Modify Authentication Process: Network Device Authentication
Exfiltration Over Web Service
Resource Hijacking
Exploitation of Remote Services
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS v4.0 – Define and Implement User Identification and Authentication
Control ID: 8.1.1
NYDFS 23 NYCRR 500 – Access Privileges
Control ID: 500.07
DORA – ICT Risk Management
Control ID: Article 10.1
CISA Zero Trust Maturity Model 2.0 – Strong Authentication for Devices and Services
Control ID: Identity Pillar: Device and Endpoint Authentication
NIS2 Directive – Incident Handling Capabilities
Control ID: Article 21(2)(d)
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Computer Software/Engineering
AI/ML infrastructure compromise targeting exposed LLM endpoints threatens software development pipelines, agentic AI systems, and cloud-native security fabric implementations requiring zero trust segmentation.
Financial Services
Bizarre Bazaar operation exploiting LLM endpoints poses data exfiltration risks through shadow AI usage, requiring enhanced egress security and compliance with PCI standards.
Information Technology/IT
Hijacked LLM service endpoints enable lateral movement and privilege escalation across multicloud environments, demanding encrypted traffic controls and Kubernetes security enforcement mechanisms.
Computer/Network Security
Commercial exploitation of AI infrastructure exposes security vendors to prompt injection attacks and unauthorized model access, necessitating inline IPS and threat detection capabilities.
Sources
- Hackers hijack exposed LLM endpoints in Bizarre Bazaar operation – https://www.bleepingcomputer.com/news/security/hackers-hijack-exposed-llm-endpoints-in-bizarre-bazaar-operation/ (Verified)
- CVE-2025-59146 Detail – https://nvd.nist.gov/vuln/detail/CVE-2025-59146 (Verified)
- CVE-2025-63390 Detail – https://nvd.nist.gov/vuln/detail/CVE-2025-63390 (Verified)
- CVE-2025-62426 - Exploits & Severity - Feedly – https://feedly.com/cve/CVE-2025-62426 (Verified)
- Langflow RCE Flaw Actively Exploited: CISA Urges Immediate Patch – https://dailysecurityreview.com/security-spotlight/langflow-rce-flaw-actively-exploited-cisa-urges-immediate-patch/ (Verified)
- CVE-2024-8251 - Exploits & Severity - Feedly – https://feedly.com/cve/CVE-2024-8251 (Verified)
- CVE-2024-12779 - Exploits & Severity - Feedly – https://feedly.com/cve/CVE-2024-12779 (Verified)
- CVE-2024-3152 Detail – https://nvd.nist.gov/vuln/detail/CVE-2024-3152 (Verified)
- CVE-2024-3149 Detail – https://nvd.nist.gov/vuln/detail/CVE-2024-3149 (Verified)
Frequently Asked Questions
Cloud Native Security Fabric (CNSF) Mitigations and Controls
This incident clearly demonstrates Zero Trust and CNSF relevance: stronger segmentation, identity controls, workload isolation, and egress governance could have prevented or detected unauthorized access and lateral movement through cloud AI services. Proper enforcement of these controls would constrain attacker actions at each stage, limiting both the blast radius and potential data loss.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: Access attempt would be blocked or denied at the perimeter.
Control: Zero Trust Segmentation
Mitigation: Privilege escalation paths would be contained or alerted.
Control: East-West Traffic Security
Mitigation: East-west traversal to other services would be blocked or detected.
Control: Multicloud Visibility & Control
Mitigation: Unusual outbound or API-driven control channels would be flagged.
Control: Egress Security & Policy Enforcement
Mitigation: Unauthorized outbound transfers would be blocked or logged.
Operational or financial impact could have been limited if earlier attack stages were effectively constrained.
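As a concrete, deliberately simplified illustration of the egress control listed above, the sketch below shows the kind of allowlist decision an egress proxy or sidecar might apply to outbound connections from an AI workload. The workload names, destination patterns, and log format are assumptions, not any specific product's configuration.

```python
"""Minimal egress-policy sketch: allow known destinations, log and deny the rest.

The allowlist entries and workload names are hypothetical; real egress
enforcement is typically done by a proxy, firewall, or service mesh policy.
"""
import fnmatch
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical per-workload egress allowlist (supports simple wildcards).
EGRESS_ALLOWLIST = {
    "rag-ingest": ["*.trusted-docs.example.com"],
    "llm-serving": ["registry.example.com", "telemetry.example.com"],
}

def egress_allowed(workload: str, destination_host: str) -> bool:
    """Return True only if the destination matches the workload's allowlist."""
    patterns = EGRESS_ALLOWLIST.get(workload, [])
    allowed = any(fnmatch.fnmatch(destination_host, p) for p in patterns)
    if not allowed:
        # Denied attempts are exactly the signal defenders want for exfiltration detection.
        logging.warning("egress denied: workload=%s dest=%s", workload, destination_host)
    return allowed

print(egress_allowed("llm-serving", "registry.example.com"))    # True
print(egress_allowed("llm-serving", "paste-site.example.net"))  # False, and logged
```

Even a coarse default-deny posture of this kind turns the exfiltration stage of the attack path into a logged, blockable event rather than a silent transfer.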
Impact at a Glance
Affected Business Functions
- AI Infrastructure Management
- Data Processing
- Internal System Security
Estimated downtime: 3 days
Estimated loss: $500,000
Potential exposure of sensitive organizational data, including AI model configurations, API keys, and internal communications.
Recommended Actions
Key Takeaways & Next Steps
- Enforce zero trust segmentation around LLM endpoints and sensitive workloads to eliminate unauthorized direct access paths.
- Deploy continuous, centralized visibility tools to monitor for suspicious automation, malformed requests, and anomalous service usage across cloud infrastructure.
- Implement strict outbound (egress) filtering and data loss prevention policies for all AI workloads to block unapproved data transfers.
- Regularly audit and tighten cloud IAM roles, network controls, and workload permissions to minimize privilege escalation risk.
- Automate detection and response workflows leveraging threat intelligence to quickly remediate unusual activity targeting AI/ML services (see the sketch after this list).
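As referenced in the last takeaway, the sketch below outlines one way to automate a basic detection pass over LLM gateway access logs, flagging callers with abnormally high request volume for follow-up. The JSON-lines log format, field names, file path, and threshold are assumptions; a production workflow would feed such signals into SIEM/SOAR tooling rather than print them.

```python
"""Minimal detection sketch: flag anomalously chatty callers in LLM access logs.

Assumes a JSON-lines log with client_id and path fields; the log location,
field names, and threshold below are placeholders to adapt to your telemetry.
"""
import json
from collections import Counter
from pathlib import Path

LOG_FILE = Path("llm_gateway_access.jsonl")   # hypothetical log location
REQUESTS_PER_CLIENT_THRESHOLD = 1000          # tune against your own baseline

def flag_noisy_clients(log_path: Path) -> list:
    """Return (client_id, request_count) pairs that exceed the threshold."""
    counts = Counter()
    with log_path.open() as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than abort the sweep
            if event.get("path", "").startswith("/v1/"):
                counts[event.get("client_id", "unknown")] += 1
    return [(client, n) for client, n in counts.most_common()
            if n > REQUESTS_PER_CLIENT_THRESHOLD]

if __name__ == "__main__":
    for client, n in flag_noisy_clients(LOG_FILE):
        print(f"review client {client}: {n} requests in window")
```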

