Executive Summary
In November 2025, security researchers uncovered a novel method by which ServiceNow's Now Assist generative AI platform could be manipulated through second-order prompt injection attacks. By exploiting default configurations and inherent agent-to-agent communication, attackers could coerce agentic AI features into executing unauthorized operations. This exposure allowed malicious actors to access, copy, and exfiltrate sensitive enterprise data without proper user authorization. The attack leverages prompt injection to bypass intended policy boundaries, posing significant data risk to organizations relying on ServiceNow’s AI-driven automations.
This incident highlights a growing threat landscape in which AI agent-to-agent interactions are harnessed for sophisticated attacks. With increased enterprise adoption of generative AI and autonomous agents, security around configuration and prompt validation has become mission-critical. Organizations should assess agent communication safeguards and be vigilant against emerging prompt injection and shadow AI risks.
Why This Matters Now
Second-order prompt injection targeting multi-agent AI platforms is rapidly emerging, often outpacing currently deployed security controls. As enterprise reliance on agentic AI accelerates, unchecked inter-agent communication and default settings present an urgent risk of unintentional data exposure and compliance violations.
Attack Path Analysis
The attacker initiated the compromise by exploiting default configurations in ServiceNow's Now Assist AI platform through a second-order prompt injection, leading to unauthorized agent actions. The attacker then escalated privileges by leveraging agentic capabilities to access workflows or information beyond their original scope. Using cross-agent communication, the attacker moved laterally to involve and manipulate additional AI agents in the environment. Malicious command and control was established by issuing crafted prompts that directed agents to execute attacker-controlled tasks persistently. Sensitive data was exfiltrated as the agents copied and exported confidential information outside the organization. The attack culminated in the potential impact of business disruption, data loss, and possible regulatory non-compliance.
Kill Chain Progression
Initial Compromise
Description
Adversary exploited insecure defaults and prompt injection vulnerabilities in Now Assist's AI agent configuration to execute unauthorized actions.
Related CVEs
CVE-2024-4879
CVSS 9.3An input validation vulnerability in ServiceNow's Vancouver and Washington, D.C. Now Platform releases allows unauthenticated users to remotely execute code within the platform.
Affected Products:
ServiceNow Now Platform – Vancouver, Washington D.C.
Exploit Status:
exploited in the wildCVE-2024-8923
CVSS 9.8A critical sandbox escape vulnerability in ServiceNow's Now Platform allows unauthenticated users to perform remote code execution.
Affected Products:
ServiceNow Now Platform – Prior to Xanadu General Availability
Exploit Status:
proof of conceptCVE-2024-8924
CVSS 7.5A blind SQL injection vulnerability in ServiceNow's Now Platform allows unauthenticated users to retrieve unauthorized data.
Affected Products:
ServiceNow Now Platform – Xanadu, Washington D.C., Earlier Releases
Exploit Status:
proof of concept
MITRE ATT&CK® Techniques
Data Manipulation: Stored Data Manipulation
Command and Scripting Interpreter
Exploit Public-Facing Application
Exfiltration Over C2 Channel
Forge Web Credentials: Web Session Cookie
Modify Authentication Process
User Execution: Malicious File
Potential Compliance Exposure
Mapping incident impact across multiple compliance frameworks.
PCI DSS 4.0 – Review of Security Events and Activity
Control ID: 10.2.5
NYDFS 23 NYCRR 500 – Cybersecurity Policy
Control ID: 500.03
DORA – ICT Risk Management Framework
Control ID: Art. 10
CISA ZTMM 2.0 – Continuous Authentication and Monitoring
Control ID: Identity Pillar – Continuous Authentication
NIS2 Directive – Technical and Organizational Measures
Control ID: Article 21(2)
Sector Implications
Industry-specific impact of the vulnerabilities, including operational, regulatory, and cloud security risks.
Information Technology/IT
ServiceNow AI agent vulnerabilities expose IT service management platforms to prompt injection attacks, enabling unauthorized data exfiltration and compromising automated workflows.
Financial Services
AI-driven financial platforms using ServiceNow face second-order prompt injection risks, potentially compromising sensitive customer data and violating regulatory compliance requirements.
Health Care / Life Sciences
Healthcare organizations leveraging ServiceNow AI assistants risk patient data exposure through agent-to-agent exploitation, violating HIPAA compliance and compromising medical workflows.
Government Administration
Government agencies using ServiceNow AI platforms vulnerable to prompt injection attacks targeting sensitive administrative data and potentially compromising citizen information security.
Sources
- ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Promptshttps://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.htmlVerified
- ServiceNow Remote Code Execution Vulnerability | Fortrahttps://www.fortra.com/security/emerging-threats/servicenow-remote-code-execution-vulnerabilityVerified
- ServiceNow Now Platform Vulnerabilities Enable RCE and SQL Injection Risks (CVE-2024-8923, CVE-2024-8924) – Patch Nowhttps://socradar.io/blog/servicenow-now-platform-vulnerabilities-cve-2024-8923/Verified
- Second-order prompt injection can turn AI into a malicious insiderhttps://www.techradar.com/pro/security/second-order-prompt-injection-can-turn-ai-into-a-malicious-insiderVerified
Frequently Asked Questions
Cloud Native Security Fabric Mitigations and ControlsCNSF
Applying Zero Trust segmentation, granular egress controls, continuous threat detection, and centralized visibility would have substantially limited AI agent abuse, lateral movement, and data exfiltration risk. CNSF-aligned controls restrict unauthorized agent actions, inspect east-west traffic, and provide policy enforcement that could have blocked or rapidly detected each kill chain phase.
Control: Cloud Native Security Fabric (CNSF)
Mitigation: Inline policy enforcement could have detected and blocked abnormal agent interactions early.
Control: Zero Trust Segmentation
Mitigation: Microsegmentation would have restricted agent access to only necessary workflows and identities.
Control: East-West Traffic Security
Mitigation: Inspection and segmentation of internal service-to-service communications would limit attack propagation.
Control: Threat Detection & Anomaly Response
Mitigation: Anomaly detection would rapidly alert and automate response to unusual agent communication patterns.
Control: Egress Security & Policy Enforcement
Mitigation: Outbound data transfer would be blocked or alerted via FQDN filtering and policy enforcement.
Centralized visibility would enable rapid detection and mitigation of unauthorized agent-initiated actions.
Impact at a Glance
Affected Business Functions
- IT Operations
- Customer Service
- Human Resources
Estimated downtime: 5 days
Estimated loss: $500,000
Potential exposure of sensitive corporate data, including customer information and internal communications, due to unauthorized access facilitated by AI agent manipulation.
Recommended Actions
Key Takeaways & Next Steps
- • Enforce Zero Trust segmentation for all AI and agentic workloads to isolate and restrict agent communications.
- • Deploy east-west traffic inspection to monitor and block unauthorized internal service-to-service interactions.
- • Implement strict egress policy enforcement and outbound filtering to prevent unintended data exfiltration from AI workloads.
- • Activate threat detection and anomaly response capabilities to continuously baseline and alert on irregular AI or agent activity.
- • Establish centralized visibility and governance for all cloud traffic to rapidly identify and respond to incidents arising from AI risk.



