TL;DR
Rapid AI adoption without governance creates a security gap that can cause data breaches and compliance violations.
Cloud service providers’ guardrails against Shadow AI are necessary, but not enough.
Use zero trust principles to mitigate the risks of Shadow AI at the network layer—freeing your organization to pursue AI innovation with robust security.
AI agents are simultaneously the most powerful productivity tools your organization has adopted and the least governed.
According to Cybersecurity Insiders' 2025 State of AI Data Security Report, 83% of organizations now use AI in daily operations, but only 13% have strong visibility into how that AI is being used.
That gap between adoption and governance is where Shadow AI thrives.
Shadow AI refers to the unauthorized use of LLMs and AI agents across your organization. Unlike traditional Shadow IT, which typically involved unsanctioned hardware or SaaS applications, Shadow AI operates at the data layer. When an employee pastes customer records into ChatGPT or a team embeds an LLM API call into an internal application without security review, sensitive data leaves your controlled environment instantly, often without any forensic trace.
The business pressure driving this behavior is real. Teams want AI's productivity gains now, and formal procurement processes feel slow. But unchecked Shadow AI exposes sensitive data, threatens compliance, and creates attack vectors that legacy security tools weren't designed to address.
The question is how to govern AI without killing the innovation your business needs.
What's Actually at Stake with Shadow AI
The risks of ungoverned AI fall into three categories that security leaders need to address:
Data exposure. AI agents need data to be useful—often sensitive data like customer PII, financial reports, and intellectual property. Anthropic recently disclosed an espionage campaign where attackers weaponized Claude by chaining seemingly harmless prompts to scan targets for high-value databases. Without proper controls, your AI tools become data exfiltration channels.
Autonomous actions. AI agents act on their training and prompts without human judgment about broader context. The OWASP Agentic AI Top Ten highlights this risk: an agent misinterpreting a maintenance alert could take destructive action across critical systems. As organizations deploy more autonomous AI, the blast radius of a single misconfigured agent grows.
Compliance gaps. Regulations are catching up. ISO/IEC 42001, FedRAMP updates, the EU AI Act, and state-level bills such as California's SB 1047 all demand visibility, testing, and safety controls for AI systems. Organizations without AI governance today are building compliance debt they'll pay later.
Why Platform Guardrails Fall Short
Cloud providers have responded with native guardrail capabilities. Amazon Bedrock Guardrails offers content filtering, PII redaction, prompt attack detection, and denied topic blocking. Enterprise AI platforms like Writer provide multi-model policy enforcement.
These tools address real needs: content safety, data protection at the application layer, responsible AI compliance. But platform guardrails have structural limitations for enterprise security:
They protect what you've configured, not what you don't know about. Amazon Bedrock Guardrails secures Bedrock workloads. It can't detect the ChatGPT window open on an employee's browser or the unsanctioned API call a contractor embedded in a microservice.
They operate at the application layer. Guardrails filter prompts and responses. They don't control network-level data movement, east-west traffic between AI workloads, or egress to unauthorized AI endpoints.
Multicloud complexity multiplies the problem. If your teams use Bedrock, Azure OpenAI, direct API access to Anthropic, and self-hosted models, you need separate controls for each, with no unified visibility or policy management.
Application-layer guardrails are necessary but insufficient. They're one component of a defense-in-depth strategy, not a complete solution.
Zero Trust: Governance at the Network Layer
To govern AI comprehensively, you need controls that operate below the application layer. Zero trust—the framework built on "never trust, always verify"—provides the architectural foundation. Applying zero trust principles to AI workloads means answering four questions:
Identity: What is an AI agent? The same report found that only 16% of organizations treat AI as a distinct identity type, while two-thirds have caught AI over-accessing data. Define clear identity policies for AI agents. Apply least-privilege access. Don't let agents inherit broad permissions from the human users or service accounts that invoke them.
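As a minimal sketch of that least-privilege principle, the snippet below models an AI agent as its own identity with an explicit scope grant and a default-deny authorization check. All names and scope strings here are illustrative, not part of any real IAM product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A distinct identity for an AI agent, separate from the human user
    or service account that invokes it. Names are illustrative."""
    name: str
    allowed_scopes: frozenset  # e.g. {"read:support-tickets"}

def is_authorized(agent: AgentIdentity, requested_scope: str) -> bool:
    # Deny by default: the agent gets only what was explicitly granted,
    # never the broader permissions of its invoking user.
    return requested_scope in agent.allowed_scopes

summarizer = AgentIdentity(
    name="ticket-summarizer",
    allowed_scopes=frozenset({"read:support-tickets"}),
)

assert is_authorized(summarizer, "read:support-tickets")
assert not is_authorized(summarizer, "read:customer-pii")
```

The key design choice is that the agent's grant is a standalone object: there is no code path by which it inherits scopes from whoever called it.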
Visibility: What is AI doing in your network? You can't govern what you can't see. Maintain monitoring across every account and workload, across clouds and regions. Focus particularly on egress visibility—that's where you stop data exfiltration before it happens.
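One way to picture egress visibility is as a filter over flow-log records: flag any workload talking to a known AI endpoint that isn't on the sanctioned list. Both host lists below are illustrative placeholders, not an authoritative catalog of provider hostnames.

```python
# Illustrative host sets -- replace with your own inventory.
KNOWN_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
}
SANCTIONED_AI_HOSTS = {"bedrock-runtime.us-east-1.amazonaws.com"}

def flag_shadow_ai(flows):
    """Given (source_workload, destination_host) egress records, return
    the flows that reach an AI endpoint outside the sanctioned set."""
    return [
        (src, dst)
        for src, dst in flows
        if dst in KNOWN_AI_HOSTS and dst not in SANCTIONED_AI_HOSTS
    ]

flows = [
    ("billing-svc", "api.openai.com"),                           # flagged
    ("ml-pipeline", "bedrock-runtime.us-east-1.amazonaws.com"),  # sanctioned
]
print(flag_shadow_ai(flows))  # [('billing-svc', 'api.openai.com')]
```

In practice the flow records would come from cloud flow logs or a network security platform rather than an in-memory list, but the detection logic is the same.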
Encryption: Can AI access unencrypted data? Even if an agent sends data somewhere it shouldn't, encryption ensures that data remains unusable. Protect data in transit across your entire network: site to cloud and workload to workload.
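At the client side, "encrypted in transit" concretely means refusing to negotiate weak or unverified connections. The sketch below builds a strict TLS context with Python's standard `ssl` module: certificate and hostname verification on, and nothing older than TLS 1.2 accepted.

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """A client-side TLS context that verifies certificates and hostnames
    and refuses protocol versions below TLS 1.2."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

Workload-to-workload encryption inside the network is typically enforced by the platform (mTLS meshes, encrypted overlays) rather than per-application code, but the same principle applies: plaintext fallback is never an option.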
Segmentation: Can AI access your entire network? No agent should have unrestricted network access. Segment workloads based on security policies. Isolate AI inference services. Enforce egress policies that prevent data from reaching unauthorized endpoints. Segmentation limits the blast radius of any incident—Shadow AI or otherwise.
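The egress side of segmentation can be reduced to a default-deny allowlist keyed by segment: a destination is reachable only if it is explicitly permitted for that segment. The segment names and destinations below are hypothetical.

```python
# Default-deny egress policy per network segment; entries are illustrative.
EGRESS_ALLOWLIST = {
    "ai-inference": {"bedrock-runtime.us-east-1.amazonaws.com"},
    "web-tier": {"cdn.internal.example.com"},
}

def egress_allowed(segment: str, destination: str) -> bool:
    # Anything not explicitly allowed for the segment is blocked --
    # including calls to unsanctioned AI endpoints, which simply never
    # appear in the allowlist.
    return destination in EGRESS_ALLOWLIST.get(segment, set())

assert egress_allowed("ai-inference", "bedrock-runtime.us-east-1.amazonaws.com")
assert not egress_allowed("ai-inference", "api.openai.com")
assert not egress_allowed("unknown-segment", "anywhere.example.com")
```

Because an unknown segment maps to the empty set, a new or misconfigured workload starts with zero egress rather than inheriting broad access, which is exactly the blast-radius limit the paragraph above describes.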
Final Thoughts
Trying to block AI has proven counterproductive, driving adoption further into the shadows. The goal is governance that enables safe innovation: letting your teams capture AI's productivity gains while maintaining the visibility, control, and compliance your organization requires.
That requires a layered approach. Application-layer guardrails from providers like Amazon Bedrock address content safety and prompt-level controls. Network-layer zero trust provides the identity management, visibility, encryption, and segmentation that close the gaps platform tools can't reach.
Don't wait for a breach to evaluate your AI governance posture. The organizations that get this right will innovate faster and more safely than those still treating AI security as someone else's problem.
Learn more about Aviatrix Zero Trust for Workloads: runtime protection for cloud-native workloads.
Join Us at Gartner IOCS Las Vegas!
To learn more about securing AI agents in your enterprise, join us at Gartner IOCS Las Vegas. Chris McHenry, our Head of Product, will host a session on "From Agentic Mess to Secure Agentic Mesh" on December 9.