The Containment Era is here.

The most disruptive technologies in enterprise history did not ask permission. The iPhone arrived through usage rather than through a CIO strategy deck. Cloud followed the same path. In each case, security architecture had to adapt after behavior had already changed. AI is moving through enterprises the same way, with one important twist: the technology being adopted behaves less like a tool and more like a user. Agents reason, retrieve, generate, and act, opening connections to systems and services at machine speed and across thousands of identities at once.

AI is the first non-human technology that behaves like a user inside your network.

The operating assumption of the Containment Era is that something in your environment is compromised right now. The question is what those workloads can reach when an attack arrives. The Cascade answered that question in March 2026, when a coordinated supply chain campaign compromised five major software ecosystems in twelve days through trusted code, channels, and update mechanisms without generating a single anomalous signal. When prevention fails and detection is too slow, containment decides whether the incident becomes a catastrophic breach.

The Gap Between Knowing and Deploying

The earliest version of containment for AI does not require perfection. It can begin with destinations no agent should ever reach: shareware sites, untrusted package registries, and arbitrary GitHub repositories. For tools whose risk profile is moderately uncertain, the simplest step is denying outbound internet access and permitting only the specific endpoints the workflow requires. None of that depends on AI-aware inspection. It depends on basic, well-placed network policy that customers can stand up in days. Anthropic has documented automated nation-state-grade attacks where agents, given the right tools, ran reconnaissance, lateral movement, and exfiltration with little human direction. Agents amplify whatever tools they reach, and the network is what bounds them.
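The default-deny pattern described above can be sketched in a few lines. This is an illustrative model, not a real Aviatrix configuration; the endpoint names are hypothetical examples of "the specific endpoints the workflow requires."

```python
# Minimal sketch of default-deny egress for a single agent workflow.
# The endpoint names below are hypothetical, for illustration only.
from urllib.parse import urlparse

# Only the destinations this workflow actually needs; everything else is denied.
ALLOWED_ENDPOINTS = {
    "api.anthropic.com",               # the LLM provider the agent calls
    "vectordb.internal.example.com",   # the vector database it queries
}

def egress_allowed(url: str) -> bool:
    """Default-deny: permit only explicitly listed destinations."""
    return urlparse(url).hostname in ALLOWED_ENDPOINTS
```

Under this model, a call to the approved LLM provider succeeds, while a fetch from an arbitrary repository or shareware site is denied without any AI-aware inspection.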

The most reliable way to understand what an agent is actually doing is to observe it from outside, on the network, rather than from inside, in its own logs. Agent logs are written by the agent, which means an organization that depends on them is trusting the same component it is trying to verify. The network is an independent observer the agent does not control.

The network does not lie.

Containment is also not a single thing. It is a layered set of capabilities, and progress on each layer is independently valuable:

  • Basic network guardrails that prevent agents from reaching destinations they have no business reaching, deployable in days.

  • More aggressive Zero Trust controls that scope the highest-risk agents to a narrow set of services and identities, adding depth where it is needed most.

  • Deep analysis and content filtering, where decryption and AI-aware inspection allow decisions on the meaning of traffic rather than only its destination.
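The three layers above can be read as an ordered policy evaluation: coarse guardrails first, narrow Zero Trust scopes for the highest-risk agents second, content-level verdicts last. The sketch below models that ordering; the category names, scopes, and verdict labels are hypothetical.

```python
# Sketch of the three containment layers as an ordered evaluation.
# All names (categories, scopes, verdicts) are hypothetical examples.

BLOCKED_CATEGORIES = {"shareware", "untrusted-registry"}       # layer 1: guardrails
HIGH_RISK_SCOPES = {"finance-agent": {"erp.example.com"}}      # layer 2: Zero Trust scope

def evaluate(workload: str, dst_host: str, dst_category: str,
             payload_verdict: str = "clean") -> str:
    """Return the first layer that denies the flow, or 'permit'."""
    if dst_category in BLOCKED_CATEGORIES:
        return "deny:guardrail"
    scope = HIGH_RISK_SCOPES.get(workload)
    if scope is not None and dst_host not in scope:
        return "deny:zero-trust-scope"
    if payload_verdict != "clean":                             # layer 3: content filter
        return "deny:content-filter"
    return "permit"
```

The point of the ordering is that each layer is independently valuable: the first two deny flows without any decryption, and only the final layer needs to see the meaning of the traffic.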

Security Despite AI Platform Proliferation

Knowing what containment requires is one thing. Deploying it across the platforms your organization actually runs is another. Every platform has its own insertion model, identity surface, and network topology. AWS Bedrock AgentCore resolves workload identity differently than Azure AI Foundry. An Obot Model Context Protocol gateway on Elastic Kubernetes Service requires a different SmartGroup taxonomy than a Microsoft Model Context Protocol Gateway in an Azure virtual network. Building that knowledge from scratch costs weeks per platform, and AI infrastructure is not waiting.

A common industry response is to centralize everything through a single AI gateway. We do not believe that is where most organizations arrive. Mandates that funnel traffic through one choke point get worked around when friction outweighs benefit, and pushing usage underground reduces the visibility every other control depends on. The deeper concern is vendor lock-in. The agentic platform may be one of the largest single lock-in levers in the history of enterprise technology. Once workflows are wired into a platform’s runtime, identity model, and tool definitions, migrating becomes enormously expensive, and if that platform is delivering a twenty percent productivity gain, the appetite for ripping it out is close to zero.

The agentic platform may be one of the largest single lock-in levers in the history of enterprise technology.

A healthy enterprise architecture preserves the option to evaluate and run more than one agentic platform, which means each must be secured to the same standard, on the same fabric, by the same team. Aviatrix has been doing exactly this work for a decade: standardized network security across multiple public clouds when the industry assumed every cloud needed its own stack, and the same again for Kubernetes when containerized workloads outgrew the controls built for VMs. Validated Containment Architectures apply that approach to agent platforms.

What Validated Containment Architectures Are

Starting May 27, 2026, Aviatrix is publishing a new Validated Containment Architecture every week. Each is a lab-tested deployment blueprint for a specific AI platform, designed to deploy against a single landing zone without rip-and-replace. The customer never modifies agent code. The insertion pattern wraps each platform with Aviatrix Distributed Cloud Firewall using its own networking primitives. Where deeper inspection is needed, customers can optionally enable decryption to establish trust between Aviatrix and the agents, which becomes the foundation for the embedded Advanced AI Guardrails shipping this summer in partnership with industry-leading guardrail providers.

Each architecture ships with five components:

  • Insertion pattern: an architecture diagram with configuration notes showing where Aviatrix Distributed Cloud Firewall enforcement goes within the platform’s topology.

  • SmartGroup model: a tagging taxonomy that translates platform-native identity primitives (Bedrock Agent identifiers, AI Foundry project identities, Obot namespace labels) into a uniform vocabulary policy can be written against.

  • Baseline policy pack: a starting set of Distributed Cloud Firewall rules pre-scoped to what the platform actually needs in normal operation.

  • Clear pricing: Aviatrix pricing plus quantities for the other components in the VCA.

  • Terraform repository: infrastructure-as-code built on the Aviatrix Blueprint standard, standing up both the Aviatrix configuration and the agent platform itself, often with sample agents to accelerate testing.
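The SmartGroup model in the list above is essentially a translation layer: each platform's native identity primitive is mapped into one uniform vocabulary that policy can be written against. The sketch below illustrates that idea; the label schema and platform keys are hypothetical, not the actual SmartGroup taxonomy.

```python
# Sketch of a SmartGroup-style taxonomy: platform-native identity primitives
# (Bedrock agent identifiers, AI Foundry project identities, Obot namespace
# labels) mapped into one uniform label vocabulary. Schema is hypothetical.

def to_uniform_labels(platform: str, native_id: str) -> dict:
    """Translate a platform-native identity into uniform policy labels."""
    if platform == "bedrock":
        return {"agent_platform": "bedrock", "agent_id": native_id}
    if platform == "ai_foundry":
        return {"agent_platform": "ai-foundry", "project": native_id}
    if platform == "obot":
        return {"agent_platform": "obot", "namespace": native_id}
    raise ValueError(f"unknown platform: {platform}")
```

Once every platform resolves to the same label vocabulary, a single baseline policy pack can target "all agents on any platform" without per-platform rules.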

Eight Architectures in Eight Weeks

The initial program covers the eight AI platforms with the largest enterprise footprint, with three shipping simultaneously on May 27 and the remainder following weekly through July 1.

  • VCA 01, AWS Bedrock AgentCore. Covers the new Bedrock Managed Agents service. By default, Bedrock agents have broad network egress, and compromised traffic is indistinguishable from legitimate calls. The architecture delivers VPC boundary enforcement that permits only approved Large Language Model providers and vector databases.

  • VCA 02, AI Foundry agents on Azure. Foundry projects classify as high-risk systems under the EU AI Act, and documented network-layer enforcement is part of demonstrating compliance. The architecture keys on AI Foundry project identities resolved through Azure Resource Graph, with insertion patterns specific to Azure VNet topology.

  • VCA 03, enterprise Model Context Protocol via Obot. The OWASP MCP Top Ten has documented the blast radius problem: a compromised tool response can poison an agent’s context and redirect its behavior. The architecture delivers Firewall Policy Custom Resource Definitions for the Obot Helm chart, scoping each server to only the external APIs it declared.

The architectures that follow cover GitHub runners and CI/CD pipelines, LibreChat, NVIDIA NemoClaw and OpenClaw, Google Vertex AI Agents, and Microsoft Model Context Protocol Gateway. Agent platforms are evolving at extraordinary speed, and the cadence of these releases reflects that.
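The per-server scoping described for VCA 03 can be sketched as policy generation: every external API a tool server declares becomes a permit rule, followed by a default deny for that server. The server names, declared hosts, and rule schema below are hypothetical, not the actual Firewall Policy Custom Resource Definitions.

```python
# Sketch of deriving per-server egress scoping from tool declarations, in the
# spirit of scoping each MCP server to only the APIs it declared.
# Server names, hosts, and the rule schema are hypothetical examples.

declarations = {
    "jira-tools": ["yourcompany.atlassian.net"],
    "web-search": ["api.search.example.com"],
}

def scoped_policies(decls: dict) -> list:
    """One permit rule per declared host, plus a default deny per server."""
    rules = []
    for server, hosts in decls.items():
        for host in hosts:
            rules.append({"server": server, "dst": host, "action": "permit"})
        rules.append({"server": server, "dst": "*", "action": "deny"})
    return rules
```

The effect is that a poisoned tool response can redirect an agent's behavior but not its reach: a compromised server still cannot contact any host outside its own declaration.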

The Deployment Layer that Makes Containment Real

Workload and agentic identity is the foundation of effective containment, both proactively and during incident response. Many AI workloads are containerized, serverless, or ephemeral, exactly the population traditional perimeter and chokepoint controls struggle to see. A chokepoint that requires hairpinning is blind to a workload that exists for two minutes and disappears. Aviatrix governs communication at every workload, across every cloud, on every path. WebGroups know where AI providers live, SmartGroups know what each workload is, and Distributed Cloud Firewall combines them into one rule per workload per destination, enforced at the VPC boundary where Kubernetes, serverless, and ephemeral egress actually happen.
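The "one rule per workload per destination" idea above can be illustrated by expanding a single group-to-group policy into its per-pair enforcement points. The group names and members below are hypothetical stand-ins for SmartGroups and WebGroups, not real ones.

```python
# Sketch of expanding a group-to-group policy into per-workload,
# per-destination rules. Group names and members are hypothetical.
from itertools import product

smartgroups = {"bedrock-agents": ["agent-a", "agent-b"]}            # what each workload is
webgroups = {"approved-llm-providers": ["api.anthropic.com",
                                        "api.openai.com"]}          # where providers live

def expand_rules(src_group: str, dst_group: str, action: str = "PERMIT") -> list:
    """One rule per (workload, destination) pair, enforced at the boundary."""
    return [
        {"src": w, "dst": d, "action": action}
        for w, d in product(smartgroups[src_group], webgroups[dst_group])
    ]
```

Because the pairs are derived from group membership, an ephemeral workload that appears for two minutes is covered the moment it matches its group, with no per-workload rule authoring.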

If you had to, could you block a known indicator of compromise across your entire cloud environment immediately?

On premises, that was a routine response. In the cloud, with workloads and egress paths fragmented across hundreds of accounts and many platforms, it has become nearly impossible because the legacy controls did not evolve to match the new topology. That gap became visible during the LiteLLM supply chain compromise in March 2026, when one of our Fortune Global 500 customers added four command-and-control IP addresses to a Global IP Blocklist already running in Aviatrix Distributed Cloud Firewall. The update propagated to every gateway, VPC, region, and Kubernetes environment simultaneously. Zero credentials were exfiltrated. Other organizations watched the same campaign run to completion in under three hours.
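The incident-response move described above amounts to a single shared blocklist that every enforcement point consults. The sketch below models that; the addresses are documentation-range examples (RFC 5737), not real indicators of compromise.

```python
# Sketch of a global IP blocklist consulted by every enforcement point.
# Addresses are RFC 5737 documentation ranges, not real indicators.
import ipaddress

global_blocklist = set()

def block(ips):
    """Add indicators of compromise; every gateway consults the same set."""
    global_blocklist.update(ipaddress.ip_address(ip) for ip in ips)

def egress_permitted(dst_ip: str) -> bool:
    """Deny any flow to a blocked address, wherever the workload runs."""
    return ipaddress.ip_address(dst_ip) not in global_blocklist

# Add four command-and-control addresses in one update.
block(["203.0.113.10", "203.0.113.11", "198.51.100.7", "198.51.100.8"])
```

The operational point is that the update is made once, in one place, rather than reconfigured per account, per region, or per cluster.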

Containment Is Not a Future State

The most useful posture for security teams in this moment is not to slow AI adoption. It is to allow the organization to say yes, securely, and to build paved roads that make secure adoption easier than insecure adoption. The advantage belongs to organizations that redesign enforcement for the way AI actually moves through their environment: messily, on multiple platforms, often before central IT learns about it.

Saying yes, securely, means building paved roads that make secure adoption easier than insecure adoption.

If a compromised agent attempted to exfiltrate credentials right now, would your architecture stop it, or would you read about it in tomorrow’s log? The first Validated Containment Architectures ship May 27, lab-tested before they arrive and deployable the day they ship. Customers already running Aviatrix Distributed Cloud Firewall receive each new blueprint on the fabric they already operate. Others can begin with a single landing zone and a single platform, see the result in production, and move from there.

AI is the first technology that behaves like a user inside your network, and the network is the only observer that can credibly report on what it does. Download the first Validated Containment Architecture on May 27.

Learn more about Zero Trust for AI Workloads or Aviatrix AgentGuard, both of which provide default-deny, network-level enforcement for AI workloads.

Frequently Asked Questions

Why do AI agents require different security controls than traditional software?

AI agents don't just process data. They reason, retrieve information, generate content, and take actions across systems at machine speed. They behave more like users than tools, opening connections to services and operating across thousands of identities simultaneously. Traditional security controls were designed for predictable software behavior, not for workloads that autonomously decide what to access. Because agents amplify whatever tools they can reach, the network boundaries around them determine how much damage a compromised agent can do.

What is an Aviatrix Validated Containment Architecture?

An Aviatrix Validated Containment Architecture is a lab-tested deployment blueprint for securing a specific AI platform. Each one is designed to work with a single landing zone without requiring organizations to rip out and replace existing infrastructure. Validated Containment Architectures ship with five components: an insertion pattern showing where firewall enforcement goes, a SmartGroup tagging model, a baseline policy pack, clear pricing, and a Terraform repository for infrastructure-as-code deployment.

Why is the network a better observer than agent logs?

Agent logs are written by the agent itself. That means relying on them requires trusting the same component you're trying to verify. If an agent is compromised, its logs may be unreliable or manipulated. The network, by contrast, is an independent observer that the agent does not control. Monitoring traffic from outside the agent gives security teams a trustworthy view of what the agent is actually doing, including what destinations it contacts and what data it sends, without depending on the agent to self-report accurately.

Why not route all AI traffic through a single gateway?

Centralizing everything through one AI gateway sounds simple, but it tends to create friction that pushes usage underground. When teams work around a chokepoint, security loses the visibility it needs. There's also a significant lock-in risk. Once workflows are wired into a single platform's runtime, identity model, and tooling, migration becomes extremely expensive. A healthier approach lets organizations evaluate and run multiple agentic platforms, each secured to the same standard on the same fabric by the same team.

How should organizations begin containing AI agents?

Start with the basics. Block destinations no agent should ever reach, such as shareware sites, untrusted package registries, and arbitrary repositories. For tools with uncertain risk profiles, deny outbound internet access and allow only the specific endpoints each workflow requires. These steps don't require AI-aware inspection. They depend on well-placed network policy that teams can deploy in days. From there, organizations can layer on Zero Trust controls for high-risk agents and eventually add deeper inspection capabilities as they mature.
