✨ 2026 Futuriom 50: Key Findings and Highlights ✨
AgentGuard
Network-native AI security.
Built for security teams.
Every other AI security tool requires your application team to adopt something before you can enforce anything. AgentGuard is different. It operates transparently at the network layer — enforcing policy at the VPC boundary where every AI interaction traverses, with no SDK, no proxy configuration, and no developer compliance required.
Blast radius is determined by architecture — not by how fast you detect. Security teams own adoption. Day one.
Design Partner Access · Not a Beta
Early access means direct access to the product team throughout your deployment.
We want to understand your environment, your specific containment requirements, and where the product falls short for your use case. What you tell us shapes what ships at GA.
Early Access · Shadow AI Discovery
First discovery in 15 minutes.
Connect your cloud account. Aviatrix surfaces every AI agent, MCP server, and LLM endpoint in your environment — including shadow AI your application team doesn't know about.
No spam. Reviewed within 1 business day.
Every AI workload. Sanctioned or not.
Network-native discovery sees shadow AI that code-based tools miss entirely
Blast radius bounded by architecture
A compromised agent reaches only what it was explicitly permitted to reach
Security teams deploy this. Not developers.
No application changes, no SDK, no developer compliance required
One fabric from discovery to guardrails
Every capability runs on the Aviatrix platform you already operate
Start where you are.
Advance when ready.
Shadow AI Discovery is in early access. Network Enforcement is available today via Zero Trust for AI Workloads. Deep AI Observability and Advanced AI Guardrails ship Q3 2026. Each capability delivers standalone value on the same Aviatrix fabric — no rip-and-replace, no new infrastructure.
Find every AI workload. In 15 minutes.
Network flow + DNS + Cloud Asset Inventory. No gateway, no SDK, no code changes.
Discovers
- AI agents across EKS/AKS/GKE pods, Lambda, Azure Functions, Cloud Run, and VMs
- MCP servers and tools, mapped to the agents invoking them
- LLM providers in use — OpenAI, Anthropic, Bedrock, Vertex, Cohere, Mistral, and more
- Blast radius score per workload with guided remediation
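The blast radius score above can be pictured as a function of what a workload can actually reach. The sketch below is a hypothetical scoring model, assumed for illustration only: the weights, factors, and `Workload` fields are not AgentGuard's published scoring logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-workload blast radius score. Weights and
# factors are illustrative assumptions, not the product's actual model.

SANCTIONED_PROVIDERS = {"bedrock.amazonaws.com"}  # assumed allowlist

@dataclass
class Workload:
    name: str
    destinations: list        # observed egress FQDNs
    has_egress_policy: bool   # default-deny policy attached?

def blast_radius_score(w: Workload) -> int:
    """Higher = larger blast radius, on an illustrative 0-100 scale."""
    score = 10 * len(w.destinations)  # each reachable destination widens the radius
    if any(d not in SANCTIONED_PROVIDERS for d in w.destinations):
        score += 30                   # unsanctioned provider in use
    if not w.has_egress_policy:
        score += 40                   # no default-deny containment at all
    return min(score, 100)

shadow = Workload("dev-service-4", ["api.openai.com"], has_egress_policy=False)
prod = Workload("prod-rag-pipeline", ["bedrock.amazonaws.com"], has_egress_policy=True)
print(blast_radius_score(shadow))  # 80
print(blast_radius_score(prod))    # 10
```

The point of the model: a workload with no egress policy and an unsanctioned provider scores far higher than a contained, sanctioned one, which is what drives the guided remediation ordering.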
Govern the blast radius. Before the breach.
Default-deny AI egress via Zero Trust for AI Workloads. AI-aware SmartGroups. Available today.
SmartGroups reference ai_agent resource types directly, not IP ranges. A compromised agent cannot reach a destination that was not explicitly permitted.
Core capabilities
- Zero-trust egress for MCP servers — default-deny, destinations declared as policy-as-code
- Managed WebGroups — allow or deny LLM provider by name, auto-updated by Aviatrix
- URL-path scoping — differentiate github.com/acme-corp/* from github.com/* at policy level
- Full audit trail — every AI flow logged for compliance and forensics
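Default-deny egress with URL-path scoping can be sketched as follows. This is an illustrative model only, assuming a simple glob-style rule format: Aviatrix expresses these controls as SmartGroup and WebGroup policy, not as this Python structure.

```python
from urllib.parse import urlparse
from fnmatch import fnmatch

# Illustrative allowlist: everything not declared here is denied.
# Rule syntax (glob patterns over host + path) is an assumption.
ALLOW_RULES = [
    "api.anthropic.com/*",       # sanctioned LLM provider
    "github.com/acme-corp/*",    # org repos only, not all of github.com
]

def egress_allowed(url: str) -> bool:
    """Default-deny: permit only destinations explicitly declared."""
    p = urlparse(url)
    target = p.hostname + (p.path or "/")
    return any(fnmatch(target, rule) for rule in ALLOW_RULES)

print(egress_allowed("https://github.com/acme-corp/agent-repo"))  # True
print(egress_allowed("https://github.com/evil/payload"))          # False
print(egress_allowed("https://pastebin.com/raw/x"))               # False
```

The github.com example is the URL-path scoping distinction from the list above: the same hostname is allowed or denied depending on the path the agent requests.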
Inspect what agents say and do.
Inline protocol parsing, prompt injection blocking, DLP — transparent, no SDK. Q3 2026.
Capabilities
- Prompt injection on user inputs and MCP tool responses — catches indirect injection
- DLP on tool_call arguments — PII, credentials, regulated data patterns
- OTEL GenAI spans to Langfuse, Datadog, Splunk, or any OTEL collector
- 20+ guardrail providers — including best-in-class AI security and content safety platforms
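The DLP pass over tool_call arguments amounts to pattern-matching argument values before they leave the network. The sketch below uses simplified regexes for two well-known credential shapes; these patterns are illustrations, not the product's actual rule set.

```python
import re

# Simplified credential patterns (illustrative, not exhaustive):
# AWS access key IDs start with AKIA + 16 uppercase alphanumerics;
# JWTs are three base64url segments joined by dots.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
}

def scan_tool_call(arguments: dict) -> list:
    """Return (arg_name, pattern_name) for every credential-like match."""
    findings = []
    for name, value in arguments.items():
        for pat_name, pat in CREDENTIAL_PATTERNS.items():
            if pat.search(str(value)):
                findings.append((name, pat_name))
    return findings

call = {"path": "/tmp/out.txt", "body": "key=AKIAABCDEFGHIJKLMNOP"}
print(scan_tool_call(call))  # [('body', 'aws_access_key')]
```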
Find it. Route it. Govern it.
See every AI workload. In 15 minutes.
Connect a cloud account. Aviatrix analyzes VPC Flow Logs, DNS logs, and Cloud Asset Inventory via the WAPA pipeline to find every workload calling every major LLM provider — plus every Bedrock Agent and Azure AI Foundry project. No gateway deployed. No code changes. No agents on hosts. AgentGuard then analyzes the inventory and tells you exactly where enforcement should go — which workloads are highest risk, which providers are unauthorized, where a gateway needs to be deployed.
Shadow AI · High risk
dev-service-4
→ api.openai.com (unsanctioned)
MCP server · Overly broad
github-mcp-01
→ api.github.com + 12 others
Sanctioned · Low risk
prod-rag-pipeline
→ bedrock.amazonaws.com only
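At its core, this kind of network-native discovery correlates flow records with DNS answers to attribute provider traffic to workloads. The sketch below is a minimal, assumed model: field names, the sample data, and the provider list are illustrative; the actual WAPA pipeline is more involved.

```python
# Assumed provider domains to match against (illustrative subset).
LLM_PROVIDER_DOMAINS = {"api.openai.com", "api.anthropic.com", "bedrock.amazonaws.com"}

# DNS log: resolved name -> answered IPs (sample data).
dns_answers = {
    "api.openai.com": {"104.18.6.192"},
    "internal.db.local": {"10.0.3.7"},
}

# Flow log records: (source workload, destination IP) (sample data).
flows = [
    ("dev-service-4", "104.18.6.192"),
    ("prod-rag-pipeline", "10.0.3.7"),
]

def discover_ai_workloads(flows, dns_answers):
    """Map each workload to the LLM provider domains it actually reached."""
    ip_to_domain = {ip: d for d, ips in dns_answers.items() for ip in ips}
    found = {}
    for workload, dst_ip in flows:
        domain = ip_to_domain.get(dst_ip)
        if domain in LLM_PROVIDER_DOMAINS:
            found.setdefault(workload, set()).add(domain)
    return found

print(discover_ai_workloads(flows, dns_answers))
# {'dev-service-4': {'api.openai.com'}}
```

Because the join runs over logs the cloud already emits, nothing needs to be installed on the workload itself, which is why shadow AI shows up too.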
The post-LLM attack surface.
Where real damage happens.
Agents uploading to Dropbox, WeTransfer, Pastebin, or out-of-org S3
Blocked by AI workload URL categories and zero-trust egress policy. Default-deny prevents any unauthorized external destination.
Runtime package installs from npm, PyPI, Docker Hub
URL-path scoping blocks package registries for agent workloads by default. Compromised dependencies cannot pull additional payloads.
Indirect injection via poisoned MCP tool responses
Network containment governs every path the agent can act on after injection — limiting blast radius before the alert fires. Inline prompt injection detection ships Q3 2026.
MCP servers with overly broad access to backend databases and SaaS
East-west segmentation enforces that agent SmartGroups can only connect to approved MCP server SmartGroups. Instance metadata endpoint blocked by default.
AWS keys, SSH keys, JWTs encoded into outbound traffic
Network containment prevents exfiltration by blocking unauthorized egress paths entirely. DLP scanning of tool_call arguments for credential patterns ships with Advanced AI Guardrails, Q3 2026.
Ungoverned agents bypassing every code-based control
Tier 1 discovery surfaces shadow AI that code-based solutions miss. Tier 2 containment enforces on every connection that traverses the network — sanctioned or not.
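The east-west segmentation rule above reduces to a pairwise check: a connection is allowed only when the specific (agent group, MCP server group) pair is declared, and the instance metadata endpoint is refused unconditionally. Group names below are hypothetical.

```python
# Illustrative sketch of agent-to-MCP segmentation. The pair set stands
# in for declared SmartGroup policy; names are hypothetical.
ALLOWED_PAIRS = {
    ("rag-agents", "mcp-github"),  # rag agents may call the GitHub MCP server
}

METADATA_ENDPOINT = "169.254.169.254"  # instance metadata, blocked by default

def connection_allowed(src_group: str, dst_group: str, dst_ip: str) -> bool:
    if dst_ip == METADATA_ENDPOINT:
        return False  # credential-theft path closed regardless of policy
    return (src_group, dst_group) in ALLOWED_PAIRS

print(connection_allowed("rag-agents", "mcp-github", "10.0.5.2"))        # True
print(connection_allowed("rag-agents", "mcp-billing", "10.0.9.9"))       # False
print(connection_allowed("rag-agents", "mcp-github", METADATA_ENDPOINT)) # False
```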
Security teams ship this.
No developer required.
| Dimension | AgentGuard | Requires Developer Adoption | Single-Cloud, Model-Layer Only |
|---|---|---|---|
| Who owns adoption | Security team | Application team | Application team |
| Deployment model | Transparent, no code changes | Requires SDK or proxy integration | Single-cloud, per-model |
| Shadow AI coverage | Full visibility and enforcement | Only instrumented applications | Only their own service |
| Multi-cloud | AWS, Azure, GCP, on-prem | Framework-dependent | Single cloud |
| Network context | Full — IP, VPC, workload identity, L4/L7 | Application context only | Limited |
| Inspection points | LLM response, MCP invocation, tool traffic * | Application-layer only | Model-layer only |
| Existing infrastructure | Leverages deployed Aviatrix platform | New product to deploy | New product per cloud |
\* LLM response inspection available today via Network Enforcement. MCP invocation and tool traffic inspection ship with Deep AI Observability and Advanced AI Guardrails, Q3 2026.
Blast radius bounded by architecture.
Not detection speed.
Security teams shouldn't have to wait on developers to get compliant before they can enforce anything. AgentGuard inverts that. Deploy containment, guardrails, and deep observability on every AI workload — sanctioned or shadow — without a single code change.