We’ve Been Here Before 

I still remember the early days of AWS: everyone was racing to “go cloud.” Speed mattered more than governance, and we all promised to fix the gaps “later.” “Later” became a pattern. 

A decade later, misconfigurations and overexposed services still sit at the top of every risk report. We industrialized bad habits. 

Now we’re doing it again with a smarter, faster generation of technology: AI, MCP, and autonomous agents. The tech changed, but the behavior hasn’t. 

The New Bandwagon 

The Model Context Protocol (MCP) standardizes how models call tools and interact with systems. It’s powerful but risky. Just like the early cloud rush, teams are deploying before securing, wiring data together, trusting defaults, and pushing to production because “it works.” The difference now is that these systems act autonomously, turning small oversights into large-scale problems. 

Common missteps include: 

  • MCP servers exposed to the internet or unsegmented internal networks. 

  • Agents given broad API permissions for testing that never get revoked. 

  • RAG pipelines that pull unvetted content into production models. 

  • DIY agent controllers storing credentials in plaintext or running open endpoints. 

These patterns aren’t malicious; they’re predictable. And predictable means preventable. 

Why DIY MCP Deployments Make It Worse 

MCP expands the attack surface through tool registries, metadata schemas, and runtime hooks. Many teams also DIY their agent workflows using YAML configs, shell scripts, or lightweight webhooks that lack authentication, signing, or sandboxing. These projects often start as quick experiments with unrealistic deadlines and big promises, where speed takes priority over safety. Security reviews get pushed aside to meet timelines, and those half-built prototypes that were never meant for production quietly become permanent fixtures in live environments. 

A common insecure pattern: a developer saves API keys in plain text inside config.yaml, runs an HTTP controller on 0.0.0.0:8080, and connects it to Slack or GitHub. It works until someone forgets it exists and an external scan finds it. 
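A minimal sketch of the hardened counterpart, assuming a simple HTTP-style controller (the `MCP_API_TOKEN` variable name and loopback binding are illustrative choices, not part of any MCP SDK):

```python
import hmac
import os

# Hardened counterparts to the insecure pattern above (names are illustrative):
# - the secret comes from the environment, not from a plaintext config.yaml
# - the controller binds to loopback, never 0.0.0.0
BIND_HOST = "127.0.0.1"  # external scans never see a loopback-only port

def authorized(auth_header, token=None):
    """Check a Bearer token in constant time to avoid timing leaks."""
    token = token if token is not None else os.environ.get("MCP_API_TOKEN", "")
    if not token:
        return False  # fail closed when no secret is configured
    return hmac.compare_digest(auth_header or "", f"Bearer {token}")
```

Every request handler would call `authorized()` before touching Slack or GitHub; an unset token denies everything rather than silently allowing it.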

MCP’s flexibility is a double-edged sword. DNS rebinding, SSRF-style tool calls, token leakage, and over-permissioned service accounts remain persistent threats, now accelerated by models that can chain actions automatically. 

Basic MCP Hardening Best Practices: 

  • Restrict network access: Limit MCP server access using segmentation policies. Only allow connections from approved workloads and internal networks; block external reachability entirely. 

  • Enforce authentication: Require mutual TLS or API keys for all tool and agent connections. 

  • Limit tool permissions: Assign least-privilege roles to each MCP tool or extension. 

  • Validate inputs: Sanitize tool parameters and agent responses to prevent prompt injection and command chaining. 

  • Enable logging: Capture and monitor MCP tool invocations and network requests for anomaly detection. 

  • Regularly patch dependencies: Monitor and update all MCP-related libraries and toolchains to close known vulnerabilities. 
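As one concrete instance of the "validate inputs" control above, a schema-based allowlist check on tool calls might look like this (the tool names and patterns are hypothetical, not drawn from any real MCP server):

```python
import re

# Hypothetical per-tool schema: which parameters exist and what they may contain.
TOOL_SCHEMAS = {
    "search_docs": {"query": re.compile(r"^[\w\s\-.,?]{1,256}$")},
    "get_ticket": {"ticket_id": re.compile(r"^[A-Z]+-\d{1,8}$")},
}

def validate_call(tool, params):
    """Reject unknown tools, unknown parameters, and out-of-pattern values."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        return False  # tool not on the allowlist
    if set(params) != set(schema):
        return False  # extra or missing parameters
    return all(schema[k].match(str(v)) for k, v in params.items())
```

Rejecting anything outside the declared pattern blocks the command-chaining payloads that slip through free-form string parameters.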

When these controls are combined with network segmentation and enforcement from a Cloud Native Security Fabric (CNSF), organizations can prevent most exploit paths before they start. 

The Warning Shots: EchoLeak and CurXecute Show AI Is Now a Target 

We’ve already seen what happens when AI systems trust input that shouldn’t be trusted. Two incidents, EchoLeak and CurXecute, show just how fast the line between innovation and exploitation can blur. 

EchoLeak weaponized markdown inside Retrieval-Augmented Generation (RAG) pipelines to quietly exfiltrate data. By embedding malicious payloads in documentation or chat messages, attackers caused AI models to retrieve and process hostile markdown. When the model rendered or summarized the content, it triggered outbound calls that leaked internal data to external servers. No malware; no alerts; just text doing damage. 
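One defensive heuristic against this class of exfiltration is to scan retrieved markdown for links and images pointing at unapproved hosts before the model renders them. A rough sketch, assuming an illustrative allowlist (`docs.example.com` is a placeholder):

```python
import re

# Heuristic sketch: flag retrieved markdown whose links or images point at
# hosts outside an allowlist -- the channel EchoLeak-style payloads use to
# smuggle data out through rendered URLs.
ALLOWED_HOSTS = {"docs.example.com"}  # illustrative allowlist

MD_LINK = re.compile(r"!?\[[^\]]*\]\((https?://([^/\s)]+)[^)]*)\)")

def suspicious_links(markdown):
    """Return every link URL whose host is not on the allowlist."""
    return [url for url, host in MD_LINK.findall(markdown)
            if host.lower() not in ALLOWED_HOSTS]
```

Anything this returns is a candidate exfiltration channel; a RAG pipeline could strip or quarantine those documents before they reach the model.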

CurXecute (CVE-2025-54135), disclosed by Aim Labs, took the problem further. Cursor automatically launched MCP tools defined in ~/.cursor/mcp.json without user confirmation. Attackers used crafted public prompts to inject entries into that file, which the agent then executed locally. The result: remote code execution with the developer’s privileges, including access to SSH keys, build artifacts, and cloud credentials. 

Technically, this was a perfect storm of design flaws: 

  • Auto-execution: Cursor trusted local manifests and immediately ran new tool entries. 

  • Unvalidated writes: Suggested edits could land on disk even if a user rejected them. 

  • Privilege inheritance: Agents inherited full developer permissions, turning creative convenience into full compromise. 

The lesson? AI models don’t need binaries to be dangerous; they just need permission. 

Short-term mitigations include disabling auto-start behavior, enforcing signed tool registries, restricting write access, and requiring explicit user confirmation for any manifest execution. Long term, adopting Cloud Native Security Fabric (CNSF) principles, such as runtime isolation, manifest validation, and AI-aware telemetry, is how organizations contain these risks before they spread. 
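One of those short-term mitigations, a signed tool registry, can be sketched as follows. The HMAC scheme and function names here are illustrative, not Cursor's actual mechanism:

```python
import hashlib
import hmac
import json

def sign_manifest(manifest, key):
    """Produce an HMAC-SHA256 signature over a canonical JSON form."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_manifest(manifest, signature, key):
    """Refuse to auto-run tools unless the manifest signature checks out."""
    return hmac.compare_digest(sign_manifest(manifest, key), signature)
```

An agent would compute the signature when the user approves a manifest and re-verify it before every launch; an attacker-injected entry in ~/.cursor/mcp.json would invalidate the signature instead of executing silently.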

When models can act, not just think, untrusted input becomes an attack surface. Treat every token like code. 

Shadow AI: The Unseen Risk 

Shadow AI isn’t malicious; it’s human. Teams chasing productivity invite “AI note-takers,” install Slack summarizers, or connect new apps that just need “one token.” No one asks where the data goes. 

These tools often: 

  • Store logs or embeddings in vendor-managed storage. 

  • Use long-lived API tokens. 

  • Expose local connectors without TLS. 

  • Lack identity mapping or auditing. 

By the time security teams notice, those apps have already touched critical data. Visibility becomes cleanup. 

The Same Problems, New Wrapping 

Old Cloud Habit → Modern AI / MCP Equivalent 

  • Public S3 buckets with sensitive data → MCP servers bound to 0.0.0.0 

  • Flat networks, no segmentation → Agents with unrestricted access 

  • IAM chaos → Hardcoded tokens and excessive privileges 

  • Shadow IT → Shadow AI 

  • "Lift and shift" everything → "Prompt and deploy" everything 

Different technology, but the same outcome. 

Expanding the Threat Model 

While the focus so far has been on misconfiguration and poor segmentation, attackers are exploiting the full AI attack surface. Security teams must also prepare for: 

  • Data poisoning targeting training pipelines, where malicious samples alter model behavior or leak sensitive information. 

  • Model extraction and IP theft, where attackers replicate or fine-tune models using stolen queries or outputs. 

  • Supply chain compromise in AI tool dependencies, allowing adversaries to inject malicious libraries or MCP plugins. 

  • Insider misuse, where autonomous agents amplify the impact of a single compromised or careless user. 

In practical terms, AI and MCP systems expose unique risks: an exposed RAG pipeline can leak gigabytes of indexed data in minutes, and model extraction attacks can approximate a proprietary model’s behavior in hours. Exposure timelines are shrinking as automation accelerates both discovery and exploitation. 

To counter this, security must combine runtime visibility with proactive network control and behavioral telemetry. That’s where the next section comes in. 

Aviatrix Cloud Native Security Fabric (CNSF): Precision, Visibility, and Network-First Defense 

Aviatrix Cloud Native Security Fabric (CNSF) provides what most AI security strategies lack: network-layer control, visibility, and enforcement. AI systems evolve quickly, but their data still flows through infrastructure that can be segmented, inspected, and governed. CNSF turns network awareness into active defense. 

Control the Flows

CNSF applies fine-grained, context-aware network policies across multicloud environments. Instead of flat connectivity or implicit trust, it builds adaptive microsegments for each AI workload. Policies evolve in real time based on identity, purpose, and data classification. 

Key outcomes:

  • Enforce east-west and north-south boundaries for MCP traffic. 

  • Contain agent-to-agent communication based on identity and purpose. 

  • Deny unapproved external LLM access while allowing approved connectors. 

Gain Visibility

You can’t secure what you can’t see. CNSF gives unified visibility into AI agent communications, data movement, and policy enforcement across clouds. 

Capabilities:

  • Real-time visualization of AI and MCP network traffic. 

  • Behavioral baselining and anomaly detection for agent activity. 

  • Centralized telemetry integrated with SIEM and XDR platforms. 

Reduce the Attack Surface

CNSF automates core network security controls: segmentation, least privilege, encryption, and route inspection. Every connection is intentional and auditable. 

Results: 

  • MCP servers isolated from public access. 

  • Shadow AI traffic detected and contained. 

  • Runtime data movement visible, controlled, and enforceable. 

Why It Matters

Aviatrix CNSF doesn’t slow innovation; instead, it enables it securely. By protecting the connective layer between agents, data, and clouds, CNSF helps organizations innovate safely and at scale. 

Closing: The Operational Shift

Let’s stop pretending this time will be different. We’ve seen this cycle before: hype, shortcuts, fallout. The tech evolves, but the habits remain. 

AI moves faster and with fewer guardrails. The machines run at the speed we let them, and right now, we’re flooring it without brakes. You can hope this time will be different, or you can make it different: map your flows, lock your paths, and segment your networks. Reduce your attack surface before someone uses it against you. 

When the next wave hits, it’ll be inaction, not ignorance, that costs you.

 

Explore how Aviatrix CNSF enforces zero trust for AI workloads. 

Schedule a demo to see how CNSF uses network segmentation, High Performance Encryption, and centralized visibility to protect data.

Matt Snyder

Principal Engineer/Lead - Detection and Response, Aviatrix, Inc.

Matt leads Detection & Response efforts at Aviatrix, working closely with internal security teams and external partners to identify, investigate, and respond to potential threats. His role spans strategic oversight and hands-on execution to ensure a strong security posture across complex, distributed environments.
