
Executive Summary

Cloud security failures are rarely the result of perimeter exposure alone. In Kubernetes environments, the true blast radius is defined by east–west trust between workloads. In this proof of concept, we show how default Kubernetes networking and identity behavior allow lateral movement and service account token misuse without exploits, privilege escalation, or control plane access.

Through a hands-on, reproducible lab, this article shows how a low-value workload can discover and access internal services and impersonate trusted callers using ambient identity, and how a single containment control immediately collapses the entire attack chain.

The Anatomy of East–West Abuse in Kubernetes Environments

Focus areas: Kubernetes east–west traffic, internal lateral movement, workload-to-workload trust, and identity misuse.

Why This Matters

Cloud-native architectures optimize for connectivity, speed, and resilience. In doing so, they frequently inherit an implicit assumption: that traffic inside the environment is trustworthy.

This assumption fails in modern Kubernetes-based workloads, where east–west communication paths routinely enable silent lateral movement using only legitimate mechanisms.

What follows is a command-level walkthrough showing how a single compromised workload can expand access horizontally across a cluster without exploiting vulnerabilities, escalating privileges, or triggering traditional security alerts. A real-world failure pattern illustrates how this plays out in practice, followed by concrete recommendations focused on deterministic containment rather than detection-first defense.

The Foundational Assumption: Internal Traffic Is Safe

Internal network location is widely treated as a proxy for trust.

Most cloud environments implicitly trust east–west traffic. Once a request originates from inside, it is often treated as legitimate by default. This assumption is reinforced by:

  • Flat virtual networking

  • Permissive default Kubernetes connectivity

  • Internal APIs designed around location-based trust

  • Service accounts treated as ambient identity

  • Security tooling biased toward ingress and control plane visibility

While perimeter defenses have matured, internal trust boundaries remain loosely defined or entirely absent.

What East–West Traffic Really Looks Like in Kubernetes

Kubernetes enables connectivity by default but does not encode intent.

In a default Kubernetes deployment:

  • Pods can communicate freely across namespaces

  • Services abstract away pod-level identity

  • DNS exposes internal application topology

  • Network connectivity substitutes for authorization

Kubernetes defines how workloads connect, not whether they should. Connectivity exists by default. Intent does not.

This gap is architectural. There is no native enforcement point between service discovery and service access that validates whether a given workload should be communicating with another.
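While no such enforcement point exists by default, NetworkPolicy can encode intent explicitly. As a minimal sketch (the pod labels and policy name are illustrative, not part of the lab), the following policy allows ingress to orders-api only from pods labeled app: checkout, making every other east–west path to it invalid by construction:

```yaml
# Hypothetical example: permit ingress to orders-api only from checkout pods.
# The labels (app: orders-api, app: checkout) are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-api-allow-checkout
spec:
  podSelector:
    matchLabels:
      app: orders-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: checkout
```

Because the policy selects the destination rather than the callers, the allowed relationship is declared once and enforced regardless of how many workloads exist in the cluster.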

Attack Walkthrough: East–West Abuse Without Exploits

This section provides a Kubernetes lateral movement walkthrough using default networking and service account behavior.

The following walkthrough traces the attacker’s actions step by step, highlighting where trust assumptions replace enforcement.

Scope and Assumptions

This walkthrough models a realistic production environment:

  • Kubernetes cluster in a cloud VPC

  • Flat pod-to-pod networking

  • Internal services exposed via Service objects

  • Default service account token mounting

  • No enforced east–west segmentation

No kernel exploits, container escapes, control-plane compromise, or privilege escalation are used. Every step relies on intended platform behavior.

Threat Model and Non-Goals

This analysis focuses on the most common and operationally realistic failure mode in Kubernetes environments: compromise of a single workload followed by horizontal expansion.

Out of scope for this walkthrough:

  • Control plane compromise

  • Cloud account takeover

  • Kernel-level exploits

  • Container escape techniques

  • Supply-chain poisoning or image backdoors

These scenarios are serious but comparatively rare. The goal here is to model what happens when a routine application compromise occurs inside a production cluster and encounters unconstrained east–west trust.

East–West Trust Failures in Kubernetes

Reference Architecture (Conceptual)

The reference architecture diagram (not reproduced here) depicts the environment built in the lab and the trust relationships being exercised. Before containment, all traffic is permitted by default: every arrow in the diagram exists in Kubernetes unless explicitly restricted. After containment, a default-deny ingress policy is enforced and those paths are removed.

This document combines a technical analysis of east–west trust failures in Kubernetes with a fully reproducible, step-by-step lab that demonstrates the behavior end to end. The sections that follow move from analysis into hands-on validation that reproduces these behaviors using default Kubernetes primitives.

East–West Trust Lab: Kubernetes Identity Misuse

A hands-on, reproducible lab demonstrating how default Kubernetes networking and identity behaviors enable east–west lateral movement and service account token misuse, and how containment collapses the attack instantly.

This lab is designed to accompany the blog post:

"When Cloud Workloads Trust Too Much"

Why This Failure Is Normal

Nothing demonstrated in this article or lab relies on obscure misconfiguration, vulnerable software, or elevated privileges. Every behavior emerges from common and widely accepted architectural patterns: internal-only services, trusted east–west networks, and bearer-token authentication without intent validation.

These patterns exist because they are convenient, scalable, and often encouraged by default platform behavior. The risk is not that teams configure Kubernetes incorrectly, but that they rely on implicit trust boundaries that do not actually exist.

Lab Goals

By completing this lab, you will:

  • Observe default east–west connectivity in Kubernetes

  • Discover internal services through Kubernetes DNS

  • Abuse implicit trust using a valid service account token

  • Demonstrate identity misuse without exploits or privilege escalation

  • Apply a single containment control and break the entire attack chain

Environment Requirements

  • macOS

  • Docker Desktop with Kubernetes enabled

  • kubectl installed and configured for the Docker Desktop cluster

  • Ability to pull container images from Docker Hub

Before starting, verify Docker can pull images:

docker pull alpine

Lab Walkthrough

This lab is intentionally linear. Each step includes the exact YAML required so the behavior can be reproduced without jumping between sections.

Step 1 — Namespace Isolation

Create an isolated namespace for the lab.

apiVersion: v1
kind: Namespace
metadata:
  name: eastwest-lab

Apply and scope kubectl:

kubectl apply -f namespace.yaml
kubectl config set-context --current --namespace=eastwest-lab
kubectl get pods

Expected output:

No resources found in eastwest-lab namespace.

Step 2 — Deploy Internal East–West Services

Deploy three internal services that represent trusted east–west APIs. These services are ClusterIP only and have no authentication.

orders-api

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders
          image: hashicorp/http-echo
          args: ["-text=orders service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 8080
      targetPort: 5678

Apply:

kubectl apply -f orders.yaml 

payments-api

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments
          image: hashicorp/http-echo
          args: ["-text=payments service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api
spec:
  selector:
    app: payments-api
  ports:
    - port: 8080
      targetPort: 5678

Apply:

kubectl apply -f payments.yaml

auth-api

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
    spec:
      containers:
        - name: auth
          image: hashicorp/http-echo
          args: ["-text=auth service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  selector:
    app: auth-api
  ports:
    - port: 8080
      targetPort: 5678

Apply and verify:

kubectl apply -f auth.yaml
kubectl get pods
kubectl get svc

Step 3 — Deploy a Low-Value Worker Pod

Deploy a generic worker pod that represents a low-importance internal workload. It uses the default service account and exposes no services.

apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: alpine
      command:
        - sh
        - -c
        - |
          apk add --no-cache curl bind-tools &&
          sleep 3600

Apply:

kubectl apply -f worker.yaml
kubectl get pods

Step 4 — East–West Discovery and Access

This step corresponds directly to the trust relationships shown in the reference architecture above. At this point in the lab, no containment or authorization boundaries exist between workloads.

Enter the worker pod:

kubectl exec -it worker -- sh 

Discover internal services via DNS:

nslookup orders-api
nslookup payments-api
nslookup auth-api

Access each internal service directly:

curl http://orders-api:8080
curl http://payments-api:8080
curl http://auth-api:8080

This demonstrates unrestricted east–west connectivity by default.

Step 5 — Identity Exposure and Token Misuse

Deploy an internal API that accepts any bearer token and performs no authorization checks.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: internal-api
  template:
    metadata:
      labels:
        app: internal-api
    spec:
      containers:
        - name: api
          image: python:3.11-slim
          command: ["sh", "-c"]
          args:
            - |
              pip install flask && \
              python - <<'PY'
              from flask import Flask, request
              app = Flask(__name__)

              @app.route("/internal/admin")
              def admin():
                  auth = request.headers.get("Authorization")
                  if auth:
                      return "authorized via token\n", 200
                  return "missing token\n", 401

              app.run(host="0.0.0.0", port=8080)
              PY
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: internal-api
spec:
  selector:
    app: internal-api
  ports:
    - port: 8080
      targetPort: 8080

Apply:

kubectl apply -f internal-api.yaml
kubectl get pods

From the worker pod:

curl http://internal-api:8080/internal/admin
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -H "Authorization: Bearer $TOKEN" http://internal-api:8080/internal/admin

Expected result:

authorized via token

Kubernetes service account tokens assert identity, not intent. Without explicit authorization checks, identity alone becomes a universal skeleton key for any service that equates internal with trusted.
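To see what such a token actually asserts, decode its JWT payload. The sketch below builds a sample token with illustrative claims (a real token is mounted at /var/run/secrets/kubernetes.io/serviceaccount/token; its exact claims vary by cluster) and decodes the middle segment. Only identity fields such as sub and aud appear; nothing in the token expresses which services the workload is meant to call.

```shell
# Build a sample JWT-shaped token with illustrative claims. Assumption: real
# service account tokens carry similar sub/aud fields; no signature is
# verified here, this only inspects the payload.
PAYLOAD_JSON='{"sub":"system:serviceaccount:eastwest-lab:default","aud":["https://kubernetes.default.svc"]}'
SEG=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n=' | tr '+/' '-_')
TOKEN="header.${SEG}.signature"

# Decode the payload: take the middle dot-separated segment, undo the
# URL-safe alphabet, restore the stripped base64 padding, then decode.
P=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#P} % 4 )) -ne 0 ]; do P="${P}="; done
printf '%s' "$P" | base64 -d
echo
```

Against a real token, the same decode step shows who the caller is, never whether the call should happen, which is exactly the gap the internal-api service fails to close.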

Step 6 — Containment and Attack Collapse

Apply a default-deny NetworkPolicy.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Apply and re-test:

kubectl apply -f default-deny.yaml
kubectl exec -it worker -- sh

# Inside the worker pod (a fresh shell, so re-read the token first):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -H "Authorization: Bearer $TOKEN" http://internal-api:8080/internal/admin

The request now fails: the default-deny policy blocks all ingress to internal-api, collapsing the attack chain even though the token remains valid.

Failure Case Study: East–West Trust as an Incident Multiplier

A cloud-native SaaS provider experienced a security incident following compromise of a non-critical Kubernetes workload. The initial foothold was unremarkable: application-level remote code execution in an internal batch-processing service.

What followed was not a traditional escalation. Instead, the attacker expanded access horizontally using legitimate east–west service calls authenticated with valid service account tokens. Internal APIs trusted requests originating from inside the cluster and accepted any valid cluster-issued identity.

No perimeter defenses failed. No vulnerabilities were exploited beyond the initial foothold. Detection did not occur through security tooling, but through secondary application anomalies as unauthorized internal actions accumulated.

Incident response was delayed by uncertainty around service dependencies. Responders hesitated to apply broad containment controls due to fear of disrupting production workloads. By the time segmentation was enforced, the blast radius had already expanded across multiple internal systems.

The root cause was not lack of tooling, but implicit trust combined with the absence of enforceable service-to-service intent.

Why Traditional Detection Fails for Kubernetes East–West Traffic

Visibility without intent produces ambiguity, not action.

In east–west attack paths, defenders are rarely blind. They are uncertain.

Detection Signals (Why They Don’t Save You)

Most of the activity demonstrated in this article produces signals that are technically observable but operationally inconclusive:

  • VPC flow logs show permitted internal connections without workload intent

  • Kubernetes audit logs capture control-plane activity only

  • Runtime security tools see legitimate process execution

  • IDS observes valid APIs called with valid tokens

These signals are often reviewed after an incident. In real time, they resemble normal service behavior. Without enforced intent, detection produces debate rather than response.

Detection Ideas (Generic Sigma-Style Examples)

The examples below are intentionally generic. They are not production-ready rules, but thought starters designed to help teams reason about where detection could exist even if containment is absent.

Unusual Service Account Token Usage Across Services

title: Kubernetes Service Account Token Used Across Multiple Services
description: Detects a single service account token being used to access multiple internal services within a short time window.
logsource:
  category: application
  product: kubernetes
detection:
  selection:
    http.request.headers.authorization: "Bearer *"
  condition: selection | count(http.target.service) > 3 within 5m
level: medium

Why it helps:

  • Highlights horizontal expansion without privilege escalation

  • Often visible only after correlation

Why it falls short:

  • Legitimate batch jobs and service meshes can look similar

  • Requires strong service attribution in logs

New East–West Communication Path

title: New East-West Service Communication Path
description: Detects a workload initiating traffic to an internal service it has not previously contacted.
logsource:
  category: network
  product: kubernetes
detection:
  selection:
    network.direction: internal
  condition: selection and not previously_seen(source.pod, destination.service)
level: high

Why it helps:

  • East–west paths tend to be stable over time

  • New paths are often meaningful

Why it falls short:

  • Requires historical baselining

  • Breaks down in highly dynamic environments
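The previously_seen() predicate above implies a baseline of historical paths. A minimal flat-file sketch of that idea (the check_path helper and the pod/service names are illustrative, not a real detection pipeline):

```shell
# Flat-file baseline of observed "source pod -> destination service" pairs.
BASELINE=$(mktemp)

# Flag a pair the first time it appears, then record it so repeat
# sightings are treated as known.
check_path() {
  pair="$1 -> $2"
  if grep -qxF "$pair" "$BASELINE"; then
    echo "known path: $pair"
  else
    echo "NEW east-west path: $pair"
    echo "$pair" >> "$BASELINE"
  fi
}

check_path worker orders-api   # first sighting: flagged as NEW
check_path worker orders-api   # baseline hit: reported as known
```

Even this toy version exposes the operational cost: the baseline must be seeded during a known-good period, and every legitimate deployment change mutates it.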

Internal API Access Using Default Service Account

title: Internal API Access Using Default Service Account
description: Detects access to sensitive internal APIs using the default Kubernetes service account.
logsource:
  category: application
  product: kubernetes
detection:
  selection:
    kubernetes.serviceaccount.name: default
    http.target.path: "/internal/*"
  condition: selection
level: high

Why it helps:

  • Default service account usage is often unintended

  • Surfaces weak identity boundaries

Why it falls short:

  • Requires consistent identity propagation

  • Does not prevent misuse on its own

These examples reinforce the central theme of this article: detection can highlight symptoms, but without enforceable intent and containment, responders are left interpreting signals instead of stopping impact.

Red Team and Blue Team Perspectives

Red Team View

From an attacker’s perspective, east–west trust dramatically lowers operational cost. Service discovery, authentication artifacts, and internal APIs are all provided by the platform. Movement appears indistinguishable from normal application behavior and requires no exploits or noisy techniques.

The path of least resistance is lateral expansion, not privilege escalation.

Blue Team View

From a defensive standpoint, the activity blends into expected traffic. Network flows are legitimate. Tokens are valid. Control plane logs are quiet. Without deterministic containment controls, defenders are left debating intent rather than taking action.

Detection alone produces ambiguity. Containment produces clarity.

Why Containment Wins Operationally in Kubernetes Security

Detection-based response assumes time: time to investigate, time to correlate signals, time to gain confidence. East–west expansion happens faster.

Containment changes the operating model:

  • Time-to-detect becomes less critical than time-to-isolate

  • Blast radius is mechanically limited

  • Responders act on policy, not interpretation

A contained compromise remains an incident. An unconstrained one becomes a crisis.

What This Does Not Solve

This lab intentionally focuses on containment to demonstrate how quickly blast radius can be reduced. It does not claim that NetworkPolicy alone is sufficient for securing Kubernetes environments.

Containment does not replace:

  • Proper service-to-service authorization

  • Token audience and scope validation

  • Application-layer authentication

  • Runtime detection and response

Instead, containment limits how far failures propagate when those controls are missing or bypassed.
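For the token-audience item above, Kubernetes supports projected service account tokens bound to a specific audience and lifetime, so a token stolen from one workload is rejected by services expecting a different audience. A minimal sketch, with the audience string "internal-api" as an illustrative assumption that the receiving service would be expected to validate:

```yaml
# Sketch: pod with an audience-scoped, short-lived projected token instead
# of the legacy auto-mounted one. The audience value is hypothetical; a
# consuming service would verify it (for example via the TokenReview API).
apiVersion: v1
kind: Pod
metadata:
  name: worker-scoped
spec:
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: internal-api-token
              audience: internal-api
              expirationSeconds: 600
```

With this shape, possession of the token is no longer sufficient; the token must also match the audience the target service checks for, which reintroduces intent into the identity layer.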

Key Takeaways for Kubernetes and Cloud Workload Security

  • Kubernetes defaults prioritize connectivity over intent

  • Service account tokens assert identity, not authorization

  • East–west trust defines blast radius

  • Detection alone does not prevent lateral movement

  • Enforced containment changes outcomes instantly

Cleanup (Kubernetes Lab Teardown)

To remove all lab resources:

kubectl delete namespace eastwest-lab 

Notes for Practitioners

  • The APIs in this lab are intentionally insecure for demonstration purposes

  • Real environments often fail in more subtle ways

  • NetworkPolicy is a mechanism, not a complete strategy

  • The core lesson is enforcing service-to-service intent

YAML Manifests

The following manifests are used in this lab. They are intentionally minimal and rely on default Kubernetes behavior to demonstrate the security properties discussed in the article.

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: eastwest-lab

orders.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders
          image: hashicorp/http-echo
          args: ["-text=orders service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 8080
      targetPort: 5678

payments.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments
          image: hashicorp/http-echo
          args: ["-text=payments service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api
spec:
  selector:
    app: payments-api
  ports:
    - port: 8080
      targetPort: 5678

auth.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
    spec:
      containers:
        - name: auth
          image: hashicorp/http-echo
          args: ["-text=auth service"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  selector:
    app: auth-api
  ports:
    - port: 8080
      targetPort: 5678

worker.yaml

apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: alpine
      command:
        - sh
        - -c
        - |
          apk add --no-cache curl bind-tools &&
          sleep 3600

internal-api.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: internal-api
  template:
    metadata:
      labels:
        app: internal-api
    spec:
      containers:
        - name: api
          image: python:3.11-slim
          command: ["sh", "-c"]
          args:
            - |
              pip install flask && \
              python - <<'PY'
              from flask import Flask, request
              app = Flask(__name__)

              @app.route("/internal/admin")
              def admin():
                  auth = request.headers.get("Authorization")
                  if auth:
                      return "authorized via token\n", 200
                  return "missing token\n", 401

              app.run(host="0.0.0.0", port=8080)
              PY
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: internal-api
spec:
  selector:
    app: internal-api
  ports:
    - port: 8080
      targetPort: 8080

default-deny.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Appendix: Compliance and Control Mapping for Kubernetes Security (Optional Reference)

The controls demonstrated in this article support common regulatory and security frameworks, though they are not driven by compliance requirements.

  • NIST SP 800-53: AC-4 (Information Flow Enforcement)

  • CIS Kubernetes Benchmark: Network segmentation and least privilege

  • PCI DSS: Restrict internal traffic to required services only

  • Zero Trust Architecture: Explicit trust relationships and continuous enforcement

Compliance alignment is a byproduct of good architecture, not the objective.

Matt Snyder

Principal Engineer/Lead - Detection and Response, Aviatrix, Inc.

Matt leads Detection & Response efforts at Aviatrix, working closely with internal security teams and external partners to identify, investigate, and respond to potential threats. His role spans strategic oversight and hands-on execution to ensure a strong security posture across complex, distributed environments.
