Software Supply Chain Security — Deep Dive
CIPHER Training Module
Last updated: 2026-03-14
Table of Contents
- Threat Landscape: Notable Supply Chain Attacks
- Attack Taxonomies
- SLSA Framework (v1.0)
- Sigstore Ecosystem
- SBOM Formats and Tooling
- Vulnerability Scanning
- Secret Scanning
- CI/CD Pipeline Security
- Container Supply Chain
- Dependency Analysis Platforms
- OpenSSF Scorecard
- Reproducible Builds
- Binary Authorization
- Ecosystem-Specific Risks
- Implementation Playbook
1. Threat Landscape: Notable Supply Chain Attacks
SolarWinds Orion (2020) — T1195.002
Vector: Compromised build system. UNC2452/Nobelium inserted the SUNBURST backdoor into SolarWinds Orion build pipeline. The malicious code was compiled into legitimate, digitally signed updates distributed to ~18,000 organizations including US government agencies.
Key lessons:
- Build system integrity is paramount — signing does not imply safety if the build itself is compromised
- SLSA L3 isolation would have made this significantly harder (build step isolation, unforgeable provenance)
- Detection required behavioral analysis of DNS beaconing to avsvmcloud[.]com, not signature-based scanning
MITRE: T1195.002 (Supply Chain Compromise: Compromise Software Supply Chain), T1071.004 (DNS C2)
Codecov Bash Uploader (2021) — T1195.002
Vector: Compromised CI/CD script. Attackers modified the Codecov bash uploader script hosted on codecov.io to exfiltrate environment variables (including CI secrets, tokens, keys) from customer CI pipelines.
Key lessons:
- Pinning scripts by hash, not URL, prevents this class of attack
- CI environment variables contain crown jewels — treat CI runners as high-value targets
- Egress monitoring on CI runners would have detected the exfiltration
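The hash-pinning lesson can be demonstrated locally. A minimal sketch of the pattern (the script content is a stand-in for a downloaded uploader; in a real pipeline `expected` is a constant recorded when the script was reviewed, not computed on the fly):

```shell
#!/bin/sh
# Stand-in for fetching the vendor script — we create the file locally instead.
printf 'echo hello from uploader\n' > uploader.sh
# In a real pipeline, `expected` is a hardcoded known-good hash from review time.
expected=$(sha256sum uploader.sh | cut -d' ' -f1)
actual=$(sha256sum uploader.sh | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
  sh uploader.sh
else
  echo "hash mismatch: refusing to execute" >&2
  exit 1
fi
```

Had Codecov customers pinned the uploader this way, the attacker's modified script would have failed the comparison and never executed.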
ua-parser-js (2021) — T1195.001
Vector: Account takeover of npm maintainer. Attacker published malicious versions (0.7.29, 0.8.0, 1.0.0) containing a cryptominer and credential stealer. Package had ~8M weekly downloads.
Key lessons:
- npm accounts without 2FA are single points of failure for the entire downstream ecosystem
- Automated dependency update tools (Dependabot, Renovate) can auto-merge malicious versions
- Lock files and version pinning provide the first line of defense
event-stream (2018) — T1199
Vector: Social engineering of maintainer. Attacker gained commit access to event-stream by offering to maintain the abandoned package, then added flatmap-stream as a dependency containing an encrypted payload targeting the Copay Bitcoin wallet.
Key lessons:
- Maintainer succession is a supply chain risk — there is no vetting process in most ecosystems
- Obfuscated/encrypted payloads in dependencies should trigger review
- Transitive dependency attacks exploit the trust chain
Log4Shell / CVE-2021-44228 (2021) — T1190
Vector: Vulnerability in ubiquitous logging library. JNDI lookup feature in Log4j2 allowed RCE via crafted log messages. Affected virtually every Java application.
Key lessons:
- SBOM generation enables rapid identification of affected systems during zero-day events
- Transitive dependency visibility is critical — most organizations did not know they used Log4j
- The vulnerability existed for years in a widely-used, under-resourced open source project
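The SBOM-driven response can be sketched concretely: during a zero-day, teams with per-service SBOMs reduce "are we affected?" to a search. A toy example (the SBOM files here are hand-written stand-ins, not real Syft output):

```shell
#!/bin/sh
# Toy inventory query: given one CycloneDX-style SBOM per service,
# list every service shipping log4j-core.
cat > payments.cdx.json <<'EOF'
{"components":[{"name":"log4j-core","version":"2.14.1"},{"name":"guava","version":"31.0-jre"}]}
EOF
cat > billing.cdx.json <<'EOF'
{"components":[{"name":"slf4j-api","version":"1.7.36"}]}
EOF
grep -l '"name":"log4j-core"' ./*.cdx.json
```

In practice the same query would run against a central SBOM store with a structured tool rather than `grep`, but the principle — search metadata, not production hosts — is what made SBOM-equipped organizations fast in December 2021.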
3CX Desktop App (2023) — T1195.002
Vector: Cascading supply chain compromise. North Korean actors (Lazarus/UNC4736) first compromised Trading Technologies' X_TRADER software, used stolen credentials to access 3CX's build environment, then trojanized the 3CX desktop application — a supply chain attack originating from a prior supply chain attack.
Key lessons:
- Supply chain attacks can chain — one compromised vendor becomes the vector into the next
- Code signing certificates were legitimately obtained, making detection harder
- Build environment credential rotation and isolation are essential
xz-utils / CVE-2024-3094 (2024) — T1195.001
Vector: Long-term social engineering and malicious maintainer. "Jia Tan" spent ~2 years gaining trust in the xz-utils project, eventually embedding a sophisticated backdoor in the build system (not the source repo) that targeted OpenSSH via systemd's dependency on liblzma. The backdoor modified the SSH authentication flow to enable unauthorized access.
Key lessons:
- Build-time-only backdoors evade source code review — the malicious payload was injected via test fixture files processed during `make`
- Reproducible builds would have detected the discrepancy between source and binary
- Single-maintainer critical infrastructure is a systemic risk
- Discovered by accident (Andres Freund noticed SSH latency) — not by any security tool
2. Attack Taxonomies
2.1 Dependency Confusion
Mechanism: Exploit package manager resolution logic. When an organization uses private packages with names not registered on public registries, an attacker registers the same name on the public registry with a higher version number. Package managers default to the higher version from the public source.
Affected ecosystems: npm, PyPI, RubyGems, NuGet, Maven (with specific configurations)
Mitigations:
- Configure scoped registries (`@company/package` in npm)
- Use explicit registry URLs in pip/poetry/uv configuration
- Claim namespace prefixes on public registries
- Implement allow-listing at the registry proxy level (Artifactory, Nexus)
- Pin dependencies with integrity hashes in lock files
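For npm, the scoped-registry mitigation is a two-line `.npmrc` (the registry URL is illustrative):

```
@company:registry=https://npm.internal.example.com/
always-auth=true
```

With this in place, any `@company/*` install resolves only against the internal registry, so a public package registered under the same name is never consulted.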
2.2 Typosquatting
Mechanism: Register packages with names similar to popular packages (e.g., colourama vs colorama, python-dateutils vs python-dateutil). Users who mistype the package name install the malicious version.
Scale: Thousands of typosquat packages discovered and removed from npm and PyPI annually.
Mitigations:
- Copy-paste package names from official documentation
- Use lock files and review new dependency additions in PRs
- Socket.dev and similar tools detect typosquatting patterns
- Registry-level defenses (npm's similarity detection, PyPI's name normalization)
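An allow-listing gate can be sketched in a few lines of shell (package names and file paths are illustrative; real gates sit in a registry proxy or a pre-install hook):

```shell
#!/bin/sh
# Toy pre-install gate: fail closed on any package name not on a curated allowlist.
printf 'colorama\nrequests\n' > allowlist.txt
check() {
  if grep -qFx "$1" allowlist.txt; then
    echo "ok: $1"
  else
    echo "BLOCKED: $1"
  fi
}
check colorama    # legitimate
check colourama   # typosquat: one letter off, not on the list
```

The exact-match (`-Fx`) comparison is the point: a typosquat is, by definition, not the string on the list.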
2.3 Poisoned Pipeline Execution (PPE)
Mechanism: Attacker modifies CI/CD pipeline configuration files (.github/workflows/, .gitlab-ci.yml, Jenkinsfile) via pull requests or compromised branches to execute arbitrary code in the CI environment.
Variants:
- Direct PPE (D-PPE): Modify pipeline config in a branch that triggers CI
- Indirect PPE (I-PPE): Modify files consumed by pipeline steps (Makefiles, test scripts, build configs) without touching pipeline config itself
- Public PPE (3PE): Fork a public repo and submit a PR that triggers CI with modified pipeline
Mitigations:
- Require approval for CI runs on external PRs (`pull_request_target` vs `pull_request` in GitHub Actions)
- Pin actions by SHA, not tag
- Use `permissions` blocks with least privilege in GitHub Actions workflows
- Restrict `GITHUB_TOKEN` permissions
- Use Harden-Runner for egress monitoring
- Never pass secrets to PR-triggered workflows from forks
2.4 Compromised Build Infrastructure
Mechanism: Directly compromise the build system, CI/CD server, or artifact repository to inject malicious code during the build process, post-build, or during distribution.
Examples: SolarWinds (build system), Codecov (distribution script), 3CX (build environment credentials)
Mitigations:
- SLSA L3 build isolation
- Hermetic builds (no network access during build)
- Provenance verification
- Reproducible builds for independent verification
- Separate signing infrastructure from build infrastructure
3. SLSA Framework (v1.0)
SLSA (Supply-chain Levels for Software Artifacts) is "a specification for describing and incrementally improving supply chain security, established by industry consensus." It defines four build track levels with increasing security guarantees.
3.1 SLSA Levels
| Level | Name | Key Requirement | Security Guarantee |
|---|---|---|---|
| L0 | No guarantees | None | Baseline, pre-SLSA state |
| L1 | Provenance exists | Documented build process; provenance describing build platform, process, and top-level inputs | Prevents unintentional release mistakes; enables software inventory and debugging |
| L2 | Hosted build platform | Builds on dedicated infrastructure; digitally signed provenance | Tamper detection post-build; deters adversaries with audit trail; limits attack surface to auditable platforms |
| L3 | Hardened builds | Strong isolation between builds; signing material inaccessible to build steps | Prevents build-time tampering by insiders or compromised credentials; ensures official source and process compliance |
3.2 Requirements by Level
Producer requirements (all levels):
- Select a build platform capable of the desired SLSA level
- Follow a consistent, documented build process
- Distribute provenance to artifact consumers
Build platform — provenance generation:
| Level | Requirement | Detail |
|---|---|---|
| L1 | Exists | Provenance identifies output by cryptographic digest |
| L2 | Authentic | Provenance includes valid digital signatures verifiable by consumers |
| L3 | Unforgeable | All provenance fields generated/verified by control plane; signing secrets inaccessible to user-defined build steps |
Build platform — isolation:
| Level | Requirement | Detail |
|---|---|---|
| L1-L2 | Hosted | Builds run on shared or dedicated infrastructure, not developer workstations |
| L3 | Isolated | Ephemeral, isolated environments preventing inter-build interference, cache poisoning, concurrent build access, and environment persistence |
3.3 Threat Model
SLSA addresses threats across four categories:
Source threats (not addressed in v1.0):
- (A) Submit unauthorized change
- (B) Compromise source repo
- (C) Build from modified source
Build threats (primary SLSA focus):
- (E) Compromise build process — L2 mitigates via trusted control plane; L3 via hardened isolation
- (F) Upload modified package — L1+ mitigates via provenance requirement and hash verification
- (G) Compromise package registry — partially addressed via provenance verification
- (H) Use compromised package — L2+ via signed provenance; L3 via build isolation
Dependency threats:
- (D) Use compromised dependency — out of scope for v1.0; mitigable through recursive SLSA verification
Verification threats:
- Expectation tampering, forged metadata, hash collisions
3.4 Implementation Guide
Achieving L1 (minimal effort):
# GitHub Actions: use slsa-github-generator
# This generates signed SLSA provenance automatically (the reusable
# workflow below actually meets L3, which satisfies L1 and L2 as well)
- uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
Achieving L2 (moderate effort):
- Migrate builds to hosted CI/CD (GitHub Actions, Cloud Build, etc.)
- Use SLSA provenance generators that produce signed attestations
- Publish provenance alongside artifacts
Achieving L3 (significant effort):
- Use isolated build environments (containerized, ephemeral runners)
- Ensure signing keys are managed by the build platform, not accessible to build steps
- Implement hermetic builds where possible
- Use GitHub Actions reusable workflows with `slsa-github-generator` (designed for L3)
4. Sigstore Ecosystem
Sigstore provides keyless signing, verification, and transparency for software artifacts. The three core components:
4.1 Fulcio — Certificate Authority
Purpose: Free certificate authority issuing short-lived (10-minute) code signing certificates bound to OIDC identities.
How keyless signing works:
- Developer authenticates with an OIDC provider (GitHub, Google, Microsoft)
- Fulcio receives the OIDC token and verifies identity
- Short-lived certificate issued, bound to the authenticated identity (email, workflow identity)
- Certificate used to sign the artifact
- Certificate expires after 10 minutes — no key management burden
Technical details:
- ECDSA P-384 signatures
- Two-level or three-level certificate chains (root -> intermediate -> leaf)
- All certificates published to the Certificate Transparency log at `ctfe.sigstore.dev`
- API accessible via HTTP and gRPC (Protocol Buffers)
- Public instance: 99.5% availability SLO
Security properties:
- No long-lived keys to steal or rotate
- Identity tied to existing authentication infrastructure
- Certificate Transparency enables auditing — monitors can detect unauthorized certificate issuance
- Ephemeral keys eliminate the key management problem entirely
4.2 Rekor — Transparency Log
Purpose: Immutable, tamper-resistant ledger recording signed metadata from software supply chains.
Architecture:
- v1: General Availability, maintenance mode (Trillian-based Merkle tree)
- v2: Active development using tile-based log (Trillian-Tessera) for easier operations
What it stores: Signed metadata enabling maintainers and build systems to record attestations. Consumers query for trust decisions and non-repudiation.
Key capabilities:
- RESTful API for entry creation and validation
- Inclusion proof querying (cryptographic proof that an entry exists in the log)
- Transparency log integrity verification
- Entry retrieval by public key, artifact hash, or log index
- 100KB maximum attestation size (public instance)
- Pluggable type system for custom entry formats
Public instance endpoints (99.5% SLO):
- `/api/v1/log` — log state
- `/api/v1/log/publicKey` — log signing key
- `/api/v1/log/proof` — consistency proofs
- `/api/v1/log/entries` — create/retrieve entries
- `/api/v1/log/entries/retrieve` — search entries
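The inclusion-proof idea behind Rekor can be illustrated with a two-leaf Merkle tree in plain shell. This is a sketch only: real Rekor proofs follow RFC 6962, which adds leaf/node domain-separation prefixes and hashes raw bytes rather than hex strings.

```shell
#!/bin/sh
# Two-leaf Merkle tree: root = H(H(a) || H(b)), with hashes concatenated as hex.
h() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }
leaf_a=$(h "artifact-a")
leaf_b=$(h "artifact-b")
root=$(h "${leaf_a}${leaf_b}")
# A verifier holding only "artifact-a" plus the sibling hash can recompute the root:
recomputed=$(h "$(h "artifact-a")${leaf_b}")
[ "$recomputed" = "$root" ] && echo "inclusion verified"
```

This is why an inclusion proof is small: the verifier needs only the sibling hashes along one path, not the whole log, to confirm an entry is committed under the published root.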
4.3 Cosign — Signing & Verification Tool
Purpose: Container image and artifact signing using Sigstore infrastructure.
Signing methods:
- Keyless (default): Uses Fulcio + Rekor; identity via OIDC
- Key-pair: Traditional `cosign generate-key-pair` flow
- KMS-backed: AWS KMS, GCP KMS, Azure Key Vault, HashiCorp Vault
- Hardware tokens: YubiKey, other PIV-compatible devices
- Bring-your-own PKI: Custom certificate chains
Supported artifacts: Container images, blobs, Tekton bundles, WASM modules, eBPF programs, in-toto attestations (stored in OCI registries)
Signing workflow (keyless):
# Sign a container image (opens browser for OIDC auth)
cosign sign $IMAGE_DIGEST
# In CI/CD (uses workload identity, no browser)
cosign sign $IMAGE_DIGEST # OIDC token from ACTIONS_ID_TOKEN_REQUEST_URL
# Verify with identity constraints
cosign verify $IMAGE \
--certificate-identity=user@example.com \
--certificate-oidc-issuer=https://accounts.google.com
Registry compatibility: ECR, GCP Artifact Registry, Docker Hub, ACR, GitLab, GHCR, Harbor, Quay, and others.
Key constraints: Generates only ECDSA-P256/SHA256 for its own keys; verifies other algorithms.
4.4 Sigstore End-to-End Flow
1. Developer authenticates to an OIDC provider and presents the token to Fulcio
2. Fulcio issues a short-lived certificate bound to that identity and publishes it to the CT log
3. Developer signs the artifact locally with the ephemeral key
4. The signature and certificate are recorded as a Rekor log entry
5. The signature is pushed to the registry alongside the image
6. Consumer pulls the image and signature, checks the Rekor inclusion proof, and verifies the certificate identity against Fulcio's root of trust — VERIFIED
5. SBOM Formats and Tooling
5.1 SBOM Formats
SPDX (Software Package Data Exchange):
- ISO/IEC 5962:2021 international standard
- Maintained by the Linux Foundation
- Formats: JSON, RDF, YAML, tag-value, XML
- Focus: License compliance + security
- Key fields: package name, version, supplier, download location, checksums, license (SPDX identifiers), relationships
CycloneDX:
- OWASP standard, purpose-built for security
- Formats: JSON, XML, Protocol Buffers
- Focus: Security and risk analysis
- Richer vulnerability, service, and composition modeling
- Supports VEX (Vulnerability Exploitability eXchange) inline
- Key fields: components, dependencies, vulnerabilities, services, compositions, formulation
Comparison:
| Feature | SPDX | CycloneDX |
|---|---|---|
| Standards body | ISO/Linux Foundation | OWASP |
| License focus | Strong | Adequate |
| Security focus | Adequate | Strong |
| VEX support | Separate document | Inline |
| Build provenance | Limited | Formulation object |
| US EO 14028 | Referenced | Referenced |
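To make the CycloneDX side of the comparison concrete, here is a minimal document sketch (field values are illustrative; a real SBOM carries many more components and metadata):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.17.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ]
}
```

The `purl` (package URL) is what makes cross-tool matching reliable: scanners key on it rather than on free-text names.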
5.2 Syft — SBOM Generation
Purpose: CLI tool and Go library for generating SBOMs from container images and filesystems.
Capabilities:
- Scans container images (OCI, Docker, Singularity), filesystems, and archives
- Supports dozens of ecosystems: Alpine (apk), Debian (dpkg), RPM, Go, Python, Java, JavaScript, Ruby, Rust, PHP, .NET, and more
- Output formats: CycloneDX (JSON/XML), SPDX (JSON), Syft JSON (native)
- Format conversion between SBOM standards
- Signed SBOM attestations using in-toto specification
Usage:
# Generate SBOM from container image
syft packages alpine:latest -o spdx-json > sbom.spdx.json
# Generate CycloneDX from filesystem
syft packages dir:/app -o cyclonedx-json > sbom.cdx.json
# Attach SBOM as in-toto attestation (with cosign)
cosign attest --predicate sbom.cdx.json --type cyclonedx $IMAGE
Integration with Grype:
# Generate SBOM once, scan repeatedly
syft packages $IMAGE -o json > sbom.json
grype sbom:sbom.json
6. Vulnerability Scanning
6.1 Grype
Purpose: Vulnerability scanner for container images, filesystems, and SBOMs.
Data sources: NVD, OS-specific advisories (Alpine, Debian, Ubuntu, RHEL, Oracle, Amazon Linux), language-specific advisories (RubyGems, npm, PyPI, Maven, Go, Rust, .NET, PHP)
Key features:
- SBOM ingestion — scan a pre-generated Syft SBOM without re-analyzing the artifact
- EPSS scoring for probability-based prioritization
- KEV (Known Exploited Vulnerabilities) identification
- OpenVEX support for filtering false positives
- Multiple output formats (table, JSON, CycloneDX)
Usage:
# Scan container image
grype alpine:latest
# Scan from SBOM
grype sbom:./sbom.spdx.json
# Fail CI if high/critical vulns found
grype $IMAGE --fail-on high
6.2 OSV-Scanner
Purpose: Google's vulnerability scanner using the OSV (Open Source Vulnerabilities) database.
Key differentiators:
- Uses the OSV database — aggregates advisories from GitHub Security Advisories, RustSec, PyPI, Go, and more
- Call analysis: Determines if vulnerable functions are actually invoked, reducing false positives
- Guided remediation (experimental): Suggests upgrade paths based on dependency depth, severity, and ROI
- License scanning using deps.dev data
- Container scanning: Layer-aware analysis (Alpine, Debian, Ubuntu OS packages + language artifacts)
- Offline mode: Download database locally for air-gapped environments
- Vendored C/C++ detection: Identifies embedded C/C++ libraries via commit hashing
Supported ecosystems: C/C++, Dart, Elixir, Go, Java, JavaScript, PHP, Python, R, Ruby, Rust — 11+ ecosystems, 19+ lockfile types
Usage:
# Scan a project directory
osv-scanner -r /path/to/project
# Scan specific lockfile
osv-scanner --lockfile=package-lock.json
# Scan container image
osv-scanner --docker alpine:latest
# Guided remediation
osv-scanner fix --lockfile=package-lock.json
6.3 Scanner Comparison
| Feature | Grype | OSV-Scanner |
|---|---|---|
| Database | NVD + OS + language | OSV.dev aggregate |
| SBOM input | Yes (Syft native) | Yes |
| Call analysis | No | Yes (Go, Rust) |
| Guided remediation | No | Yes (experimental) |
| Container scanning | Yes | Yes (layer-aware) |
| Offline mode | Yes (db download) | Yes |
| License scanning | No | Yes |
| EPSS/KEV | Yes | No |
7. Secret Scanning
7.1 TruffleHog
Purpose: Secret discovery, classification, validation, and analysis across diverse sources.
Four-stage pipeline:
- Discovery: Scans git repos, S3 buckets, GCS, Docker images, filesystems, Jenkins, Elasticsearch, Postman workspaces
- Classification: Recognizes 800+ secret types (AWS, Stripe, Cloudflare, database credentials, SSL keys, etc.)
- Validation: Attempts authentication to confirm if secrets are still active
- Analysis: For ~20 common credential types, enumerates accessible resources, permissions, and creator identity
Usage:
# Scan git repo
trufflehog git https://github.com/org/repo
# Scan GitHub org
trufflehog github --org=myorg
# Scan filesystem
trufflehog filesystem /path/to/code
# Scan S3 bucket
trufflehog s3 --bucket=mybucket
# Scan Docker image
trufflehog docker --image=myimage:latest
# CI/CD: scan only changes in PR branch
trufflehog git file://. --since-commit=HEAD~1 --branch=main --fail
# Only show verified (active) secrets
trufflehog git file://. --only-verified
CI/CD integration:
- Exit codes for pipeline failure on secret detection
- JSON output for automated processing
- Branch-specific scanning for PR workflows
- Installation verification via cosign
Detection priorities:
- Verified: Active credentials confirmed via authentication — immediate remediation
- Unknown: Potential secrets requiring manual review
- Unverified: Pattern matches without confirmation — lowest priority
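The classification stage boils down to pattern recognition over many credential shapes. A toy sketch for one shape, an AWS access key ID (real scanners like TruffleHog match hundreds of types and then attempt live verification; `AKIAIOSFODNN7EXAMPLE` is AWS's documented example key, not a live credential):

```shell
#!/bin/sh
# Toy detector for one credential shape: AWS access key IDs (AKIA + 16 chars).
printf 'aws_access_key_id = AKIAIOSFODNN7EXAMPLE\n' > app.cfg
grep -Eo 'AKIA[0-9A-Z]{16}' app.cfg
```

The gap between this sketch and a real scanner is exactly the validation stage: a regex hit is "Unverified" until an authentication attempt proves the credential is active.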
8. CI/CD Pipeline Security
8.1 GitHub Actions Hardening
Step-Security Harden-Runner
Purpose: EDR for GitHub Actions runners — monitors and controls runner activity in real-time.
Capabilities:
- Network egress monitoring: Tracks outbound connections correlated to specific workflow steps
- File integrity monitoring: Detects unauthorized source code modifications during CI
- Process activity tracking: Captures process names, arguments, execution sequences (enterprise)
Modes:
- Audit (free): Logs all events, creates baselines, alerts on anomalous outbound connections
- Block (enterprise): Enforces domain-based allowlists, prevents unauthorized egress
Integration:
- uses: step-security/harden-runner@v2
with:
egress-policy: audit # or 'block'
allowed-endpoints: >
github.com:443
registry.npmjs.org:443
Real-world detections:
- tj-actions/changed-files compromise (CVE-2025-30066)
- NX Build System malware
- Shai-Hulud supply chain attack in CNCF Backstage
- Anomalous traffic to api.ipify.org across multiple orgs
Scale: 18M+ weekly workflow runs secured across 11,000+ projects (Microsoft, Google, Kubernetes).
GitHub Actions Security Best Practices
1. Pin actions by SHA, not tag:
# BAD — tag can be moved to point to malicious commit
- uses: actions/checkout@v4
# GOOD — immutable reference
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
2. Restrict GITHUB_TOKEN permissions:
permissions:
contents: read # minimum required
packages: write # only if publishing
# All other permissions default to 'none'
3. Avoid pull_request_target with checkout of PR code:
# DANGEROUS — runs PR code with repo secrets
on: pull_request_target
steps:
- uses: actions/checkout@SHA
with:
ref: ${{ github.event.pull_request.head.sha }} # NEVER DO THIS
# SAFE — use pull_request event (no secrets for fork PRs)
on: pull_request
4. Restrict self-hosted runner exposure:
- Never use self-hosted runners for public repos (any fork PR can execute code on your infrastructure)
- Use ephemeral runners (auto-delete after job completion)
- Network-segment runners away from production
5. Use OpenID Connect for cloud access:
permissions:
id-token: write
contents: read
steps:
- uses: aws-actions/configure-aws-credentials@SHA
with:
role-to-assume: arn:aws:iam::ACCOUNT:role/github-actions
aws-region: us-east-1
6. Audit third-party actions:
- Review action source before use
- Monitor for action compromises (subscribe to GitHub security advisories)
- Consider vendoring critical actions into your own org
- Use Scorecard's `Pinned-Dependencies` check
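Pin-by-SHA can be derived mechanically: resolve each tag to its commit SHA once, and record the tag in a comment. A local sketch of the mechanics (the repo, committer identity, and action name are all illustrative; for a real action you would run `git ls-remote https://github.com/OWNER/ACTION TAG` instead):

```shell
#!/bin/sh
# Resolve a tag to its immutable commit SHA in a throwaway repo — the same
# mechanics behind pinning third-party actions by SHA instead of by tag.
git init -q pin-demo
git -C pin-demo -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "release v1"
git -C pin-demo tag v1
sha=$(git -C pin-demo rev-parse 'v1^{commit}')
echo "- uses: org/action@${sha} # v1"
```

Tags are mutable references; the 40-character SHA is not, which is the entire security argument for this practice.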
9. Container Supply Chain
9.1 Threat Surface
Base Image ──> Build Process ──> Registry ──> Runtime

- Base Image: vulnerable base images, malicious layers
- Build Process: build injection via Dockerfile, dependency confusion, secret leaks in image layers
- Registry: registry compromise, tag mutation, typosquatting
- Runtime: image tampering, unsigned images, privilege escalation
9.2 Base Image Security
Problem: Most container images inherit hundreds of CVEs from their base image. Full OS images (ubuntu, debian) include packages irrelevant to the workload, expanding the attack surface.
Solution: Distroless / Minimal Base Images
apko — Distroless Image Builder
Purpose: Build OCI container images from APK packages using declarative YAML — no Dockerfile, no RUN commands, no shell.
Key properties:
- Bit-for-bit reproducible: Run apko twice, get identical binaries
- Millisecond build times: Enables rapid rebuild during security patch cycles
- Minimal attack surface: Only application dependencies, no shell, no package manager at runtime
- Automatic SBOM: Every build produces a comprehensive SBOM
Configuration:
# apko.yaml
contents:
repositories:
- https://dl-cdn.alpinelinux.org/alpine/edge/main
packages:
- alpine-base
- python3
entrypoint:
command: /usr/bin/python3
environment:
PATH: /usr/bin
melange — APK Package Builder
Purpose: Build APK packages from source using declarative YAML pipelines. Custom applications must be packaged as APKs before inclusion in apko images.
Key properties:
- Pipeline-oriented architecture with reusable build steps
- Multi-architecture support via QEMU emulation
- Declarative YAML build definitions (package metadata, build environment, pipeline steps, tests)
- Variable substitution: `${{package.name}}`, `${{package.version}}`, `${{targets.destdir}}`
Workflow:
Source Code ──> melange (build APK) ──> apko (build OCI image) ──> cosign (sign) ──> Registry
Configuration:
# melange.yaml
package:
name: myapp
version: 1.0.0
epoch: 0
description: My application
pipeline:
- uses: go/build
with:
packages: ./cmd/myapp
output: myapp
- uses: strip
9.3 Container Signing and Verification
Sign at build time:
# Build, push, sign, attach SBOM in one pipeline
docker build -t $REGISTRY/$IMAGE:$TAG .
docker push $REGISTRY/$IMAGE:$TAG
# Sign with cosign (keyless)
cosign sign $REGISTRY/$IMAGE@$DIGEST
# Generate and attach SBOM
syft packages $REGISTRY/$IMAGE@$DIGEST -o cyclonedx-json > sbom.cdx.json
cosign attest --predicate sbom.cdx.json --type cyclonedx $REGISTRY/$IMAGE@$DIGEST
Verify at deploy time (admission controller):
# Kyverno policy example
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-image-signature
spec:
validationFailureAction: Enforce
rules:
- name: check-signature
match:
any:
- resources:
kinds:
- Pod
verifyImages:
- imageReferences:
- "registry.example.com/*"
attestors:
- entries:
- keyless:
issuer: https://token.actions.githubusercontent.com
subject: https://github.com/myorg/*
9.4 Registry Security
- Enable content trust / image signing verification
- Use digest references (`image@sha256:abc...`), not tags (tags are mutable)
- Implement vulnerability scanning at push time (Harbor, ECR, GCR all support this)
- Enable registry audit logging
- Restrict push access to CI/CD service accounts only
- Consider image promotion pipelines: dev registry -> staging registry -> prod registry
10. Dependency Analysis Platforms
10.1 deps.dev (Open Source Insights)
Purpose: Google-operated service providing dependency graph analysis, security advisory tracking, license information, and version metadata for open source packages.
Capabilities:
- Full transitive dependency graph visualization
- Security advisory aggregation across ecosystems
- License compatibility analysis
- OpenSSF Scorecard integration
- REST API for programmatic access
- Supported ecosystems: npm, PyPI, Go, Maven, Cargo, NuGet, and more
API usage:
# Query package info
curl https://api.deps.dev/v3alpha/systems/npm/packages/express
# Get dependency graph
curl https://api.deps.dev/v3alpha/systems/npm/packages/express/versions/4.18.2:dependencies
10.2 Socket.dev
Purpose: Dependency analysis focused on supply chain attack detection rather than known vulnerability matching.
Approach: Analyzes package behavior (network access, filesystem operations, shell execution, environment variable access) rather than just matching CVEs. Detects:
- Install scripts executing arbitrary code
- Obfuscated code patterns
- Typosquatting and dependency confusion indicators
- Maintainer account takeover signals
- Unexpected network or filesystem access in packages
Risk categories:
- Network access in packages that shouldn't need it
- Shell execution during install
- Environment variable harvesting
- Obfuscated or minified code in source packages
- Protestware patterns
- High-risk maintainer changes
Supported ecosystems: npm, PyPI, Go, Maven, and expanding
Integration: GitHub app for PR review, CLI tool, CI/CD integration
11. OpenSSF Scorecard
Purpose: Automated security assessment of open source projects. Evaluates 17+ security dimensions, each scored 0-10.
Security Checks
| Check | What it evaluates |
|---|---|
| Binary-Artifacts | Compiled binaries in repo |
| Branch-Protection | Code review, protection policies |
| Code-Review | Mandatory review requirements |
| CII-Best-Practices | CII/OpenSSF badge compliance |
| Contributors | Organizational diversity |
| Dangerous-Workflow | Unsafe pull_request_target patterns |
| Dependency-Update-Tool | Automated dependency updates (Dependabot, Renovate) |
| Fuzzing | OSS-Fuzz participation |
| License | License file presence |
| Maintained | Activity in last 90 days |
| Pinned-Dependencies | SHA-pinned dependencies in CI |
| Packaging | Published through standard build pipelines |
| SAST | Static analysis integration |
| Security-Policy | SECURITY.md / vulnerability disclosure process |
| Signed-Releases | Cryptographically signed releases |
| Token-Permissions | Minimal GitHub workflow permissions |
| Vulnerabilities | Known unfixed vulnerabilities |
Usage
# Score a repository
scorecard --repo=github.com/org/repo
# Use with GitHub Action
- uses: ossf/scorecard-action@v2
with:
results_file: results.sarif
publish_results: true # enables badge
# Query via API
curl https://api.scorecard.dev/projects/github.com/org/repo
BigQuery dataset: Weekly scans of 1M+ critical projects available at openssf:scorecardcron.scorecard-v2 (CDLA Permissive 2.0).
12. Reproducible Builds
Definition: A build is reproducible if, given the same source code, build environment, and build instructions, any party can produce bit-for-bit identical output.
Why it matters:
- Detects build-time injection (xz-utils backdoor would have been caught)
- Enables independent verification — anyone can rebuild and compare
- Provides evidence that the distributed binary matches the source
- Required for full SLSA L3+ trust model
Challenges:
- Timestamps embedded in artifacts (fix: `SOURCE_DATE_EPOCH`)
- Non-deterministic file ordering (fix: sort before archive)
- Build path leaking into binaries (fix: build path remapping)
- Randomized memory layout affecting output
- Locale and timezone differences
Implementation approaches:
- Go: Reproducible by default (mostly) — `CGO_ENABLED=0 go build -trimpath -ldflags='-s -w'`
- Rust: `CARGO_INCREMENTAL=0 cargo build --release` + strip
- apko: Bit-for-bit reproducible by design
- Docker: Use BuildKit with `--build-arg SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)` (experimental)
- Bazel: Designed for hermetic, reproducible builds
- Nix: Reproducibility is a core design principle
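The timestamp and file-ordering fixes above can be shown end-to-end with GNU tar: two archiving runs, seconds apart, produce byte-identical output once mtimes are clamped to `SOURCE_DATE_EPOCH` and member order is sorted (file names and the epoch value are illustrative):

```shell
#!/bin/sh
# Deterministic archiving: fixed mtimes + sorted order + fixed ownership
# yields identical archives regardless of when the build runs (GNU tar).
mkdir -p src
printf 'hello\n' > src/a.txt
printf 'world\n' > src/b.txt
export SOURCE_DATE_EPOCH=1700000000
mktarball() {
  tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
      --owner=0 --group=0 --numeric-owner -cf "$1" src
}
mktarball one.tar
sleep 1
mktarball two.tar
cmp -s one.tar two.tar && echo "reproducible"
```

Drop any one of those flags and the archives diverge, which is exactly the class of nondeterminism that makes naive builds unverifiable.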
Verification:
# Rebuild and compare hashes
sha256sum artifact-from-maintainer artifact-from-rebuilder
# If hashes match, the build is verified
Projects tracking reproducibility: reproducible-builds.org, Debian reproducible builds project, F-Droid (Android)
13. Binary Authorization
Definition: A deployment-time control that ensures only trusted, verified software artifacts can be deployed to production.
Architecture:
Build ──> Sign Attestation ──> Push to Registry ──> Deploy Request
|
[Admission Controller]
|
Verify attestation against policy
|
Allow/Deny deployment
Implementations:
Google Binary Authorization (GKE):
- Integrates with Cloud Build for automatic attestation
- Policy-based admission control at the cluster level
- Supports multiple attestors (e.g., build, vulnerability scan, QA approval)
- Break-glass mechanism for emergency deployments with audit trail
Kyverno / OPA Gatekeeper (Kubernetes-native):
- Verify cosign signatures and attestations at admission time
- Policy-as-code for image verification rules
- Enforce image sources (only from approved registries)
- Require SBOM and vulnerability scan attestations
Sigstore Policy Controller:
```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed
spec:
  images:
  - glob: "registry.example.com/**"
  authorities:
  - keyless:
      identities:
      - issuer: https://token.actions.githubusercontent.com
        subject: https://github.com/myorg/myrepo/.github/workflows/release.yml@refs/heads/main
```
In-Toto for End-to-End Verification:
in-toto provides a framework for verifying the entire supply chain, not just the final artifact:
- Layout: Project owner defines the expected supply chain steps, authorized functionaries, and artifact rules
- Link metadata: Each step produces signed evidence (`.link` files) recording commands executed and files consumed/produced
- Verification: `in-toto-verify` validates that all steps were performed by authorized personnel, in the correct order, with compliant artifact flow
```shell
# Record signed link metadata for each supply chain step
in-toto-run --step-name build --products myapp -- make build
in-toto-run --step-name test --materials myapp -- make test
in-toto-run --step-name package --materials myapp --products myapp.tar.gz -- tar czf myapp.tar.gz myapp

# Verify supply chain integrity against the signed layout
in-toto-verify --layout layout.json --layout-keys owner-key.pub
```
Artifact rules language: CREATE, DELETE, MODIFY, ALLOW, DISALLOW, REQUIRE, MATCH — enables chaining artifact flow between steps with wildcard support.
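As a sketch, the `package` step from the commands above might carry artifact rules like these inside the layout (an illustrative fragment, not a complete signed layout):

```json
{
  "name": "package",
  "expected_materials": [
    ["MATCH", "myapp", "WITH", "PRODUCTS", "FROM", "build"],
    ["DISALLOW", "*"]
  ],
  "expected_products": [
    ["CREATE", "myapp.tar.gz"],
    ["ALLOW", "*"]
  ]
}
```

The MATCH rule chains this step's inputs to the build step's outputs, so a binary swapped in between steps fails verification.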
14. Ecosystem-Specific Risks
14.1 npm / JavaScript
Attack surface:
- `preinstall`/`postinstall` scripts execute arbitrary code during `npm install`
- `npm publish` does not verify identity beyond the npm account (no 2FA enforcement until 2022 for top packages)
- Package name squatting is relatively easy
- `node_modules` depth creates massive transitive dependency trees (average: 80+ transitive deps)
Mitigations:
- `npm install --ignore-scripts` + explicit script allowlisting
- `npm audit` + `npm audit signatures` (verifies registry signatures)
- Use `package-lock.json` and `npm ci` (clean install from lockfile)
- Scope packages under an org namespace (`@myorg/package`)
- Socket.dev integration for behavioral analysis
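Lockfile pinning works because every entry in `package-lock.json` carries an `integrity` value in Subresource Integrity (SRI) format, which `npm ci` re-checks against the downloaded tarball. A sketch of how that value is derived, assuming `openssl` is available (`pkg.tgz` is a stand-in file, not a real package):

```shell
# npm integrity values are SRI strings: "sha512-" + base64(sha512(tarball)).
printf 'stand-in tarball bytes' > pkg.tgz
integrity="sha512-$(openssl dgst -sha512 -binary pkg.tgz | openssl base64 -A)"
echo "$integrity"
```

If a registry (or a proxy in between) serves different bytes for the same version, the recomputed value no longer matches the lockfile and `npm ci` fails closed.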
14.2 PyPI / Python
Attack surface:
- `setup.py` executes arbitrary code during install
- No namespace scoping — flat namespace enables typosquatting and dependency confusion
- PyPI Trusted Publishers (OIDC-based) only recently adopted
- The wheel format does not execute code at install time, but sdists (source distributions) do
Mitigations:
- Prefer wheels over sdists: `pip install --only-binary :all:`
- Use `pip install --require-hashes` with pinned dependencies
- Use `uv` or `pip-tools` for deterministic lock files
- Configure `index-url` explicitly (prevent dependency confusion)
- PyPI Trusted Publishers for package maintainers
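`--require-hashes` needs a hash for every pinned requirement; `pip-compile --generate-hashes` automates this, but the format is simple enough to produce by hand. A sketch using a stand-in file (the package and wheel name are hypothetical):

```shell
# One hash-pinned requirements line; pip refuses anything whose sha256 differs.
printf 'stand-in wheel bytes' > mypkg-1.0-py3-none-any.whl
hash=$(sha256sum mypkg-1.0-py3-none-any.whl | cut -d' ' -f1)
echo "mypkg==1.0 --hash=sha256:$hash" >> requirements.txt
cat requirements.txt
```

`pip install --require-hashes -r requirements.txt` then fails closed if the index serves different bytes.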
14.3 Maven / Java
Attack surface:
- Transitive dependency resolution can pull unexpected versions
- Maven Central does not enforce 2FA or signing key verification (PGP signatures optional for consumers)
- Dependency confusion via internal repository misconfiguration
- Build plugins execute arbitrary code
Mitigations:
- Use `<dependencyManagement>` to pin transitive dependency versions
- Verify PGP signatures: `mvn org.simplify4u:pgpverify-maven-plugin:check`
- Configure repository mirrors to prevent resolution from unintended sources
- Use `maven-enforcer-plugin` to enforce dependency convergence
- Bill of Materials (BOM) POMs for coordinated version management
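As a sketch, pinning one commonly targeted transitive dependency via `<dependencyManagement>` (coordinates and version are illustrative):

```xml
<!-- Applies to transitive resolution too: any dependency path that pulls
     jackson-databind gets this version, closing off surprise version drift. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.17.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```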
14.4 Go
Attack surface:
- Module proxy (proxy.golang.org) caches modules immutably — good for availability, but a compromised version is permanently cached
- Vanity import paths can redirect to malicious repositories
- `replace` directives in `go.mod` can point to arbitrary sources
Mitigations:
- `go.sum` provides cryptographic verification of module content
- Checksum database (sum.golang.org) ensures global consistency
- `GONOPROXY`/`GONOSUMDB`/`GOPRIVATE` for private modules
- `govulncheck` uses call graph analysis to identify reachable vulnerabilities
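A sketch of the environment configuration for private modules (domains and org are hypothetical); `GOPRIVATE` is shorthand that implies both `GONOPROXY` and `GONOSUMDB` for matching paths:

```shell
# Keep private modules away from the public proxy and checksum database.
export GOPRIVATE='*.corp.example.com,github.com/myorg/*'
# Refuse builds that would silently edit go.mod/go.sum:
export GOFLAGS='-mod=readonly'
```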
15. Implementation Playbook
Phase 1: Foundation (Weeks 1-2)
- Inventory dependencies: Generate SBOMs for all projects
  `syft packages . -o cyclonedx-json > sbom.cdx.json`
- Scan for vulnerabilities: Run Grype or OSV-Scanner across all repositories
  `grype sbom:sbom.cdx.json --fail-on critical`
- Scan for secrets: Run TruffleHog against all repositories
  `trufflehog git file://. --only-verified --fail`
- Assess project health: Run OpenSSF Scorecard
  `scorecard --repo=github.com/myorg/myrepo`
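The steps above can be stitched into one per-repository script. A sketch assuming `syft`, `grype`, and `trufflehog` are on PATH and that local clones live under `repos/` (both assumptions; Scorecard is omitted because it targets the remote repository rather than a local path):

```shell
# Phase-1 audit: SBOM, vulnerability scan, and secret scan per local clone.
audit_repo() {
  repo="$1"
  syft packages "$repo" -o cyclonedx-json > "$repo/sbom.cdx.json"
  grype "sbom:$repo/sbom.cdx.json" --fail-on critical
  trufflehog git "file://$repo" --only-verified --fail
}

for repo in repos/*/; do
  [ -d "$repo" ] || continue          # skip the unexpanded glob if repos/ is empty
  audit_repo "$repo" || { echo "FAIL: $repo"; exit 1; }
done
```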
Phase 2: CI/CD Hardening (Weeks 3-4)
- Pin all GitHub Actions by SHA
- Restrict GITHUB_TOKEN permissions to minimum required
- Deploy Harden-Runner in audit mode on all workflows
- Enable Dependabot/Renovate for automated dependency updates
- Add secret scanning to PR pipelines
- Configure dependency lock files with integrity hashes in all projects
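Pinning by SHA is only useful if it is enforced. A sketch of a CI guard over the default workflow directory that flags any `uses:` reference not pinned to a full 40-character commit SHA:

```shell
# Emit every action reference whose ref is a tag/branch instead of a commit SHA.
check_action_pins() {
  dir="${1:-.github/workflows}"
  grep -rhoE 'uses:[[:space:]]*[^@[:space:]]+@[^[:space:]]+' "$dir" 2>/dev/null |
    grep -vE '@[0-9a-f]{40}$' || true
}

violations=$(check_action_pins)
if [ -n "$violations" ]; then
  printf 'unpinned actions:\n%s\n' "$violations"
else
  echo "all actions pinned"
fi
```

Run it as an early CI step and exit non-zero when violations are found; local actions referenced by path (no `@`) are out of scope for this sketch.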
Phase 3: Signing and Verification (Weeks 5-8)
- Implement container image signing with cosign (keyless via GitHub Actions OIDC)
- Attach SBOM attestations to container images
- Deploy admission controller (Kyverno/Sigstore Policy Controller) to enforce signatures
- Achieve SLSA L2: Use `slsa-github-generator` for provenance generation
- Publish provenance alongside release artifacts
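A sketch of the signing job wiring in GitHub Actions (image name, registry, and steps are illustrative); `id-token: write` is what lets cosign mint a keyless certificate from the workflow's OIDC identity:

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write          # required for keyless (Fulcio) certificate issuance
    steps:
      - uses: actions/checkout@v4          # pin by SHA in a real workflow
      - uses: sigstore/cosign-installer@v3 # pin by SHA in a real workflow
      - name: Build and push
        run: |
          docker build -t ghcr.io/myorg/app:${{ github.sha }} .
          docker push ghcr.io/myorg/app:${{ github.sha }}
      - name: Sign keylessly
        run: cosign sign --yes ghcr.io/myorg/app:${{ github.sha }}
```

In practice, capture the pushed digest and sign `ghcr.io/myorg/app@sha256:…` rather than the tag, so the signature binds to immutable content.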
Phase 4: Advanced Hardening (Ongoing)
- Move to distroless base images (apko + melange or Chainguard Images)
- Achieve SLSA L3: Isolated builds, unforgeable provenance
- Implement reproducible builds where feasible
- Deploy Socket.dev for behavioral dependency analysis
- Establish dependency review policy: All new dependencies require security review
- Automate SBOM delivery: Publish SBOMs to customers/compliance systems
- Monitor deps.dev / OpenSSF Scorecard for dependency health degradation
Maturity Model
| Maturity | Capabilities | SLSA Level |
|---|---|---|
| Basic | Lock files, vulnerability scanning, secret scanning | L0 |
| Managed | SBOM generation, CI/CD hardening, action pinning, Harden-Runner | L1 |
| Defined | Artifact signing, signed provenance, admission control | L2 |
| Advanced | Isolated builds, reproducible builds, behavioral analysis, distroless images | L3 |
| Optimizing | Full in-toto verification, automated policy enforcement, continuous supply chain monitoring | L3+ |
Quick Reference: Tool Matrix
| Tool | Category | Key Capability |
|---|---|---|
| Syft | SBOM | Generate SBOMs (SPDX, CycloneDX) from images/filesystems |
| Grype | Vulnerability | Scan images/SBOMs against NVD + OS + language advisories |
| OSV-Scanner | Vulnerability | Scan with call analysis, guided remediation, OSV database |
| TruffleHog | Secrets | Discover, classify, and verify 800+ secret types |
| Cosign | Signing | Sign/verify container images and artifacts (keyless via Sigstore) |
| Fulcio | PKI | Short-lived certificates from OIDC identity (no key management) |
| Rekor | Transparency | Immutable log of signing events for non-repudiation |
| in-toto | Integrity | End-to-end supply chain verification with layout enforcement |
| Harden-Runner | CI/CD | EDR for GitHub Actions — network/file/process monitoring |
| Scorecard | Assessment | Automated security posture scoring for open source projects |
| apko | Container | Build distroless OCI images from APK packages (reproducible) |
| melange | Container | Build APK packages from source via declarative YAML pipelines |
| Socket.dev | Analysis | Behavioral dependency analysis — detects supply chain attacks |
| deps.dev | Analysis | Dependency graphs, advisories, license info, Scorecard data |
References
- SLSA Specification: https://slsa.dev/spec/v1.0/
- Sigstore: https://sigstore.dev/
- OpenSSF Scorecard: https://scorecard.dev/
- in-toto: https://in-toto.io/
- Reproducible Builds: https://reproducible-builds.org/
- NIST SP 800-218 (SSDF): https://csrc.nist.gov/publications/detail/sp/800-218/final
- EO 14028 (Improving the Nation's Cybersecurity): https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/
- CISA SBOM Resources: https://www.cisa.gov/sbom
- MITRE ATT&CK Supply Chain Compromise: T1195