Network Forensics, Traffic Analysis & C2 Detection

CIPHER Training Module — Deep Dive
Generated: 2026-03-14
Domain: [MODE: BLUE] / [MODE: PURPLE]


Table of Contents

  1. Beacon Detection Methodology
  2. TLS Fingerprinting: JA3 / JA3S / JARM
  3. SSH Fingerprinting: HASSH
  4. DNS Analytics
  5. HTTP Analytics
  6. TLS Certificate Analysis
  7. Zeek Script Writing
  8. Suricata Rule Syntax
  9. Full Packet Capture Architecture
  10. Network IOC Extraction
  11. Tool Reference Matrix
  12. Detection Engineering Playbooks

1. Beacon Detection Methodology

Source: RITA — Real Intelligence Threat Analytics

What Is Beaconing

Beaconing is the periodic callback pattern where a compromised host communicates with C2 infrastructure at regular intervals. It is the heartbeat of nearly every persistent implant — Cobalt Strike, Metasploit Meterpreter, Sliver, Mythic, custom RATs.

Detection Dimensions

Beacon detection operates across three primary axes:

1.1 Frequency Analysis (Interval Consistency)

The core signal. Malware callbacks tend toward fixed intervals (e.g., every 60s, every 300s). Human browsing is stochastic.

Metric: Compute the time deltas between consecutive connections from source to destination. Calculate:

  • Mean interval — average time between callbacks
  • Standard deviation — low StdDev = high regularity = beacon signal
  • Median Absolute Deviation (MAD) — robust against outliers
  • Bowley skewness — measures asymmetry in interval distribution; near-zero skew indicates mechanical regularity
Score_frequency = f(StdDev, MAD, Skewness)
Lower variance + symmetric distribution = higher beacon score
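A minimal Python sketch of these dispersion measures (RITA's actual weighting function is not reproduced here; this only computes the raw statistics):

```python
import statistics

def interval_stats(timestamps):
    """Compute dispersion measures over inter-connection intervals."""
    ts = sorted(timestamps)
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(deltas)
    stdev = statistics.pstdev(deltas)
    med = statistics.median(deltas)
    mad = statistics.median([abs(d - med) for d in deltas])
    # Bowley (quartile) skewness: (Q3 + Q1 - 2*Q2) / (Q3 - Q1)
    q1, q2, q3 = statistics.quantiles(deltas, n=4)
    skew = (q3 + q1 - 2 * q2) / (q3 - q1) if q3 != q1 else 0.0
    return mean, stdev, mad, skew

# A mechanical 60s beacon: zero deviation, zero MAD, zero skew
mean, stdev, mad, skew = interval_stats([0, 60, 120, 180, 240, 300])
```

A perfectly periodic callback drives all three dispersion measures to zero, which is exactly the "high beacon score" condition described above.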

1.2 Jitter Analysis

Sophisticated C2 frameworks add randomized jitter (e.g., Cobalt Strike's sleep 60 20 = 60s base with 20% jitter). Because Cobalt Strike's jitter subtracts up to the jitter percentage from the base sleep, intervals range from 48-60s.

Detection approach: Even with jitter, the distribution remains tightly clustered compared to human traffic. Analyze:

  • Coefficient of variation (StdDev / Mean) — beacons with 10-30% jitter still show CV < 0.3
  • Interquartile range — narrow IQR relative to the median indicates jitter around a fixed base
  • Distribution shape — jittered beacons produce uniform distributions within a bounded range; human traffic produces exponential/power-law distributions
Jitter-aware scoring:
  If interval_CV < 0.3 AND connection_count > threshold:
    high beacon probability even with jitter
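The claim that jitter barely moves the coefficient of variation can be checked with a quick simulation (the distributions and seed are illustrative):

```python
import random
import statistics

def coefficient_of_variation(intervals):
    """CV = population StdDev / mean of the interval samples."""
    return statistics.pstdev(intervals) / statistics.mean(intervals)

random.seed(7)
# Cobalt Strike-style jitter: base sleep minus up to 20% -> uniform 48-60s
beacon = [60 - random.uniform(0, 12) for _ in range(200)]
# Human browsing modeled (roughly) as exponential inter-arrival times
human = [random.expovariate(1 / 60) for _ in range(200)]

assert coefficient_of_variation(beacon) < 0.3   # still beacon-like
assert coefficient_of_variation(human) > 0.3    # stochastic traffic
```

Uniform jitter over 48-60s gives a CV near 0.06, well under the 0.3 threshold, while exponential inter-arrival times sit near CV = 1.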

1.3 Data Volume Consistency

C2 check-in packets tend to have consistent sizes (heartbeat payloads are templated). Analyze:

  • Bytes sent per connection — low variance in request/response sizes
  • Upload/download ratio consistency — check-ins show stable ratios
  • Small payload regularity — many beacons send < 500 bytes per callback

Metric: Apply the same statistical tests (StdDev, MAD, skewness) to byte counts per session.

1.4 Connection Count Thresholds

Filter noise by requiring minimum connection counts. A source-destination pair with 3 connections is not a beacon. RITA uses configurable thresholds — typical minimum: 20+ connections over the analysis window.

RITA Scoring Model

RITA processes Zeek connection logs (conn.log) and computes composite beacon scores:

Beacon Score = weighted_combination(
    frequency_score,      # interval regularity
    data_size_score,      # payload consistency
    connection_count,     # volume threshold
    jitter_coefficient    # variance normalization
)

Scores range 0-100. Thresholds:

  • >=90: Almost certainly a beacon — investigate immediately
  • 70-89: High probability — warrants review
  • 50-69: Moderate — correlate with other indicators
  • <50: Low probability — likely benign periodic traffic (NTP, health checks, updates)

RITA Query Syntax

# Search for high-confidence beacons from a specific source
rita search 'src:192.168.88.2 beacon:>=90'

# Filter by severity
rita search 'severity:critical'

# Combined source-destination beacon search
rita search 'src:192.168.88.2 dst:165.227.88.15 beacon:>=90'

# Sort by beacon score descending
rita search --sort beacon:desc

# Export results for external analysis
rita search 'beacon:>=70' --format csv > beacons.csv

Additional RITA Detection Modules

Module            Purpose
Long Connections  Flags persistent connections (C2 keep-alive, reverse shells)
DNS Tunneling     Detects covert channels via DNS query/response abuse
Strobes           Identifies port scanning / rapid connection bursts
Threat Intel      Cross-references IPs/domains against known-bad feeds

False Positive Management

Common beacon false positives and discrimination:

  • Software update checks — known update servers, filter by destination
  • NTP synchronization — port 123/udp, predictable intervals
  • Health monitoring — Nagios/Zabbix probes to known infrastructure
  • Cloud service heartbeats — Azure/AWS agent callbacks

Mitigation: Maintain a whitelist of known periodic services. Score beacons against threat intel. Investigate unknowns by examining packet payloads and TLS metadata.


2. TLS Fingerprinting: JA3 / JA3S / JARM

2.1 JA3 — Client TLS Fingerprinting

Source: Salesforce JA3

How It Works

JA3 fingerprints a TLS client by hashing five fields from the ClientHello message:

JA3_raw = SSLVersion,CipherSuites,Extensions,EllipticCurves,ECPointFormats

Each field contains decimal values separated by hyphens. Fields are separated by commas.

Example raw string:

769,47-53-5-10-49161-49162-49171-49172-50-56-19-4,0-10-11,23-24-25,0

Hash computation:

JA3 = MD5(JA3_raw)
     = ada70206e40642a3e4461f35503241d5
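The hashing step itself is trivial to reproduce; a sketch using the raw string above:

```python
import hashlib

# Example ClientHello fields from above, in JA3 order:
# version,ciphers,extensions,curves,point_formats
ja3_raw = "769,47-53-5-10-49161-49162-49171-49172-50-56-19-4,0-10-11,23-24-25,0"

# The JA3 fingerprint is simply the MD5 hex digest of that string
ja3 = hashlib.md5(ja3_raw.encode("ascii")).hexdigest()
```

The result is always a 32-character hex digest, which is what appears in the malware hash tables below.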

Field Breakdown

#  Field             TLS Packet Location         Example
1  TLS Version       ClientHello version field   769 (TLS 1.0), 771 (TLS 1.2)
2  Cipher Suites     Offered cipher suites list  47-53-5-10-49161
3  Extensions        Extension type values       0-10-11-13-35
4  Elliptic Curves   Supported groups            23-24-25 (secp256r1, secp384r1, secp521r1)
5  EC Point Formats  Point format types          0 (uncompressed)

GREASE handling: Google's GREASE (Generate Random Extensions And Sustain Extensibility) values are stripped before hashing to ensure consistency.

Known Malware JA3 Hashes

Malware JA3 Hash
Tor Client e7d705a3286e19ea42f587b344ee6865
Trickbot 6734f37431670b3ab4292b8f60f29984
Emotet 4d7a28d6f2263ed61de88ca66eb011e3
AsyncRAT 72a589da586844d7f0818ce684948eea
Metasploit 05af1f5ca1b87cc9cc9b25185115607d

2.2 JA3S — Server TLS Fingerprinting

JA3S fingerprints the server's ServerHello response using three fields:

JA3S_raw = SSLVersion,Cipher,Extensions
JA3S     = MD5(JA3S_raw)

Key insight: A TLS server responds differently to different clients. The combination of JA3 + JA3S uniquely identifies a client-server cryptographic negotiation. Even when C2 servers rotate IPs or domains, the JA3S fingerprint remains stable because the server software configuration doesn't change.

Example Malware JA3S Hashes:

Malware JA3S Hash
Trickbot C2 623de93db17d313345d7ea481e7443cf
Emotet C2 80b3a14bccc8598a1f3bbe83e71f735f
Cobalt Strike default ae4edc6faf64d08308082ad26be60767

Detection Strategy

IF ja3 IN known_malware_ja3_list:
    ALERT "Known malicious TLS client fingerprint"

IF (ja3, ja3s) pair IN known_c2_negotiation_list:
    ALERT "Known C2 TLS negotiation pattern"

IF ja3 NOT IN baseline_ja3_set AND connection_to_rare_destination:
    ALERT "Unusual TLS client on network"
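The three checks can be sketched as an ordered classifier; the fingerprint sets are placeholders to be populated from threat intel and baselining:

```python
def classify_tls_client(ja3, ja3s, dst_is_rare,
                        malware_ja3, c2_pairs, baseline_ja3):
    """Apply the three checks above in priority order. The sets are
    placeholders; real contents come from intel feeds and baselines."""
    if ja3 in malware_ja3:
        return "Known malicious TLS client fingerprint"
    if (ja3, ja3s) in c2_pairs:
        return "Known C2 TLS negotiation pattern"
    if ja3 not in baseline_ja3 and dst_is_rare:
        return "Unusual TLS client on network"
    return None

# Trickbot JA3 from the table above hits the first check
verdict = classify_tls_client(
    "6734f37431670b3ab4292b8f60f29984", "any-ja3s", False,
    malware_ja3={"6734f37431670b3ab4292b8f60f29984"},
    c2_pairs=set(), baseline_ja3=set(),
)
```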

2.3 JARM — Active Server Fingerprinting

Source: Salesforce JARM

Methodology

Unlike passive JA3/JA3S, JARM is an active scanner that fingerprints TLS servers by sending 10 specially crafted ClientHello probes and analyzing the ServerHello responses.

The 10 Probes

Each probe varies:

  • TLS version offered (1.0, 1.1, 1.2, 1.3)
  • Cipher suite ordering (strongest-first vs. weakest-first)
  • Extension combinations (with/without certain extensions)
  • Version negotiation tests (e.g., TLS 1.3 ClientHello with TLS 1.2 cipher suites)

These probes force the server to reveal its implementation-specific behavior — which cipher it selects, which version it negotiates down to, and which extensions it echoes.

Fingerprint Structure (62 characters)

JARM = [30-char cipher/version responses] + [32-char truncated SHA256 of extensions]
  • First 30 characters: 10 x 3 characters — each group represents the cipher and TLS version selected by the server for each probe. 000 = connection refused or no TLS.
  • Last 32 characters: Truncated SHA256 hash of the cumulative server extensions across all 10 responses (excluding certificates).
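A small helper illustrating that structure by splitting a 62-character JARM back into its ten probe groups and its extensions hash (the example value is the scan output quoted in this module):

```python
def split_jarm(jarm: str):
    """Split a 62-char JARM into ten 3-char cipher/version groups
    and the 32-char truncated SHA256 of the server extensions."""
    assert len(jarm) == 62
    probes = [jarm[i:i + 3] for i in range(0, 30, 3)]  # one group per probe
    ext_hash = jarm[30:]
    return probes, ext_hash

probes, ext = split_jarm(
    "2ad2ad0002ad2ad00042d42d000000ad9bf51cc3f5a1e29eecb81d0c7b06eb")
```

Groups of "000" mark probes the server refused or failed to negotiate, which is itself part of the fingerprint.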

C2 Detection with JARM

JARM is highly effective for identifying C2 infrastructure:

C2 Framework JARM Characteristic
Cobalt Strike (default) Unique JARM with zero overlap against Alexa Top 1M
Trickbot Distinct JARM fingerprint, no legitimate overlap
Metasploit Identifiable server-side JARM
Mythic Framework-specific JARM signature

Scanning example:

python3 jarm.py suspicious-domain.com
# Output: 2ad2ad0002ad2ad00042d42d000000ad9bf51cc3f5a1e29eecb81d0c7b06eb

Operational use: Scan newly observed destinations from beacon candidates. If the JARM matches known C2 frameworks, escalate immediately.

JARM Limitations

  • Active scanning — generates traffic to the target
  • Servers behind CDNs/load balancers may share the CDN's JARM
  • Server configuration changes alter the JARM
  • Some C2 operators deliberately modify TLS configurations to evade JARM

3. SSH Fingerprinting: HASSH

Source: Salesforce HASSH

How It Works

HASSH fingerprints SSH clients and servers by hashing algorithm lists from the SSH_MSG_KEXINIT packet exchanged during the SSH handshake.

Client Fingerprint (hassh)

Four algorithm categories concatenated with semicolons:

hasshAlgorithms = KEX_algorithms;Encryption_algorithms;MAC_algorithms;Compression_algorithms
hassh           = MD5(hasshAlgorithms)

Server Fingerprint (hasshServer)

Same four categories from the server's KEXINIT:

hasshServerAlgorithms = KEX_algorithms;Encryption_algorithms;MAC_algorithms;Compression_algorithms
hasshServer           = MD5(hasshServerAlgorithms)

Algorithm Categories

Category Examples
Key Exchange (KEX) curve25519-sha256, diffie-hellman-group14-sha256, ecdh-sha2-nistp256
Encryption aes256-gcm, chacha20-poly1305, aes128-ctr
MAC hmac-sha2-256, hmac-sha2-512, umac-128-etm
Compression none, zlib, zlib@openssh.com
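A sketch of the client-side computation, assuming the algorithm lists have been pulled from a captured KEXINIT (the example offer is hypothetical, built from entries in the table above):

```python
import hashlib

def hassh(kex, enc, mac, cmp):
    """MD5 over the four KEXINIT algorithm lists: algorithms within a
    category comma-joined in offered order, categories semicolon-joined."""
    raw = ";".join(",".join(algs) for algs in (kex, enc, mac, cmp))
    return hashlib.md5(raw.encode("ascii")).hexdigest()

# Hypothetical client offer
fp = hassh(
    kex=["curve25519-sha256", "diffie-hellman-group14-sha256"],
    enc=["chacha20-poly1305", "aes256-gcm"],
    mac=["hmac-sha2-256", "hmac-sha2-512"],
    cmp=["none", "zlib@openssh.com"],
)
```

Because the offered order is hashed, two clients supporting the same algorithms in a different order produce different hassh values, which is what makes implementation fingerprinting possible.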

Detection Applications

Scenario                    Indicator
Metasploit SSH module       Distinct hassh from Paramiko library
Empire/PowerShell SSH       Unique algorithm ordering
Anomalous lateral movement  hassh not matching standard OpenSSH for the OS
Honeypot detection          hasshServer reveals non-standard implementations
IoT device identification   Embedded SSH implementations have fixed hassh values

Baseline Approach

1. Catalog hassh values for all legitimate SSH clients in environment
2. Alert on any hassh NOT in the known-good baseline
3. Cross-reference anomalous hassh with threat intel databases
4. Monitor for hassh changes on persistent sessions (client software swap)

Zeek Integration

HASSH ships with Zeek scripts that automatically extract and log hassh/hasshServer values for every SSH connection, appending to ssh.log.


4. DNS Analytics

DNS is the most abused protocol for C2 communication because it is rarely blocked and often minimally monitored.

4.1 DGA Detection (Domain Generation Algorithms)

DGA malware generates pseudo-random domain names to locate C2 servers, making blocklist-based detection ineffective.

Linguistic Analysis

DGA domains exhibit statistical properties distinct from human-registered domains:

Feature                Legitimate Domain       DGA Domain
Character entropy      Low (English words)     High (random chars)
Consonant-vowel ratio  Balanced                Skewed
N-gram frequency       Matches language model  Deviates from language
Length                 Typically 5-15 chars    Often 15-30+ chars
Pronounceability       Usually pronounceable   Usually not

Example DGA output: qxkjv7nmcfhd2p.net, a8f3kd9x2m.com

Detection Methods

Method 1: Shannon Entropy
  H = -SUM(p(x) * log2(p(x))) for each character
  Threshold: H > 3.5 on second-level domain = suspicious

Method 2: N-gram Analysis
  Compare bigram/trigram frequency against English language corpus
  Deviation beyond 2 StdDev = anomalous

Method 3: NXDomain Volume
  High rate of NXDOMAIN responses from single host = DGA probing
  Threshold: >50 NXDomains/hour from one source

Method 4: Machine Learning
  Random Forest / LSTM on domain character features
  Features: length, entropy, vowel ratio, digit ratio, n-gram scores
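Method 1 is simple enough to show end to end; the 3.5 threshold and the example labels come from this section:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """H = -sum(p(x) * log2(p(x))) over the characters of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# DGA-looking second-level domain vs. a dictionary word
assert shannon_entropy("qxkjv7nmcfhd2p") > 3.5   # 14 distinct chars
assert shannon_entropy("google") < 3.5           # repeated letters, low entropy
```

Note the entropy is computed on the second-level domain only; including the TLD dilutes the signal.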

Zeek Detection Script

module DGADetection;

export {
    redef enum Notice::Type += { DGADetection::HIGH_NXDOMAIN_RATE };
}

global nxdomain_counter: table[addr] of count &default=0 &create_expire=1hr;

event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count)
{
    if ( ! is_orig && msg$rcode == 3 )  # NXDOMAIN
    {
        nxdomain_counter[c$id$orig_h] += 1;

        if ( nxdomain_counter[c$id$orig_h] > 50 )
        {
            NOTICE([$note=DGADetection::HIGH_NXDOMAIN_RATE,
                    $conn=c,
                    $msg=fmt("Host %s has %d NXDOMAINs in the last hour",
                             c$id$orig_h, nxdomain_counter[c$id$orig_h]),
                    $suppress_for=30min]);
        }
    }
}

4.2 DNS Tunneling Detection

DNS tunneling encodes data in DNS queries (subdomains) and responses (TXT/NULL/CNAME records) to create a covert channel.

Indicators

Indicator                Normal DNS             DNS Tunnel
Query length             < 30 chars             > 50 chars, often > 100
Subdomain depth          1-3 levels             4+ levels
Record types             A, AAAA, CNAME         TXT, NULL, CNAME (large records)
Query volume per domain  Low-moderate           Very high to single domain
Response size            < 512 bytes            Often max size
Entropy of subdomain     Low (words)            High (encoded data)
Unique subdomains        Few per parent domain  Hundreds/thousands per parent

Tools: iodine, dnscat2, Cobalt Strike DNS

All encode payloads differently:

  • iodine: Base32/Base64/Base128 encoded subdomains, NULL record responses
  • dnscat2: Encrypted, encoded in TXT records
  • Cobalt Strike: Configurable, often uses A/AAAA records with encoded subdomains

Detection Query (RITA)

rita search 'dns_tunnel:>=80'

Suricata Rule for Long DNS Queries

alert dns $HOME_NET any -> any 53 (msg:"CIPHER - Possible DNS Tunnel - Long Query";
    dns.query; content:"."; offset:50;
    threshold:type threshold, track by_src, count 10, seconds 60;
    classtype:bad-unknown; sid:1000001; rev:1;)

4.3 Fast Flux Detection

Fast flux domains rapidly rotate IP addresses (A records) to distribute C2 infrastructure and evade takedowns.

Indicators

  • High TTL churn: Same domain resolves to different IPs within minutes
  • Large answer set: Single query returns 5+ A records
  • Geographically diverse IPs: Resolved IPs span multiple ASNs/countries
  • Low TTL values: TTL < 300 seconds to force frequent re-resolution
  • Double flux: Both A records AND NS records rotate (bulletproof hosting)

Detection Logic

For each domain queried in time window:
  unique_ips = count(distinct A record answers)
  unique_asns = count(distinct ASNs of answers)
  avg_ttl = mean(TTL values)

  IF unique_ips > 5 AND unique_asns > 2 AND avg_ttl < 300:
    FLAG as potential fast flux
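The same logic as a function (ASN attribution is assumed to come from an external lookup; the example answer sets are fabricated):

```python
def is_fast_flux_candidate(answers):
    """answers: list of (ip, asn, ttl) tuples observed for one domain
    in the analysis window. Thresholds follow the logic above."""
    unique_ips = len({ip for ip, _, _ in answers})
    unique_asns = len({asn for _, asn, _ in answers})
    avg_ttl = sum(ttl for _, _, ttl in answers) / len(answers)
    return unique_ips > 5 and unique_asns > 2 and avg_ttl < 300

# Fabricated examples: 8 IPs across 8 ASNs with low TTLs vs. a CDN-like
# domain resolving within a single ASN
flux = [("1.2.3." + str(i), 1000 + i, 120) for i in range(8)]
cdn = [("5.6.7.8", 13335, 60), ("5.6.7.9", 13335, 60)]
```

The ASN check is what separates fast flux from legitimate round-robin DNS: CDNs rotate IPs too, but usually within one or two ASNs.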

5. HTTP Analytics

5.1 User-Agent Profiling

User-Agent strings reveal the software making HTTP requests. C2 frameworks often use default or anomalous User-Agents.

Suspicious Patterns

Pattern                              Indicator
Empty User-Agent                     Many simple RATs omit it
Python-urllib, Go-http-client, Java  Scripted/automated requests
Outdated browser strings             Hardcoded UAs from old implant builds
User-Agent mismatch                  UA says Chrome but TLS JA3 doesn't match Chrome
Rare User-Agent in environment       Only one host uses this UA

Cross-Correlation: JA3 + User-Agent

A powerful detection technique: if the JA3 fingerprint doesn't match the claimed User-Agent, the client is lying about its identity.

IF user_agent CONTAINS "Chrome/1" AND ja3 != known_chrome_ja3_set:
    ALERT "TLS fingerprint inconsistent with claimed User-Agent"

This catches C2 frameworks that set a browser User-Agent but use their own TLS library (e.g., Python requests, Go net/http, .NET WebClient).
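A sketch of the cross-check; the baseline set would be populated from observed genuine Chrome traffic, and the hash values here are placeholders, not real fingerprints:

```python
# Placeholder baseline; real values come from captured Chrome sessions
KNOWN_CHROME_JA3 = {"<baseline-chrome-ja3-hash>"}

def ua_ja3_mismatch(user_agent: str, ja3: str) -> bool:
    """True when the client claims a Chrome UA but its TLS fingerprint
    is absent from the Chrome baseline set."""
    return "Chrome/1" in user_agent and ja3 not in KNOWN_CHROME_JA3

# A Go net/http implant spoofing a Chrome User-Agent
spoofed = ua_ja3_mismatch(
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0", "<go-net-http-ja3>")
```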

5.2 URI Pattern Analysis

C2 HTTP channels exhibit characteristic URI patterns:

Framework Typical URI Pattern
Cobalt Strike (default) /visit.js, /submit.php, /___utm.gif, malleable C2 profiles
Metasploit /random_4char, /INITM
Empire /login/process.php, /admin/get.php
PoshC2 Short alphanumeric paths
Generic RAT Single-level path, query string with encoded payload

Detection Approach

1. Baseline normal URI patterns for the environment
2. Alert on:
   - Repeated identical URIs to same destination (check-in pattern)
   - URIs with high entropy query strings (encoded payloads)
   - POST requests to uncommon paths with small body sizes
   - Consistent URI + consistent interval = beacon over HTTP

5.3 HTTP Beacon Detection (Combined)

Layer HTTP analytics on top of connection-level beacon analysis:

Beacon Score (HTTP) = f(
    interval_regularity,
    uri_consistency,        # same URI repeated
    user_agent_consistency, # same UA every time
    request_size_consistency,
    response_size_consistency,
    ja3_match              # TLS client consistency
)
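One illustrative way to combine these dimensions (the weights are arbitrary examples, not RITA's model; each input is assumed pre-normalized to [0, 1]):

```python
def http_beacon_score(interval_cv, uri_repeat_ratio, ua_repeat_ratio,
                      size_cv, ja3_consistent):
    """Weighted combination of HTTP beacon dimensions. Weights are
    illustrative only; tune against labeled traffic."""
    regularity = max(0.0, 1.0 - interval_cv)        # lower CV -> higher score
    size_consistency = max(0.0, 1.0 - size_cv)
    score = (0.35 * regularity
             + 0.20 * uri_repeat_ratio
             + 0.15 * ua_repeat_ratio
             + 0.20 * size_consistency
             + 0.10 * (1.0 if ja3_consistent else 0.0))
    return round(100 * score)

# Tight 60s beacon, identical URI/UA, consistent sizes and JA3
high = http_beacon_score(0.05, 1.0, 1.0, 0.10, True)
```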

6. TLS Certificate Analysis

Certificate Properties for C2 Detection

C2 servers often use self-signed or cheaply obtained certificates with telltale properties:

Property         Legitimate                          Suspicious
Issuer           Known CA (Let's Encrypt, DigiCert)  Self-signed, unknown CA
Validity period  90 days - 2 years                   Extremely long (10+ years) or very short
Subject CN       Matches domain                      Generic, random, or mismatched
SAN              Present, matches                    Missing or mismatched
Serial number    Proper entropy                      Sequential or default
Key size         2048+ RSA, 256+ EC                  Non-standard sizes

Cobalt Strike Default Certificate

Cobalt Strike's default HTTPS listener uses a self-signed certificate with well-known default values. Hunt for:

  • Default serial number patterns
  • Default issuer/subject strings
  • JARM fingerprint of the listener

Zeek TLS Log Fields

Zeek's ssl.log captures:

  • version: TLS version negotiated
  • cipher: Cipher suite selected
  • server_name: SNI from ClientHello
  • subject: Certificate subject
  • issuer: Certificate issuer
  • validation_status: Certificate chain validation result
  • ja3: Client JA3 hash
  • ja3s: Server JA3S hash

Detection Script: Self-Signed Certificate Alert

event ssl_established(c: connection)
{
    if ( c$ssl?$validation_status &&
         c$ssl$validation_status == "self signed certificate" )
    {
        if ( c$id$resp_h !in known_internal_tls_servers )
        {
            NOTICE([$note=SSL::Invalid_Server_Cert,
                    $conn=c,
                    $msg=fmt("Self-signed cert to external host %s (SNI: %s)",
                             c$id$resp_h,
                             c$ssl?$server_name ? c$ssl$server_name : "N/A"),
                    $suppress_for=1hr]);
        }
    }
}

7. Zeek Script Writing

Source: Zeek — docs.zeek.org

Architecture

Zeek is an event-driven network analysis framework. It passively monitors traffic and generates structured logs plus fires events that scripts can handle. It is NOT a signature-based IDS — it is a programmable network sensor.

Pipeline: Raw packets -> Protocol analyzers -> Events -> Scripts -> Logs/Notices

Core Data Types

# Network types
local ip: addr = 192.168.1.1;
local net: subnet = 10.0.0.0/8;
local p: port = 443/tcp;

# Time types
local t: time = network_time();
local d: interval = 5min;

# Collections
local ports: set[port] = {80/tcp, 443/tcp, 8080/tcp};
local counts: table[addr] of count = table();
local items: vector of string = vector("a", "b", "c");

# Records (structured data)
type MyRecord: record {
    src: addr;
    dst: addr;
    score: double;
    timestamp: time;
};

Event Handling

# Connection lifecycle events
event new_connection(c: connection) { }
event connection_established(c: connection) { }
event connection_state_remove(c: connection) { }

# Protocol-specific events
event http_request(c: connection, method: string, original_URI: string,
                   unescaped_URI: string, version: string) { }
event http_reply(c: connection, version: string, code: count, reason: string) { }
event http_header(c: connection, is_orig: bool, name: string, value: string) { }

event dns_request(c: connection, msg: dns_msg, query: string, qtype: count,
                  qclass: count) { }

event ssl_established(c: connection) { }

# Access connection record fields
event connection_established(c: connection)
{
    local src = c$id$orig_h;
    local dst = c$id$resp_h;
    local dst_port = c$id$resp_p;
    print fmt("%s -> %s:%s", src, dst, dst_port);
}

The Connection Record

# Key fields available in most handlers via `c: connection`
c$id$orig_h        # source IP
c$id$orig_p        # source port
c$id$resp_h        # destination IP
c$id$resp_p        # destination port
c$start_time       # connection start
c$duration         # connection duration
c$orig$size        # bytes from originator
c$resp$size        # bytes from responder
c$history          # connection state history string
c$service          # detected service set (e.g., {"ssl", "http"})

Custom Logging

module C2Detection;

export {
    # Define a new log stream
    redef enum Log::ID += { LOG };

    type Info: record {
        ts:         time    &log;
        src:        addr    &log;
        dst:        addr    &log;
        dst_port:   port    &log;
        score:      double  &log;
        reason:     string  &log;
    };
}

event zeek_init()
{
    Log::create_stream(C2Detection::LOG, [$columns=Info, $path="c2_detection"]);
}

# Write to the custom log
function log_c2_candidate(c: connection, score: double, reason: string)
{
    Log::write(C2Detection::LOG, [
        $ts=network_time(),
        $src=c$id$orig_h,
        $dst=c$id$resp_h,
        $dst_port=c$id$resp_p,
        $score=score,
        $reason=reason
    ]);
}

Notice Framework

module MyDetections;

export {
    redef enum Notice::Type += {
        MyDetections::C2_BEACON_DETECTED,
        MyDetections::SUSPICIOUS_DNS,
        MyDetections::ANOMALOUS_TLS
    };
}

event some_trigger(c: connection)
{
    NOTICE([$note=MyDetections::C2_BEACON_DETECTED,
            $conn=c,
            $msg="Beacon pattern detected to known bad infrastructure",
            $sub=fmt("JA3: %s, Interval: %.1fs", ja3_hash, avg_interval),
            $identifier=fmt("%s-%s", c$id$orig_h, c$id$resp_h),
            $suppress_for=1hr]);
}

Pattern Matching

# Regex matching
if ( uri ~ /\/admin\/(cmd|shell|exec)/ )
    print "Suspicious admin URI";

# String operations
if ( |domain| > 50 )  # length check
    print "Unusually long domain";

if ( "powershell" in to_lower(user_agent) )
    print "PowerShell User-Agent detected";

Practical Detection: HTTP Beacon Tracker

module HTTPBeacon;

export {
    redef enum Notice::Type += { HTTPBeacon::POTENTIAL_BEACON };
}

# Track connection timestamps per src-dst pair
global conn_times: table[addr, addr] of vector of time &create_expire=2hr;

event http_request(c: connection, method: string, original_URI: string,
                   unescaped_URI: string, version: string)
{
    local src = c$id$orig_h;
    local dst = c$id$resp_h;

    if ( [src, dst] !in conn_times )
        conn_times[src, dst] = vector();

    conn_times[src, dst] += network_time();

    if ( |conn_times[src, dst]| >= 20 )
    {
        # Calculate interval statistics
        local intervals: vector of double = vector();
        local times = conn_times[src, dst];
        local i = 1;
        while ( i < |times| )
        {
            intervals += interval_to_double(times[i] - times[i-1]);
            ++i;
        }

        # Compute mean and standard deviation
        local sum = 0.0;
        for ( idx in intervals )
            sum += intervals[idx];
        local mean = sum / |intervals|;

        local variance = 0.0;
        for ( idx in intervals )
            variance += (intervals[idx] - mean) * (intervals[idx] - mean);
        variance = variance / |intervals|;

        local stddev = sqrt(variance);
        local cv = stddev / mean;  # coefficient of variation

        if ( cv < 0.3 && mean > 5.0 )  # regular interval, not sub-second
        {
            NOTICE([$note=HTTPBeacon::POTENTIAL_BEACON,
                    $msg=fmt("Potential HTTP beacon: %s -> %s (mean=%.1fs, cv=%.3f, n=%d)",
                             src, dst, mean, cv, |intervals|),
                    $identifier=fmt("%s-%s", src, dst),
                    $suppress_for=2hr]);
        }
    }
}

8. Suricata Rule Syntax

Source: Suricata — docs.suricata.io

Rule Structure

action protocol source_ip source_port direction dest_ip dest_port (options;)

Actions

Action Behavior
alert Generate alert, continue processing
pass Stop inspecting this packet
drop Drop packet + alert (IPS mode)
reject Send RST/ICMP unreachable + drop
rejectsrc Reject to source only
rejectdst Reject to destination only
rejectboth Reject to both sides

Direction Operators

Operator Meaning
-> Unidirectional (source to destination)
<> Bidirectional (either direction)
=> Transactional (match across request + response)

Content Matching

# Basic content match
content:"malware";

# Case-insensitive
content:"MaLwArE"; nocase;

# Hex content
content:"|4d 5a 90 00|";  # MZ header

# Position modifiers
content:"GET"; offset:0; depth:3;     # First 3 bytes
content:"/cmd"; distance:0; within:20; # Within 20 bytes of previous match

# Negated content
content:!"legitimate";

# PCRE (Perl Compatible Regular Expressions)
pcre:"/\/[a-z]{4}\.php\?id=[0-9]+/i";

# Fast pattern (tells engine which content to use for prefiltering)
content:"specific_string"; fast_pattern;

Sticky Buffers (Protocol-Specific Inspection)

# HTTP inspection
http.method; content:"POST";
http.uri; content:"/api/beacon";
http.host; content:"evil.com";
http.user_agent; content:"Mozilla"; content:!"Chrome";
http.header; content:"X-Custom-Header";
http.request_body; content:"encoded_payload";
http.stat_code; content:"200";
http.response_body; content:"command";

# DNS inspection
dns.query; content:"tunnel.evil.com";

# TLS inspection
tls.sni; content:"c2.attacker.com";
tls.cert_subject; content:"CN=localhost";

# SSH inspection
ssh.software; content:"paramiko";

Flow Keywords

# Direction and state
flow:established,to_server;   # Established connection, client->server
flow:established,to_client;   # Established connection, server->client
flow:to_server;               # Any state, client->server
flow:stateless;               # No state tracking

# Flow bits (stateful detection across packets)
flowbits:set,seen_login;          # Set a flag
flowbits:isset,seen_login;        # Check if flag set
flowbits:noalert;                 # Don't alert on this rule, just set state

Thresholding and Rate Limiting

# Alert once per source IP per 60 seconds
threshold:type limit, track by_src, count 1, seconds 60;

# Alert only after 10 occurrences per source
threshold:type threshold, track by_src, count 10, seconds 60;

# Alert once then suppress for 5 minutes
threshold:type both, track by_src, count 1, seconds 300;

Practical C2 Detection Rules

Cobalt Strike Default Beacon URI

alert http $HOME_NET any -> $EXTERNAL_NET any (
    msg:"CIPHER - Possible Cobalt Strike Beacon URI";
    flow:established,to_server;
    http.method; content:"GET";
    http.uri; pcre:"/^\/(visit\.js|__utm\.gif)$/";
    threshold:type limit, track by_src, count 1, seconds 3600;
    classtype:trojan-activity;
    sid:2100001; rev:1;
)

Note: multiple content matches on the same buffer are ANDed, so matching alternative URIs requires pcre alternation (as above) or one rule per URI.

DNS Tunneling via Long Queries

alert dns $HOME_NET any -> any 53 (
    msg:"CIPHER - DNS Tunnel Suspected - Excessive Query Length";
    dns.query; content:"."; offset:52;
    threshold:type threshold, track by_src, count 20, seconds 60;
    classtype:bad-unknown;
    sid:2100002; rev:1;
)

Suspicious JA3 Hash (Metasploit)

alert tls $HOME_NET any -> $EXTERNAL_NET any (
    msg:"CIPHER - Known Metasploit JA3 Fingerprint";
    flow:established,to_server;
    ja3.hash; content:"05af1f5ca1b87cc9cc9b25185115607d";
    classtype:trojan-activity;
    sid:2100003; rev:1;
)

HTTP Beacon Pattern (Regular Interval + Small Payload)

alert http $HOME_NET any -> $EXTERNAL_NET any (
    msg:"CIPHER - Repeated HTTP POST to Same URI (Beacon Pattern)";
    flow:established,to_server;
    http.method; content:"POST";
    http.request_body; bsize:<500;
    threshold:type threshold, track by_src, count 20, seconds 3600;
    classtype:trojan-activity;
    sid:2100004; rev:1;
)

Note: bsize checks the length of the sticky buffer; dsize operates on raw packet payloads and does not apply to HTTP buffers.

Self-Signed Certificate to External Host

alert tls $HOME_NET any -> $EXTERNAL_NET any (
    msg:"CIPHER - TLS to External Host with Self-Signed Certificate";
    flow:established,to_server;
    tls.cert_issuer; content:"CN=";
    tls.cert_subject; content:"CN=";
    classtype:bad-unknown;
    sid:2100005; rev:1;
)

Note: Suricata cannot compare issuer to subject within a rule, so this is only a weak prefilter; confirm self-signed status via Zeek's ssl.log validation_status field.

User-Agent Anomaly

alert http $HOME_NET any -> $EXTERNAL_NET any (
    msg:"CIPHER - Python HTTP Client to External Host";
    flow:established,to_server;
    http.user_agent; content:"python-requests";
    threshold:type limit, track by_src, count 1, seconds 3600;
    classtype:bad-unknown;
    sid:2100006; rev:1;
)

9. Full Packet Capture Architecture

9.1 Arkime (formerly Moloch)

Source: Arkime — arkime.com

Architecture

                    +------------------+
   Network Tap --> | Capture Node (C) |---> PCAP files on local disk
                    +------------------+
                           |
                    Session metadata (SPI)
                           |
                           v
                    +------------------+
                    | OpenSearch /     |
                    | Elasticsearch    |
                    +------------------+
                           ^
                           |
                    +------------------+
                    | Viewer Node (JS) |---> Web UI / API
                    +------------------+

Three components:

  1. Capture: Threaded C application. Sniffs packets, writes PCAPs to local disk, parses protocols, sends session metadata to OpenSearch/Elasticsearch.
  2. Viewer: Node.js web application. Provides search interface, session detail views, PCAP download. One per capture node minimum.
  3. OpenSearch/Elasticsearch: Stores and indexes session metadata for fast retrieval. Does NOT store raw packets.

Capacity Planning

  • Capture throughput: Handles tens of Gbps per node with proper hardware
  • Storage: Dual-tier — PCAP retention based on disk capacity, metadata retention based on cluster resources
  • Typical ratio: ~1-5% of raw traffic volume stored as metadata in Elasticsearch

Key Capabilities

  • Session search with rich query language
  • SPI (Session Profile Information) view showing all extracted field values
  • Full PCAP download and reassembly for any session
  • API access for programmatic PCAP retrieval and JSON session data
  • Integration with external tools (Wireshark, CyberChef)
  • Multi-node distributed deployment with central viewer

Deployment Patterns

Pattern              Description
Single node          One capture + viewer + ES on same box (lab/small office)
Distributed sensors  Multiple capture nodes, shared ES cluster, gateway viewer
High availability    Redundant capture at network tap, ES cluster with replication

9.2 Integration with Detection Stack

Network Tap (span/tap)
    |
    +--> Suricata (IDS/IPS alerting)
    |
    +--> Zeek (protocol analysis, logs, scripting)
    |
    +--> Arkime (full packet capture + indexing)
    |
    +--> ntopng (flow monitoring, real-time visibility)

Workflow: Suricata or Zeek generates an alert -> analyst pivots to Arkime to pull the full PCAP for that session -> examines raw packets in Wireshark or Arkime viewer.
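The Arkime pivot in this workflow can be scripted against the viewer's REST API. The endpoint path and parameter names below (`/api/sessions/pcap`, `expression`, `startTime`, `stopTime`) are assumptions based on recent Arkime releases; verify against your viewer's API help page before relying on them:

```python
from urllib.parse import urlencode

def arkime_pcap_url(base_url: str, expression: str,
                    start_epoch: int, stop_epoch: int) -> str:
    """Build a PCAP-download URL for the Arkime viewer API.

    Endpoint path and parameter names are assumptions -- check your
    deployment's API documentation.
    """
    params = urlencode({
        "expression": expression,
        "startTime": start_epoch,
        "stopTime": stop_epoch,
    })
    return f"{base_url}/api/sessions/pcap?{params}"

url = arkime_pcap_url("https://arkime.local:8005",
                      "ip.dst == 203.0.113.50 && port.dst == 443",
                      1700000000, 1700003600)
print(url)
```

Fetching the URL (with authentication) returns a PCAP that can be opened directly in Wireshark.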

9.3 ntopng for Flow Monitoring

Source: ntopng

Real-time network traffic monitoring with:

  • nDPI deep packet inspection for application-layer protocol classification
  • sFlow/NetFlow/IPFIX ingestion for switch-based flow data
  • SNMP integration for infrastructure monitoring
  • eBPF support for host-level visibility
  • Alerting on traffic anomalies and threshold breaches

Complements full packet capture by providing real-time dashboards and flow-level analytics without the storage cost of full PCAP.


10. Network IOC Extraction

10.1 BruteShark — PCAP Forensics

Source: BruteShark

Extracts forensic artifacts from PCAP files:

Capability              Details
Credential Extraction   HTTP Digest, FTP, Telnet, IMAP, SMTP, NTLM (SMB), Kerberos, CRAM-MD5
Hash Extraction         Outputs Hashcat-compatible format for offline cracking
Network Mapping         Identifies nodes, open ports, domain users; exports to Neo4j
Session Reconstruction  Rebuilds complete TCP/UDP conversations
DNS Extraction          Recovers all DNS queries and responses
File Carving            Extracts JPG, PNG, PDF from TCP/UDP sessions
VoIP Reconstruction     Rebuilds SIP/RTP calls as audio files

Hashcat Mode Reference

Protocol     Hash Type         Hashcat Mode
HTTP Digest  HTTP-Digest       11400
SMTP/IMAP    CRAM-MD5          16400
SMB          NTLMv1            5500
SMB          NTLMv2            5600
Kerberos     AS-REQ etype 23   7500
Kerberos     AS-REP etype 23   18200
Kerberos     TGS-REP etype 23  13100
Kerberos     TGS-REP etype 17  19600
Kerberos     TGS-REP etype 18  19700
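The table above maps directly to a lookup helper for building cracking commands. `HASHCAT_MODES` and `hashcat_cmd` are hypothetical conveniences, not part of BruteShark:

```python
# Mapping taken from the mode-reference table above.
HASHCAT_MODES = {
    ("HTTP Digest", "HTTP-Digest"): 11400,
    ("SMTP/IMAP", "CRAM-MD5"): 16400,
    ("SMB", "NTLMv1"): 5500,
    ("SMB", "NTLMv2"): 5600,
    ("Kerberos", "AS-REQ etype 23"): 7500,
    ("Kerberos", "AS-REP etype 23"): 18200,
    ("Kerberos", "TGS-REP etype 23"): 13100,
    ("Kerberos", "TGS-REP etype 17"): 19600,
    ("Kerberos", "TGS-REP etype 18"): 19700,
}

def hashcat_cmd(protocol: str, hash_type: str,
                hash_file: str, wordlist: str) -> str:
    """Build a straight-wordlist hashcat command for an extracted hash."""
    mode = HASHCAT_MODES[(protocol, hash_type)]
    return f"hashcat -m {mode} -a 0 {hash_file} {wordlist}"

print(hashcat_cmd("SMB", "NTLMv2", "ntlmv2.txt", "rockyou.txt"))
# → hashcat -m 5600 -a 0 ntlmv2.txt rockyou.txt
```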

10.2 Maltrail — Threat-Intel-Driven Detection

Source: Maltrail

Architecture

Network Interface
      |
  +--------+       +---------+       +--------+
  | Sensor | ----> | Server  | ----> | Client |
  +--------+  UDP  +---------+  HTTP +--------+
  (captures)       (stores/          (web UI
                    correlates)       dashboard)

Detection Methods

Static trails: IP addresses, domains, URLs, and User-Agent strings from aggregated threat intelligence feeds including:

  • AbuseIPDB, AlienVault OTX, Turris greylist
  • ZeusTracker, Ransomware Tracker, VxVault
  • Cobalt Strike, TrickBot botnet trackers
  • OpenPhish, URLhaus
  • Known proxy/VPN/Tor exit lists

Heuristic detection:

  • Excessively long domain names (potential DGA/tunneling)
  • High NXDOMAIN rates (DGA probing)
  • Direct executable downloads over HTTP
  • Suspicious HTTP request patterns
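The long-domain and DGA heuristics above can be approximated with label length plus Shannon entropy. The thresholds below are illustrative starting points, not Maltrail's internal values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_dga(domain: str, max_label_len: int = 20,
              entropy_threshold: float = 3.5) -> bool:
    """Flag domains whose leading label is long or high-entropy.
    Crude sketch: ignores multi-part TLDs and legitimate CDN names."""
    label = domain.split(".")[0]
    return (len(label) > max_label_len
            or shannon_entropy(label) > entropy_threshold)

print(looks_dga("google.com"))                # False
print(looks_dga("xj4k2q9vb7zt1mw8ncp5.biz"))  # True
```

In production such a check would sit behind an allowlist, since randomized but benign hostnames (cloud load balancers, telemetry endpoints) will also trip it.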

Traffic Processing

The sensor monitors:

  • DNS queries against domain blacklists
  • HTTP traffic matching URL and User-Agent trails
  • TCP/UDP flows against IP reputation lists

Uses tcpdump BPF filters to minimize processing overhead on high-traffic links.

Event Logging

CSV format: timestamp, sensor, src_ip, src_port, dst_ip, dst_port, proto, trail_type, trail, info, reference

Export targets: local file, UDP to server, syslog (CEF), Logstash (JSON).
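A minimal parser for events in this field order (delimiter and quoting may vary by deployment; `parse_events` is a hypothetical helper, not part of Maltrail):

```python
import csv
import io

# Field order from the documented CSV format above.
MALTRAIL_FIELDS = ["timestamp", "sensor", "src_ip", "src_port", "dst_ip",
                   "dst_port", "proto", "trail_type", "trail", "info",
                   "reference"]

def parse_events(text: str) -> list[dict]:
    """Parse Maltrail CSV event lines into dicts keyed by field name."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(MALTRAIL_FIELDS, row)) for row in reader if row]

sample = ('"2026-03-14 10:01:02","sensor1","10.0.0.5","49152",'
          '"198.51.100.7","443","TCP","IP","198.51.100.7",'
          '"known C2 (feed)","alienvault"')
event = parse_events(sample)[0]
print(event["dst_ip"], event["trail_type"])  # 198.51.100.7 IP
```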

10.3 C2 Traffic Simulation for Testing

Source: flightsim

Generates realistic C2 traffic patterns to validate detection coverage:

Module        Simulates
c2            Beaconing to known C2 addresses (DNS + IP)
dga           Domain Generation Algorithm queries
tunnel-dns    DNS tunneling traffic
tunnel-icmp   ICMP tunneling traffic
scan          Port scanning activity
sink          Communication with known sinkhole addresses
miner         Cryptocurrency mining pool connections
spambot       SMTP spam relay behavior
ssh-exfil     Data exfiltration over SSH
ssh-transfer  Suspicious SSH file transfers
telegram-bot  Telegram bot API C2 channel
imposter      Domain impersonation (typosquatting)
cleartext     Cleartext credential transmission
irc           IRC-based C2 communication
oast          Out-of-band application security testing

# Run all simulations
flightsim run

# Run specific module
flightsim run c2
flightsim run dga
flightsim run tunnel-dns

# Dry run (show what would happen without generating traffic)
flightsim run --dry

10.4 Malware Traffic Analysis PCAPs

Source: malware-traffic-analysis.net

Archive of real-world malware PCAP captures dating from 2013 to present. Provides:

  • Traffic analysis exercises: Downloadable PCAPs with questions for skill development
  • Malware family coverage: Emotet, Trickbot, QakBot, IcedID, Cobalt Strike, and many more
  • IOC repositories: Published indicators on GitHub
  • Tutorials: Step-by-step analysis walkthroughs

Training approach: Download a PCAP, analyze it with Wireshark/Zeek/Suricata, identify the malware family, extract IOCs, and compare findings against the published analysis.


11. Tool Reference Matrix

Tool        Type                 Primary Use                       Input            Output
Zeek        Passive sensor       Protocol analysis, scripting      Raw packets      Structured logs (conn, dns, http, ssl, etc.)
Suricata    IDS/IPS              Signature-based detection         Raw packets      EVE JSON alerts
Arkime      Full PCAP            Packet capture + indexing         Raw packets      Indexed PCAPs + session metadata
RITA        Analyzer             Beacon/C2 detection               Zeek logs        Scored threat reports
ntopng      Flow monitor         Real-time traffic visibility      Packets/flows    Dashboards, alerts
Maltrail    Threat intel sensor  Known-bad traffic detection       Raw packets      Alert events
BruteShark  PCAP forensics       Credential/artifact extraction    PCAP files       Credentials, maps, files
flightsim   Red team             C2 traffic simulation             Config           Simulated malicious traffic
JA3/JA3S    Fingerprinting       TLS client/server identification  TLS handshakes   MD5 hashes
JARM        Fingerprinting       Active TLS server scanning        Probe responses  62-char fingerprint
HASSH       Fingerprinting       SSH client/server identification  SSH handshakes   MD5 hashes
ClamAV      AV engine            Malware scanning in streams       Files/streams    Detection verdicts

12. Detection Engineering Playbooks

Playbook 1: Investigate a Beacon Candidate

1. IDENTIFY: RITA flags src->dst pair with beacon score >= 80
2. ENRICH:
   a. Pull JA3/JA3S from Zeek ssl.log for this pair
   b. Check JA3 against known malware hash databases
   c. JARM scan the destination (if authorized)
   d. Check destination IP/domain against threat intel
   e. Review HASSH if SSH-based
3. ANALYZE TRAFFIC:
   a. Pull full PCAP from Arkime for this session
   b. Examine HTTP headers (User-Agent, URIs, cookie patterns)
   c. Verify TLS certificate properties
   d. Check for data volume anomalies (exfiltration indicator)
4. CORRELATE:
   a. Are other internal hosts beaconing to the same destination?
   b. Does the source host show other suspicious behavior?
   c. What process on the host initiated these connections? (EDR pivot)
5. DECIDE:
   a. Confirmed C2 -> Initiate IR (containment, evidence preservation)
   b. Suspicious but inconclusive -> Add to watchlist, increase monitoring
   c. False positive -> Add to RITA whitelist with justification
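The interval-consistency signal behind step 1 can be sketched as a toy score: a low coefficient of variation in inter-arrival times means beacon-like traffic. This is illustrative only, not RITA's actual scoring model:

```python
import statistics

def beacon_score(timestamps: list[float]) -> float:
    """Toy regularity score in [0, 1]: 1 - coefficient of variation
    of inter-arrival times, floored at 0. Not RITA's model."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2:
        return 0.0
    mean = statistics.mean(deltas)
    if mean <= 0:
        return 0.0
    cv = statistics.stdev(deltas) / mean
    return max(0.0, 1.0 - cv)

# 60-second implant callbacks with small jitter vs. stochastic browsing
implant = [i * 60 + j for i, j in enumerate([0, 1, -1, 2, 0, 1, -2, 0, 1, 0])]
human = [0, 7, 95, 110, 400, 405, 900, 1300, 1310, 2000]
print(round(beacon_score(implant), 2), round(beacon_score(human), 2))
```

Real implants add configurable jitter, which is why production tools score dispersion across histogram buckets rather than raw standard deviation.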

Playbook 2: DNS Anomaly Investigation

1. IDENTIFY: Alert on high NXDOMAIN rate, long queries, or tunnel indicators
2. CLASSIFY:
   a. DGA: High entropy domains, NXDOMAIN flood, no consistent parent domain
   b. Tunnel: Long subdomains, high query volume to single parent domain, TXT records
   c. Fast flux: Same domain resolving to many IPs, low TTL
3. EXTRACT:
   a. Parent domain(s) from Zeek dns.log
   b. Query frequency and volume statistics
   c. Response types and sizes
   d. Querying host identification
4. VALIDATE:
   a. WHOIS/passive DNS on suspicious domains
   b. Check domain age (recently registered = suspicious)
   c. Cross-reference with threat intel feeds
5. RESPOND:
   a. Confirmed malicious -> Block domain at DNS resolver, isolate host
   b. DGA identified -> Identify malware family from DGA pattern, deploy signatures
   c. Tunnel confirmed -> Block, investigate what data was exfiltrated
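Steps 1-3 can be prototyped against Zeek dns.log in JSON format. Field names assume Zeek's JSON logging (`id.orig_h`, `query`, `rcode_name`); the thresholds are illustrative starting points:

```python
import json
from collections import Counter

def dns_anomalies(lines, nx_threshold=0.5, min_queries=10, max_len=50):
    """Flag sources with high NXDOMAIN ratios and collect overly long
    query names from JSON-format Zeek dns.log lines."""
    total, nx = Counter(), Counter()
    long_queries = []
    for line in lines:
        rec = json.loads(line)
        src = rec.get("id.orig_h")
        total[src] += 1
        if rec.get("rcode_name") == "NXDOMAIN":
            nx[src] += 1
        if len(rec.get("query", "")) > max_len:
            long_queries.append((src, rec["query"]))
    flagged = [s for s in total
               if total[s] >= min_queries
               and nx[s] / total[s] >= nx_threshold]
    return flagged, long_queries

# Synthetic demo: one host burning through nonexistent domains (DGA-like)
lines = [json.dumps({"id.orig_h": "10.0.0.9",
                     "query": f"q{i}.example.net",
                     "rcode_name": "NXDOMAIN" if i < 7 else "NOERROR"})
         for i in range(10)]
flagged, longs = dns_anomalies(lines)
print(flagged)  # ['10.0.0.9']
```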

Playbook 3: TLS Anomaly Investigation

1. IDENTIFY: Alert on unusual JA3, self-signed cert, certificate anomaly
2. FINGERPRINT:
   a. Record JA3 + JA3S pair
   b. Extract certificate details from ssl.log
   c. JARM scan destination (if external and authorized)
3. CROSS-REFERENCE:
   a. JA3 vs. User-Agent consistency check
   b. JA3 against known malware databases
   c. JARM against known C2 framework signatures
   d. Certificate issuer/subject/serial against threat intel
4. CONTEXT:
   a. Is this the only host using this JA3? (unique client = suspicious)
   b. Is the destination IP in a hosting range? (VPS = C2 likely)
   c. What is the connection pattern? (beacon-like = escalate)
5. RESPOND:
   a. Known C2 fingerprint match -> Immediate IR
   b. Anomalous but unconfirmed -> Monitor, collect more data
   c. Legitimate unusual client -> Document and baseline
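The uniqueness check in step 4a amounts to grouping source hosts by JA3 hash. A sketch over JSON-format ssl.log lines, assuming the ja3 Zeek package populates a `ja3` field:

```python
import json
from collections import defaultdict

def rare_ja3(ssl_lines, max_hosts: int = 3) -> dict:
    """Return JA3 fingerprints seen on fewer than max_hosts source hosts,
    from JSON-format Zeek ssl.log lines (assumes 'ja3' and 'id.orig_h')."""
    hosts = defaultdict(set)
    for line in ssl_lines:
        rec = json.loads(line)
        if rec.get("ja3"):
            hosts[rec["ja3"]].add(rec["id.orig_h"])
    return {j: sorted(h) for j, h in hosts.items() if len(h) < max_hosts}

# Synthetic demo: a fleet-wide browser JA3 vs. a single-host outlier
lines = ([json.dumps({"id.orig_h": f"10.0.0.{i}", "ja3": "aaaa"})
          for i in range(10)]
         + [json.dumps({"id.orig_h": "10.0.0.66", "ja3": "bbbb"})])
print(rare_ja3(lines))  # {'bbbb': ['10.0.0.66']}
```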

Playbook 4: Comprehensive Network Threat Hunt

Week 1: Beacon Hunting
  - Import 7 days of Zeek logs into RITA
  - Review all beacon scores >= 70
  - Cross-reference top candidates with JA3/JARM databases
  - Investigate top 10 candidates through Playbook 1

Week 2: DNS Hunting
  - Analyze DNS query entropy distribution
  - Identify domains with >100 unique subdomains queried
  - Flag all NXDOMAIN sources exceeding baseline by 2x
  - Investigate anomalies through Playbook 2

Week 3: TLS/SSH Hunting
  - Catalog all unique JA3 hashes in environment
  - Identify JA3 values seen on fewer than 3 hosts (rare clients)
  - Review all self-signed certificate connections to external hosts
  - Catalog HASSH values, flag non-standard SSH implementations
  - Investigate anomalies through Playbook 3

Week 4: Lateral Movement Hunting
  - Analyze internal-to-internal traffic patterns
  - Flag unusual SMB/WinRM/SSH/RDP connections
  - Identify new source-destination pairs not seen in prior 30 days
  - Credential extraction from any suspicious internal PCAPs (BruteShark)
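The new-pair hunt in Week 4 reduces to a set difference over (src, dst) tuples pulled from conn.log; a minimal sketch with hypothetical inputs:

```python
def new_pairs(baseline_conns, current_conns):
    """Return (src, dst) pairs present in the current window but absent
    from the 30-day baseline. Inputs are iterables of (src, dst) tuples."""
    baseline = set(baseline_conns)
    return sorted(set(current_conns) - baseline)

# Illustrative data; in practice both sets come from Zeek conn.log
baseline = [("10.0.0.5", "10.0.0.20"), ("10.0.0.5", "10.0.0.21")]
current = [("10.0.0.5", "10.0.0.20"), ("10.0.0.8", "10.0.0.30")]
print(new_pairs(baseline, current))  # [('10.0.0.8', '10.0.0.30')]
```

At enterprise scale the same idea holds, but the baseline set is usually bucketed by protocol and port so that a new RDP pair stands out from routine web traffic.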

Appendix A: Quick Reference — Fingerprint Comparison

Method       Direction         Protocol  Fields Hashed                                    Hash Type              Active/Passive
JA3          Client -> Server  TLS       Version, Ciphers, Extensions, EC, ECFormats      MD5                    Passive
JA3S         Server -> Client  TLS       Version, Cipher, Extensions                      MD5                    Passive
JARM         Probe -> Server   TLS       10 probe responses (cipher+version+extensions)   Hybrid (raw + SHA256)  Active
HASSH        Client -> Server  SSH       KEX, Encryption, MAC, Compression algorithms     MD5                    Passive
hasshServer  Server -> Client  SSH       KEX, Encryption, MAC, Compression algorithms     MD5                    Passive
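Each passive method above hashes a comma- and dash-joined string of handshake fields. As a sketch of the JA3 construction (decoded field values are assumed inputs; raw ClientHello parsing and GREASE stripping, which the JA3 spec requires, are out of scope here):

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3-style MD5 from already-decoded ClientHello fields:
    version,ciphers,extensions,curves,point_formats with '-' within
    fields and ',' between them."""
    fields = [str(version)] + [
        "-".join(str(v) for v in vals)
        for vals in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Example values are illustrative, not a real client's handshake.
print(ja3_digest(771, [4865, 4866, 4867], [0, 11, 10], [29, 23], [0]))
```

JA3S and HASSH follow the same pattern over their respective field lists, which is why all three produce 32-character MD5 strings.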

Appendix B: Zeek Log Files for Network Forensics

Log File    Contents                                      C2 Detection Use
conn.log    All connections (5-tuple, duration, bytes)    Beacon analysis, long connections
dns.log     DNS queries and responses                     DGA, tunneling, fast flux
http.log    HTTP requests (method, URI, UA, status)       C2 URI patterns, UA anomalies
ssl.log     TLS handshake metadata (JA3, certs, SNI)      TLS fingerprinting, cert analysis
ssh.log     SSH handshakes (HASSH, version, auth)         SSH anomaly detection
files.log   File transfers over any protocol              Malware delivery, exfiltration
notice.log  Zeek-generated alerts                         Custom detection output
weird.log   Protocol violations and anomalies             Evasion attempts, malformed traffic
x509.log    Certificate details                           Certificate property analysis
pe.log      Portable Executable metadata                  Malware binary transfers

Appendix C: Suricata Rule Writing Checklist

[ ] Action appropriate (alert vs drop vs reject)
[ ] Protocol specified correctly (tcp/udp/http/dns/tls/ssh)
[ ] Network variables used ($HOME_NET, $EXTERNAL_NET)
[ ] Flow direction set (to_server, to_client, established)
[ ] Sticky buffers used instead of deprecated content modifiers
[ ] fast_pattern set on the most distinctive content match
[ ] Threshold/rate limiting configured to prevent alert fatigue
[ ] classtype assigned for severity categorization
[ ] Unique SID assigned (check local SID range)
[ ] Revision number set
[ ] Message is descriptive and follows naming convention
[ ] False positive scenarios documented
[ ] ATT&CK tags included where applicable
[ ] Rule tested against sample traffic (replay with suricata -r pcap)
