GRC, Risk Management & Security Program Leadership — Deep Dive


CIPHER training module: Governance, Risk, and Compliance (GRC), risk management frameworks, security program management, and executive advisory knowledge.


Table of Contents

  1. Risk Assessment Methodology: Quantitative (FAIR) vs Qualitative
  2. Security Program Maturity Models
  3. Board-Level Reporting
  4. CVSS v4.0 Scoring Guide
  5. Vulnerability Management Lifecycle
  6. Third-Party Risk Management
  7. Security Budget Justification Frameworks
  8. Incident Classification & Escalation Matrices
  9. GRC Framework Reference

1. Risk Assessment Methodology

1.1 Qualitative Risk Assessment

Qualitative methods assign categorical ratings (e.g., Low/Medium/High/Critical) to likelihood and impact. They are fast, accessible to non-technical stakeholders, and sufficient for initial risk triage.

Standard Qualitative Risk Matrix (5x5)

                        IMPACT
               Negligible  Minor  Moderate  Major  Catastrophic
            ┌──────────┬───────┬──────────┬───────┬─────────────┐
  Almost    │  Medium  │ High  │  High    │ Crit  │   Critical  │
  Certain   │          │       │          │       │             │
            ├──────────┼───────┼──────────┼───────┼─────────────┤
  Likely    │   Low    │Medium │  High    │ High  │   Critical  │
            ├──────────┼───────┼──────────┼───────┼─────────────┤
  Possible  │   Low    │ Low   │  Medium  │ High  │    High     │
            ├──────────┼───────┼──────────┼───────┼─────────────┤
  Unlikely  │  Info    │ Low   │   Low    │Medium │    High     │
            ├──────────┼───────┼──────────┼───────┼─────────────┤
  Rare      │  Info    │ Info  │   Low    │ Low   │   Medium    │
            └──────────┴───────┴──────────┴───────┴─────────────┘
LIKELIHOOD

Strengths: Fast, intuitive, good for initial triage and non-technical audiences.

Weaknesses: Subjective, inconsistent across raters, cannot support ROI calculations, prone to "risk anchoring" bias (ratings cluster around Medium), and offers no defensible basis for prioritization when everything is rated "High."

Common Qualitative Pitfalls

  • Ordinal scale abuse: Treating High=3, Medium=2, Low=1 and doing arithmetic on ordinal data (3+2 != 5 in risk terms)
  • Range compression: Assessors avoid extremes, clustering risks around Medium/High
  • Phantom precision: Presenting qualitative outputs with decimal scores creates false confidence
  • Inconsistent calibration: "High likelihood" means different things to different assessors without calibration
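The ordinal-scale pitfall above is avoided by treating the matrix as a pure lookup table rather than doing arithmetic on scale numbers. A minimal Python sketch of the 5x5 matrix from section 1.1 (the function name is illustrative):

```python
# Lookup-based qualitative rating: ratings come straight from the 5x5
# matrix; no arithmetic is ever performed on the ordinal labels.
RISK_MATRIX = {
    "Almost Certain": {"Negligible": "Medium", "Minor": "High", "Moderate": "High",
                       "Major": "Critical", "Catastrophic": "Critical"},
    "Likely":         {"Negligible": "Low", "Minor": "Medium", "Moderate": "High",
                       "Major": "High", "Catastrophic": "Critical"},
    "Possible":       {"Negligible": "Low", "Minor": "Low", "Moderate": "Medium",
                       "Major": "High", "Catastrophic": "High"},
    "Unlikely":       {"Negligible": "Info", "Minor": "Low", "Moderate": "Low",
                       "Major": "Medium", "Catastrophic": "High"},
    "Rare":           {"Negligible": "Info", "Minor": "Info", "Moderate": "Low",
                       "Major": "Low", "Catastrophic": "Medium"},
}

def qualitative_rating(likelihood: str, impact: str) -> str:
    """Return the matrix rating for a likelihood/impact pair."""
    return RISK_MATRIX[likelihood][impact]

print(qualitative_rating("Likely", "Major"))   # High
```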

1.2 Quantitative Risk Assessment: FAIR Framework

FAIR (Factor Analysis of Information Risk) is an open standard for quantitative cyber risk analysis, published by The Open Group as the Open FAIR Body of Knowledge (the O-RT Risk Taxonomy and O-RA Risk Analysis standards). It decomposes risk into measurable factors expressed in financial terms.

FAIR Taxonomy (Decomposition Tree)

RISK
├── Loss Event Frequency (LEF)
│   ├── Threat Event Frequency (TEF)
│   │   ├── Contact Frequency (CF)
│   │   └── Probability of Action (PoA)
│   └── Vulnerability (probability that a threat event produces a loss)
└── Loss Magnitude (LM)
    ├── Primary Loss Magnitude (direct costs)
    └── Secondary Loss
        ├── Secondary Loss Event Frequency (SLEF)
        └── Secondary Loss Magnitude (SLM)

FAIR Core Components

Factor | Definition | Measurement
-------|------------|------------
Loss Event Frequency (LEF) | How often a loss event occurs per timeframe | Events/year (range)
Threat Event Frequency (TEF) | How often a threat agent acts against an asset | Events/year
Vulnerability (Vuln) | Probability that a threat event produces a loss | 0.0 – 1.0 probability
Contact Frequency (CF) | How often threat agent encounters asset | Events/year
Probability of Action (PoA) | Probability threat agent acts once contact occurs | 0.0 – 1.0
Primary Loss Magnitude (PLM) | Direct costs from event (response, replacement, lost revenue) | Dollar range
Secondary Loss Event Frequency (SLEF) | Probability that secondary stakeholders react | 0.0 – 1.0
Secondary Loss Magnitude (SLM) | Costs from secondary reactions (fines, lawsuits, reputation) | Dollar range

FAIR Analysis Process

  1. Identify the scenario: Asset + Threat + Effect (e.g., "External attacker exfiltrates customer PII from production database")
  2. Estimate Loss Event Frequency: Decompose into TEF and Vulnerability
  3. Estimate Loss Magnitude: Decompose into Primary and Secondary losses
  4. Run Monte Carlo simulation: Input ranges as distributions (PERT, lognormal), run 10,000+ iterations
  5. Produce loss exceedance curve: X-axis = loss amount, Y-axis = probability of exceeding that amount
  6. Compare scenarios: Rank risks by annualized loss expectancy (ALE) or Value at Risk (VaR)

Monte Carlo Simulation in FAIR

For each iteration (n = 10,000):
  1. Sample TEF from distribution (e.g., PERT: min=1, mode=5, max=20 events/year)
  2. Sample Vulnerability from beta distribution (e.g., alpha=2, beta=5 → ~29% mean)
  3. LEF = TEF × Vulnerability
  4. Sample Primary Loss from lognormal (e.g., mean=$500K, stdev=$200K)
  5. Sample Secondary Loss probability and magnitude
  6. Total Loss = LEF × (Primary Loss + Secondary Loss probability × Secondary Loss)
  7. Record result

Output: Distribution of annualized loss exposure
  - 10th percentile: $150K (optimistic)
  - 50th percentile: $1.2M (most likely)
  - 90th percentile: $4.5M (pessimistic)
  - 95th percentile: $7.2M (worst reasonable case)
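The iteration loop above can be sketched with Python's standard library alone. The distribution choices and parameters below are illustrative stand-ins: a triangular distribution substitutes for PERT, and the lognormal parameters are arbitrary, not calibrated estimates.

```python
import random
import statistics

def simulate_ale(n=10_000, seed=42):
    """Monte Carlo FAIR sketch: return n sampled annualized losses."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        tef = rng.triangular(1, 20, 5)         # events/year: (min, max, mode)
        vuln = rng.betavariate(2, 5)           # P(threat event -> loss), mean ~0.29
        lef = tef * vuln                       # loss events per year
        primary = rng.lognormvariate(13, 0.4)  # primary loss magnitude, $
        slef = rng.betavariate(2, 8)           # P(secondary stakeholder reaction)
        slm = rng.lognormvariate(12, 0.6)      # secondary loss magnitude, $
        losses.append(lef * (primary + slef * slm))
    return losses

losses = simulate_ale()
q = statistics.quantiles(losses, n=100)        # 99 cut points: q[9] is P10, etc.
print(f"P10 ${q[9]:,.0f}  P50 ${q[49]:,.0f}  P90 ${q[89]:,.0f}  P95 ${q[94]:,.0f}")
```

The sorted `losses` list is also exactly what a loss exceedance curve plots: for each loss amount on the x-axis, the fraction of iterations exceeding it.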

Calibrated Estimation Techniques

Quantitative methods require calibrated subject matter experts. Key techniques:

  • Equivalent bet test: "Would you bet $1000 on this estimate?" — forces honest confidence assessment
  • Reference class forecasting: Compare to known base rates (e.g., Verizon DBIR breach frequency data)
  • Decomposition: Break complex estimates into simpler components (FAIR decomposition tree)
  • Interval estimation: Always express as ranges with confidence intervals, never point estimates
  • Pre-mortem analysis: "Assume this estimate is wrong — what would cause that?"

FAIR vs Qualitative: Decision Guide

Criterion | Use Qualitative | Use FAIR/Quantitative
----------|-----------------|----------------------
Audience | Technical risk triage | Board, CFO, insurance
Decision type | Prioritize remediation backlog | Justify $2M security investment
Time available | Hours | Days to weeks
Data maturity | Low — no loss data | Some historical data or industry benchmarks
Regulatory need | Checkbox compliance | Defensible risk decisions (SOX, DORA)
Output needed | Risk register heat map | Loss exceedance curves, ALE, VaR

FAIR Extensions

  • FAIR-CAM (Controls Analytics Model): Maps controls to FAIR factors — quantifies how much a control reduces LEF or LM
  • FAIR-MAM (Materiality Assessment Model): Determines whether a risk scenario is "material" for disclosure purposes (SEC, DORA)
  • FAIR for AI Risk: Applies FAIR decomposition to AI/ML-specific threat scenarios

1.3 Risk Registers

A risk register is the central artifact of a risk management program. Whether the approach is qualitative or quantitative, each entry should capture at least:

┌──────────┬──────────────┬───────────┬──────────┬──────────┬──────────┬──────────┬──────────┐
│ Risk ID  │ Description  │ Category  │ Like-    │ Impact   │ Risk     │ Owner    │ Treatment│
│          │              │           │ lihood   │          │ Rating   │          │          │
├──────────┼──────────────┼───────────┼──────────┼──────────┼──────────┼──────────┼──────────┤
│ R-001    │ Ransomware   │ Threat    │ Likely   │ Major    │ High     │ CISO     │ Mitigate │
│          │ encryption   │           │ (FAIR:   │ (FAIR:   │ (FAIR:   │          │          │
│          │ of prod DB   │           │ 2-8/yr)  │ $1-5M)   │ $3.2M    │          │          │
│          │              │           │          │          │ ALE)     │          │          │
├──────────┼──────────────┼───────────┼──────────┼──────────┼──────────┼──────────┼──────────┤
│ R-002    │ Third-party  │ Supply    │ Possible │ Moderate │ Medium   │ VP Eng   │ Transfer │
│          │ SaaS breach  │ Chain     │ (1-3/yr) │ ($200K-  │ ($450K   │          │ (cyber   │
│          │ exposing PII │           │          │ $800K)   │ ALE)     │          │ insurance│
└──────────┴──────────────┴───────────┴──────────┴──────────┴──────────┴──────────┴──────────┘

Risk Treatment Options (ISO 31000):

  • Mitigate: Implement controls to reduce likelihood or impact
  • Transfer: Shift risk to third party (insurance, outsourcing)
  • Accept: Formally acknowledge and document residual risk (requires risk owner signature)
  • Avoid: Eliminate the activity that creates the risk
  • Exploit: (For positive risks/opportunities) Increase probability of beneficial outcome

2. Security Program Maturity Models

2.1 CMMI-Based Security Maturity

The Capability Maturity Model Integration adapted for cybersecurity:

Level | Name | Characteristics | Security Indicators
------|------|-----------------|--------------------
1 | Initial | Ad hoc, reactive, hero-dependent | No formal policies, firefighting mode, undocumented processes
2 | Managed | Basic processes defined per project | Written policies exist, roles assigned, basic asset inventory
3 | Defined | Organization-wide standards | Formal risk assessment, security architecture, training program
4 | Quantitatively Managed | Metrics-driven decisions | KPIs/KRIs tracked, FAIR analysis, SLA-driven response times
5 | Optimizing | Continuous improvement, innovation | Automated compliance, threat-informed defense, red team program

2.2 SOC Maturity Model

For Security Operations Center capability assessment:

Level | Detection | Response | Threat Intel | Automation | Metrics
------|-----------|----------|--------------|------------|--------
1 — Initial | Signature-based AV/IDS only | Reactive, no playbooks | None | None | None
2 — Defined | SIEM with basic correlation | Documented playbooks | IOC feeds consumed | Basic scripting | Alert volume tracked
3 — Repeatable | Custom detection rules, Sigma | Tiered response, SLAs | TTP-based hunting | SOAR for L1 triage | MTTD, MTTR tracked
4 — Managed | Behavioral analytics, ML | Automated containment | CTI team, MITRE mapping | Full SOAR integration | Detection coverage %
5 — Optimizing | Detection-as-code, CI/CD | Proactive threat hunting | Intel-driven red team | AI-assisted analysis | ATT&CK coverage heat map

SOC Maturity Assessment Criteria

PEOPLE
  ├── Analyst skill levels and certifications
  ├── Staffing model (8x5 vs 24x7 vs MSSP hybrid)
  ├── Training budget and cadence
  └── Analyst burnout and retention metrics

PROCESS
  ├── Playbook coverage (% of alert types with documented response)
  ├── Escalation procedures
  ├── Shift handoff quality
  └── Tabletop exercise frequency

TECHNOLOGY
  ├── Detection tool coverage (endpoint, network, cloud, identity)
  ├── Log source completeness
  ├── SOAR integration depth
  └── Tooling consolidation vs sprawl

METRICS
  ├── MTTD (Mean Time to Detect)
  ├── MTTR (Mean Time to Respond)
  ├── False positive rate
  ├── Detection coverage vs ATT&CK
  └── Analyst cases per shift

2.3 NIST CSF Implementation Tiers

NIST CSF 2.0 defines four implementation tiers (not maturity levels — they describe risk management rigor):

Tier | Name | Risk Management | Integration | External Participation
-----|------|-----------------|-------------|-----------------------
1 | Partial | Ad hoc, reactive | Limited awareness | Irregular collaboration
2 | Risk Informed | Approved but not org-wide | Some risk awareness | Informal sharing
3 | Repeatable | Formal policy, regularly updated | Org-wide understanding | Active collaboration
4 | Adaptive | Continuously improved, lessons learned | Risk-informed culture | Contributes to ecosystem

2.4 C2M2 (Cybersecurity Capability Maturity Model)

DOE-developed model, strong in critical infrastructure:

  • MIL0: Not performed
  • MIL1: Initiated (ad hoc)
  • MIL2: Performed (documented, resourced)
  • MIL3: Managed (policies, accountability, metrics)

Covers 10 domains: Asset Management, Threat and Vulnerability Management, Risk Management, Identity and Access Management, Situational Awareness, Event and Incident Response, Supply Chain and External Dependencies, Workforce Management, Cybersecurity Architecture, Cybersecurity Program Management.


3. Board-Level Reporting

3.1 Principles of Board Communication

Boards need to understand risk in business terms, not technical jargon.

Critical Rules:

  1. Lead with business impact, not technical findings
  2. Use financial language: ALE, VaR, loss exposure — not CVSS scores
  3. Compare to peers: Industry benchmarks contextualize your posture
  4. Show trend lines: Direction matters more than absolute numbers
  5. Present decisions, not data: "We need $X to reduce exposure from $Y to $Z"
  6. Maximum 5 slides for quarterly update; detailed appendix available on request

3.2 Board Reporting Framework

Quarterly Security Report Structure

SLIDE 1: Risk Posture Summary
  ├── Top 5 risks with ALE and trend arrows (↑↓→)
  ├── Risk appetite threshold line on chart
  └── Material changes since last quarter

SLIDE 2: Incident & Threat Landscape
  ├── Incident count by severity (trend over 4 quarters)
  ├── Notable incidents (1-2 sentences each, business impact)
  ├── Industry threat landscape changes (ransomware trends, regulatory)
  └── Near-miss events worth noting

SLIDE 3: Program Performance Metrics
  ├── MTTD / MTTR trends
  ├── Vulnerability remediation SLA compliance
  ├── Phishing simulation results
  ├── Audit finding closure rate
  └── Maturity model score vs target vs industry

SLIDE 4: Compliance & Regulatory Status
  ├── Audit timeline and findings status
  ├── Regulatory changes requiring action
  ├── Certification status (SOC 2, ISO 27001, PCI DSS)
  └── Third-party risk exposure changes

SLIDE 5: Investment & Resource Requests
  ├── Current spend vs budget
  ├── Proposed investments with ROI/risk reduction justification
  ├── Resource gaps (headcount, tooling)
  └── Decision items requiring board approval

3.3 Key Risk Indicators (KRIs) for the Board

KRI | Target | Red Threshold | Measurement
----|--------|---------------|------------
Mean Time to Detect (MTTD) | < 24 hours | > 72 hours | Median across all P1/P2 incidents
Mean Time to Respond (MTTR) | < 4 hours | > 24 hours | Time from detection to containment
Critical vuln remediation | < 7 days | > 30 days | P95 of critical CVE patch time
Phishing click rate | < 3% | > 10% | Monthly simulation campaigns
Third-party risk assessments | 100% of critical vendors | < 80% | Annual assessment completion
Backup recovery test success | > 99% | < 95% | Quarterly recovery drills
Security training completion | > 95% | < 80% | Annual training + quarterly refreshers
Overdue audit findings | 0 critical, < 5 high | Any critical overdue | Monthly tracking
Cyber insurance coverage ratio | > 80% of estimated max loss | < 50% | Annual review
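Board dashboards typically reduce each KRI to a green/amber/red status against thresholds like those above. A minimal sketch for lower-is-better KRIs (the helper name is illustrative):

```python
def kri_status(value: float, target: float, red: float) -> str:
    """Status for a lower-is-better KRI: green at or within target,
    red at or past the red threshold, amber in between."""
    if value <= target:
        return "green"
    if value >= red:
        return "red"
    return "amber"

# MTTD in hours: target < 24h, red threshold > 72h
print(kri_status(18, target=24, red=72))   # green
print(kri_status(48, target=24, red=72))   # amber
print(kri_status(90, target=24, red=72))   # red
```

Higher-is-better KRIs (training completion, scan coverage) simply invert the comparisons.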

3.4 Risk Appetite Statement

A formal risk appetite statement is essential for board governance:

[ORGANIZATION] RISK APPETITE STATEMENT

The Board accepts the following risk tolerance levels:

STRATEGIC RISKS
  - We accept risks associated with [market expansion] where potential
    upside exceeds estimated downside by 3:1 or greater.

CYBERSECURITY RISKS
  - We do NOT accept risks that could result in:
    - Loss of > 10,000 customer records (regulatory notification threshold)
    - Operational downtime exceeding 4 hours for critical systems
    - Financial loss exceeding $5M from a single cyber event
  - We accept residual risk where:
    - Annualized loss expectancy is below $500K after controls
    - Risk has been formally assessed using quantitative methods
    - Risk owner has signed acceptance with compensating controls documented

COMPLIANCE RISKS
  - Zero tolerance for known regulatory non-compliance
  - Accepted risk of regulatory change impact with 90-day remediation SLA

THIRD-PARTY RISKS
  - Critical vendors must maintain SOC 2 Type II or equivalent
  - Maximum acceptable concentration: no single vendor for > 30% of
    critical infrastructure

Review: Annually or upon material change in threat landscape.
Approved: [Board Chair], [Date]

4. CVSS v4.0 Scoring Guide

4.1 Metric Groups Overview

CVSS v4.0 comprises four metric groups:

Group | Purpose | Affects Score | Who Sets
------|---------|---------------|---------
Base | Intrinsic vulnerability characteristics | Yes | Vendor/CNA
Threat | Current exploit landscape | Yes | Consumer/Intel team
Environmental | Organization-specific context | Yes | Consumer/Risk team
Supplemental | Additional context | No | Vendor or Consumer

4.2 Base Metrics (All Mandatory)

Exploitability Metrics

Metric | Values | Description
-------|--------|------------
Attack Vector (AV) | Network (N), Adjacent (A), Local (L), Physical (P) | How the vulnerability is exploited. Network is worst (remotely exploitable).
Attack Complexity (AC) | Low (L), High (H) | Conditions beyond attacker control. Low = no special conditions. High = must evade active security measures.
Attack Requirements (AT) | None (N), Present (P) | Deployment/execution conditions. None = works against all instances. Present = requires race condition, MITM, etc.
Privileges Required (PR) | None (N), Low (L), High (H) | Authentication level needed.
User Interaction (UI) | None (N), Passive (P), Active (A) | None = no user needed. Passive = involuntary (opening email). Active = user must click/install.

Impact Metrics (Vulnerable System + Subsequent System)

Metric | Values | Scope
-------|--------|------
VC / SC (Confidentiality) | High (H), Low (L), None (N) | V = vulnerable system, S = subsequent/downstream systems
VI / SI (Integrity) | High (H), Low (L), None (N) | High = total loss of integrity
VA / SA (Availability) | High (H), Low (L), None (N) | High = complete denial of service

Key change from v3.1: The Scope metric is replaced by separate Vulnerable System (V*) and Subsequent System (S*) impact metrics, providing clearer modeling of cross-boundary impact.

4.3 Threat Metrics

Metric | Values | Description
-------|--------|------------
Exploit Maturity (E) | Not Defined (X), Attacked (A), POC (P), Unreported (U) | Attacked = active exploitation in the wild. POC = public proof-of-concept exists. Unreported = no known exploits.

Operational guidance: Default "Not Defined" is treated as worst-case (equivalent to Attacked). Organizations should actively set this based on threat intelligence to reduce scores for vulnerabilities with no known exploitation.

4.4 Environmental Metrics

Metric | Values | Purpose
-------|--------|--------
CR, IR, AR (Security Requirements) | High (H), Medium (M), Low (L), Not Defined (X) | Weight CIA impact based on asset criticality. A PII database gets CR:H.
Modified Base Metrics | Override any Base metric value | Customize scoring for local deployment. E.g., set MAV:L if network vector is mitigated by network segmentation.
MSI, MSA | Safety (S) | Special value for when human safety is at risk (IEC 61508). Elevates score.

4.5 Supplemental Metrics (Do Not Affect Score)

Metric | Values | Use Case
-------|--------|---------
Safety (S) | Present (P), Negligible (N) | Human safety impact (ICS/OT, medical devices)
Automatable (AU) | Yes (Y), No (N) | Can exploitation be fully automated? Affects wormability assessment.
Provider Urgency (U) | Red, Amber, Green, Clear | Vendor-recommended urgency
Recovery (R) | Automatic (A), User (U), Irrecoverable (I) | Can system self-recover?
Value Density (V) | Concentrated (C), Diffuse (D) | Rich target vs limited resources per target
Vulnerability Response Effort (RE) | Low (L), Moderate (M), High (H) | Effort to remediate

4.6 Nomenclature

Label | Meaning | When to Use
------|---------|------------
CVSS-B | Base score only | Initial vendor advisory
CVSS-BT | Base + Threat | After threat intel assessment
CVSS-BE | Base + Environmental | After asset context applied
CVSS-BTE | Base + Threat + Environmental | Full contextual risk score (gold standard)

4.7 Severity Rating Scale

Score Range | Severity
------------|---------
0.0 | None
0.1 – 3.9 | Low
4.0 – 6.9 | Medium
7.0 – 8.9 | High
9.0 – 10.0 | Critical

4.8 Vector String Format

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N
         ^Base metrics (all 11 required)

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:A
         ^Base                                                     ^Threat

Full example with Environmental:
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:A/CR:H/IR:M/AR:L
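A v4.0 vector is just "/"-separated KEY:VALUE pairs after the CVSS:4.0 prefix, so parsing is mechanical. The sketch below splits a vector into a dict and maps a numeric score onto the severity bands from section 4.7; it does not compute the score itself, which requires the official MacroVector lookup tables:

```python
def parse_cvss4_vector(vector: str) -> dict:
    """Split a CVSS v4.0 vector string into a metric -> value dict."""
    prefix, _, rest = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"not a CVSS v4.0 vector: {prefix}")
    return dict(part.split(":", 1) for part in rest.split("/"))

def severity(score: float) -> str:
    """Map a CVSS score onto its qualitative severity band."""
    if score == 0.0:
        return "None"
    for upper, label in [(3.9, "Low"), (6.9, "Medium"),
                         (8.9, "High"), (10.0, "Critical")]:
        if score <= upper:
            return label
    raise ValueError("score out of range")

m = parse_cvss4_vector(
    "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:A")
print(m["AV"], m["E"], severity(9.3))   # N A Critical
```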

4.9 Key Differences: CVSS v4.0 vs v3.1

Change | v3.1 | v4.0
-------|------|-----
Scope | Single Scope metric (Changed/Unchanged) | Eliminated — replaced by V*/S* impact split
User Interaction | None/Required | None/Passive/Active (finer granularity)
Attack Requirements | N/A | New metric for deployment-specific conditions
Threat Metrics | Temporal group (E, RL, RC) | Simplified to Exploit Maturity (E) only
Supplemental Metrics | N/A | New group (Safety, Automatable, Recovery, etc.)
Scoring Method | Formula-based | MacroVector-based interpolation
OT/Safety | Not addressed | Safety value in Environmental + Supplemental

5. Vulnerability Management Lifecycle

5.1 Lifecycle Phases

 ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
 │ DISCOVER │──>│ ASSESS   │──>│PRIORITIZE│──>│REMEDIATE │──>│ VERIFY   │──>│ REPORT   │
 │          │   │          │   │          │   │          │   │          │   │          │
 │ Scan     │   │ CVSS-BTE │   │ Risk-    │   │ Patch    │   │ Rescan   │   │ Metrics  │
 │ Inventory│   │ Context  │   │ based    │   │ Mitigate │   │ Validate │   │ Trending │
 │ Pen test │   │ Threat   │   │ ranking  │   │ Accept   │   │ Pen test │   │ Comply   │
 └──────────┘   │ intel    │   │          │   │ Transfer │   │          │   │          │
                └──────────┘   └──────────┘   └──────────┘   └──────────┘   └──────────┘
                                                   ↑                              │
                                                   └──────────────────────────────┘
                                                        Continuous cycle

5.2 Discovery

Method | Coverage | Frequency | Blind Spots
-------|----------|-----------|------------
Network vulnerability scan | Known hosts, known CVEs | Weekly-monthly | Shadow IT, 0-days, logic flaws
Authenticated scan | Deep OS/app config | Weekly | Agent deployment gaps
Agent-based scan | Continuous, including offline | Real-time | Unmanaged assets
Container image scan | CI/CD pipeline, registries | Every build | Runtime-only vulns
DAST/IAST | Web application layer | Sprint cycle | API-only endpoints
SAST | Source code vulnerabilities | Every commit | Runtime config issues
SCA | Open source dependencies | Every build | Vendored/forked code
Penetration testing | Business logic, chained attacks | Quarterly-annually | Coverage depth
Bug bounty | Continuous external perspective | Continuous | Scope-limited

5.3 Risk-Based Prioritization

CVSS alone is insufficient. Effective prioritization combines:

EFFECTIVE PRIORITY = f(CVSS-BTE, Exploit Status, Asset Criticality,
                       Business Context, Threat Intelligence, Exposure)

Prioritization Factors:
  1. CVSS-BTE score (contextualized, not just Base)
  2. EPSS score (Exploit Prediction Scoring System — probability of exploitation in 30 days)
  3. CISA KEV catalog membership (Known Exploited Vulnerabilities — mandatory patch per BOD 22-01)
  4. Asset criticality tier (Crown jewels vs commodity)
  5. Network exposure (internet-facing vs internal-only vs air-gapped)
  6. Compensating controls (WAF, segmentation, EDR coverage)
  7. Data sensitivity (PII, PHI, financial, classified)

Recommended SLA Framework

Priority | Criteria | Remediation SLA | Escalation
---------|----------|-----------------|-----------
P1 — Critical | CVSS-BTE >= 9.0 AND (CISA KEV OR active exploitation) AND internet-facing | 24-72 hours | CISO + VP Eng immediately
P2 — High | CVSS-BTE 7.0-8.9 OR any actively exploited vuln | 7 days | Director of Security
P3 — Medium | CVSS-BTE 4.0-6.9 on business-critical systems | 30 days | Security team lead
P4 — Low | CVSS-BTE 0.1-3.9 OR mitigated by compensating controls | 90 days | Standard sprint planning
P5 — Accepted | Risk accepted with documented justification | Review annually | Risk owner
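The criteria in the SLA table can be encoded directly. A sketch (simplifying assumptions are noted in the docstring and comments; P5 is a governance decision rather than a computed outcome):

```python
def assign_priority(cvss_bte: float, in_kev: bool, actively_exploited: bool,
                    internet_facing: bool, business_critical: bool,
                    compensating_controls: bool) -> str:
    """Map a finding onto the P1-P4 tiers from the SLA table.

    Assumptions: a 9.0+ finding that misses the P1 conditions falls to P2,
    and compensating controls downgrade anything not actively exploited to P4.
    P5 (formal risk acceptance) is a governance decision, not computed here.
    """
    if cvss_bte >= 9.0 and (in_kev or actively_exploited) and internet_facing:
        return "P1"    # 24-72 hour SLA, escalate to CISO + VP Eng
    if compensating_controls and not actively_exploited:
        return "P4"    # mitigated by compensating controls
    if cvss_bte >= 7.0 or actively_exploited:
        return "P2"    # 7 day SLA
    if cvss_bte >= 4.0 and business_critical:
        return "P3"    # 30 day SLA
    return "P4"        # 90 day SLA, standard sprint planning
```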

5.4 Vulnerability Disclosure

Coordinated Disclosure Process

RESEARCHER                             VENDOR                          PUBLIC
    │                                     │                               │
    ├──[Report vulnerability]───────────>│                               │
    │<──[Acknowledge receipt, ≤5 days]───┤                               │
    │                                     ├──[Triage & validate]         │
    │<──[Confirm vuln, share timeline]───┤                               │
    │                                     ├──[Develop fix]               │
    │<──[Request retest]─────────────────┤                               │
    ├──[Verify fix]─────────────────────>│                               │
    │                                     ├──[Release patch]────────────>│
    │                                     ├──[Publish advisory + CVE]───>│
    ├──[Publish writeup with credit]────────────────────────────────────>│
    │                                     │                               │
    Standard timeline: 90 days from report to public disclosure
    (Google Project Zero standard)

Vulnerability Advisory Requirements

  • High-level summary including business impact
  • Affected and patched versions
  • Configuration-specific caveats
  • Workarounds and mitigations
  • CVE identifier
  • Optional: disclosure timeline, researcher credit, technical details, IDS/IPS signatures

5.5 CVE Program Integration

CVE (Common Vulnerabilities and Exposures) provides standardized identification:

  • CVE ID format: CVE-YYYY-NNNN (year plus a sequential number of four or more digits)
  • CNAs (CVE Numbering Authorities): Organizations authorized to assign CVE IDs (vendors, CERTs, bug bounty platforms)
  • CNA-LR (Last Resort): MITRE serves as CNA of last resort when no other CNA covers the product
  • CVE Record: Structured data including description, affected products, references, and CVSS scores

Integration with VM programs: CVE IDs serve as the canonical identifier linking scanner findings, threat intelligence, vendor advisories, and remediation tracking across all tools.

5.6 Vulnerability Management Metrics

Metric | Formula | Target | Board-Level?
-------|---------|--------|-------------
Mean Time to Remediate (MTTR) | avg(remediation_date - discovery_date) by severity | P1: <3d, P2: <7d | Yes
SLA Compliance Rate | vulns_within_SLA / total_vulns × 100 | > 95% | Yes
Vulnerability Density | open_vulns / total_assets | Decreasing trend | No
Scan Coverage | scanned_assets / total_assets × 100 | > 98% | Yes
Recurrence Rate | reintroduced_vulns / total_remediated × 100 | < 5% | No
EPSS-weighted Backlog | Sum of EPSS scores for open vulns | Decreasing | No
KEV Compliance | CISA KEV vulns patched within deadline / total KEV vulns | 100% | Yes
Exception/Acceptance Rate | accepted_risks / total_findings × 100 | < 10% | Yes
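The MTTR and SLA-compliance formulas reduce to a few lines over a findings list. A sketch with illustrative data and field layout:

```python
from datetime import date
from statistics import mean

# (discovered, remediated, sla_days) -- illustrative findings data
findings = [
    (date(2025, 1, 1), date(2025, 1, 3), 7),
    (date(2025, 1, 5), date(2025, 1, 20), 7),
    (date(2025, 2, 1), date(2025, 2, 4), 30),
]

# Days each finding stayed open, in the same order as findings
days_open = [(fixed - found).days for found, fixed, _ in findings]
mttr_days = mean(days_open)
sla_compliance = 100 * sum(
    d <= sla for d, (_, _, sla) in zip(days_open, findings)) / len(findings)

print(f"MTTR: {mttr_days:.1f} days, SLA compliance: {sla_compliance:.0f}%")
```

In practice these would be computed per severity tier (P1, P2, ...) rather than over the whole backlog, matching the table's "by severity" formula.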

6. Third-Party Risk Management

6.1 TPRM Lifecycle

┌──────────┐   ┌──────────┐   ┌──────────┐   ┌───────────┐   ┌──────────┐
│ IDENTIFY │──>│ ASSESS   │──>│ CONTRACT │──>│ MONITOR   │──>│ OFFBOARD │
│          │   │          │   │          │   │           │   │          │
│ Inventory│   │ Risk     │   │ Security │   │ Continuous│   │ Data     │
│ Classify │   │ question-│   │ addendum │   │ monitoring│   │ return/  │
│ Tier     │   │ naire    │   │ SLAs     │   │ Reassess  │   │ destroy  │
│          │   │ SOC 2    │   │ Right to │   │ Breach    │   │ Access   │
│          │   │ Pen test │   │ audit    │   │ notif.    │   │ revoke   │
└──────────┘   └──────────┘   └──────────┘   └───────────┘   └──────────┘

6.2 Vendor Tiering

Tier | Criteria | Assessment Depth | Reassessment
-----|----------|------------------|-------------
Critical | Processes/stores sensitive data, single point of failure, regulatory scope | Full assessment: SOC 2 Type II, pen test results, on-site audit, FAIR analysis | Annually + continuous monitoring
High | Access to internal systems, moderate data exposure | SOC 2 Type II or ISO 27001 cert, security questionnaire, SCA review | Annually
Medium | Limited data access, non-critical service | Security questionnaire (SIG Lite), SOC 2 Type I acceptable | Every 2 years
Low | No data access, commodity service, easily replaceable | Self-attestation, public security page review | Every 3 years

6.3 Assessment Components

Documentation Review:

  • SOC 2 Type II report (review scope, exceptions, complementary user entity controls)
  • ISO 27001 certificate (check scope covers relevant services)
  • Penetration test executive summary (< 12 months old)
  • Business continuity and disaster recovery plans
  • Incident response plan and breach notification procedures
  • Privacy policy and data processing agreement (DPA)

Security Questionnaire (SIG/SIG Lite/CAIQ):

  • 18 risk domains in SIG (Shared Assessments Standardized Information Gathering)
  • CAIQ (Consensus Assessment Initiative Questionnaire) for cloud providers
  • Custom questions for organization-specific requirements

Technical Assessment (for Critical/High tiers):

  • Network architecture review
  • Encryption standards (at rest, in transit, key management)
  • Identity and access management practices
  • Vulnerability management program maturity
  • Logging and monitoring capabilities
  • Data residency and sovereignty

6.4 Contractual Security Requirements

Essential security clauses for vendor contracts:

SECURITY ADDENDUM CHECKLIST

□ Data classification and handling requirements
□ Encryption standards (minimum TLS 1.2, AES-256 at rest)
□ Access control requirements (MFA, least privilege, quarterly reviews)
□ Background check requirements for personnel with data access
□ Incident notification timeline (≤ 24 hours for confirmed breaches)
□ Right to audit (annual, with 30-day notice)
□ Penetration testing requirements (annual, by qualified third party)
□ Subprocessor notification and approval requirements
□ Data return/destruction upon termination (certified destruction)
□ Insurance minimums (cyber liability, E&O)
□ Compliance with applicable regulations (GDPR, CCPA, HIPAA BAA)
□ Business continuity requirements (RTO/RPO SLAs)
□ Termination for cause on material security breach
□ Indemnification for security failures
□ Limitation of liability carve-out for security/privacy breaches

6.5 Supply Chain Security for Software

SCA (Software Composition Analysis):

  • Track all open-source dependencies (direct and transitive)
  • Monitor for known CVEs via SCA tools in CI/CD
  • SBOM (Software Bill of Materials) generation in SPDX or CycloneDX format
  • License compliance verification

Third-Party JavaScript Controls (OWASP guidance):

  • Subresource Integrity (SRI): Cryptographic hash verification of external scripts (integrity="sha384-...")
  • Content Security Policy (CSP): Restrict script sources to approved domains
  • Data layer isolation: Restrict vendor tag access to host-controlled data objects, not raw DOM
  • iframe sandboxing: Isolate vendor scripts in sandboxed iframes with minimal permissions
  • RetireJS: Scan for known-vulnerable JavaScript libraries
  • MarSecOps: Coordinate marketing, security, and operations teams for tag management

6.6 Concentration Risk

Monitor and limit vendor concentration:

  • No single vendor for > 30% of critical infrastructure
  • Geographic diversity for critical services
  • Multi-cloud or cloud-exit strategy documented
  • Escrow arrangements for critical proprietary software
  • Regular testing of vendor replacement procedures

7. Security Budget Justification Frameworks

7.1 Risk-Based ROI Model

The most defensible approach ties security investment to quantifiable risk reduction:

SECURITY INVESTMENT ROI CALCULATION

Without Control:
  Annual Loss Expectancy (ALE_before) = LEF × LM = $3.2M

With Proposed Control ($400K annual cost):
  ALE_after = reduced LEF × reduced LM = $800K

Risk Reduction = ALE_before - ALE_after = $2.4M
ROI = (Risk Reduction - Control Cost) / Control Cost
ROI = ($2.4M - $400K) / $400K = 500%

Payback Period = Control Cost / Annual Risk Reduction
Payback Period = $400K / $2.4M per year ≈ 0.17 years ≈ 2 months
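The calculation above, as a reusable sketch (the figures are the worked example's, not benchmarks):

```python
def security_roi(ale_before, ale_after, control_cost):
    """ROI and payback period (months) for a control, given ALE with and without it."""
    reduction = ale_before - ale_after            # annual risk reduction
    roi = (reduction - control_cost) / control_cost
    payback_months = control_cost * 12 / reduction
    return roi, payback_months

roi, payback = security_roi(3_200_000, 800_000, 400_000)
# roi == 5.0 (i.e. 500%), payback == 2.0 months
```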

7.2 Security Budget Benchmarking

Industry                 Security Spend (% of IT Budget)   Source
Financial Services       10-14%                            Deloitte, Gartner
Healthcare               6-10%                             HIMSS
Technology               8-12%                             Gartner
Retail                   5-8%                              NRF
Government               8-12%                             Federal CISO surveys
Cross-industry median    ~6-8%                             Gartner 2024

Per-employee benchmarks: $2,500-$5,000/employee/year for mature programs.

7.3 Budget Justification Approaches

Approach 1: Loss Avoidance (FAIR-Based)

  • Quantify top 10 risk scenarios using FAIR
  • Show ALE reduction from proposed controls
  • Present loss exceedance curves: "With this investment, we reduce the probability of a >$10M loss from 15% to 2%"
  • Compare to cyber insurance premiums (if control costs < premium reduction, it pays for itself)
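The loss-exceedance statement ("probability of a >$10M loss") can be estimated with a small Monte Carlo sketch. Everything here is an assumption for illustration: Poisson event arrivals (modeled via exponential inter-arrival times), per-event losses drawn uniformly in log-space, and the hypothetical scenario parameters. A full FAIR analysis would use calibrated distributions instead.

```python
import math
import random

def loss_exceedance_prob(threshold, events_per_year, loss_low, loss_high,
                         trials=20_000, seed=7):
    """Monte Carlo estimate of P(annual loss > threshold)."""
    rng = random.Random(seed)
    log_lo, log_hi = math.log(loss_low), math.log(loss_high)
    exceed = 0
    for _ in range(trials):
        total = 0.0
        t = rng.expovariate(events_per_year)     # Poisson process: exp. inter-arrivals
        while t < 1.0:                           # events within one simulated year
            total += math.exp(rng.uniform(log_lo, log_hi))   # log-uniform loss draw
            t += rng.expovariate(events_per_year)
        if total > threshold:
            exceed += 1
    return exceed / trials

# Hypothetical scenario: ~0.5 events/year, $50K-$5M per event
p = loss_exceedance_prob(10_000_000, 0.5, 50_000, 5_000_000)
```

Sweeping `threshold` over a range of loss values yields the points of a loss exceedance curve.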

Approach 2: Compliance-Driven

  • Map regulatory requirements to gaps
  • Calculate cost of non-compliance (fines, operational restrictions, loss of business)
  • Present as mandatory investment, not optional
  • Example: GDPR fines up to 4% of global annual turnover; DORA fines for financial institutions; SEC reporting requirements

Approach 3: Peer Benchmarking

  • Compare spending levels to industry peers (Gartner, ISC2, SANS surveys)
  • Identify specific gaps vs peer capability
  • Frame as competitive disadvantage: "Our competitors have 24x7 SOC; we do not"

Approach 4: Insurance Alignment

  • Work with cyber insurance broker to identify premium reduction opportunities
  • Demonstrate control improvements that reduce premiums
  • Frame security investment as insurance cost optimization
  • Show coverage gaps that leave the organization self-insured for specific scenarios

7.4 Budget Allocation Model

Recommended allocation for a mature security program:

SECURITY BUDGET ALLOCATION

People (40-50%)
  ├── Security team salaries and benefits     35%
  ├── Training and certifications              3%
  └── Contractors and consultants              7%

Technology (30-35%)
  ├── Endpoint protection (EDR/XDR)            8%
  ├── Network security (FW, IDS/IPS, NDR)      5%
  ├── Identity and access management            5%
  ├── SIEM/SOAR                                 5%
  ├── Vulnerability management                  3%
  ├── Cloud security (CSPM, CWPP)               3%
  ├── Application security (SAST, DAST, SCA)    3%
  └── Other tools                               3%

Services (15-20%)
  ├── Penetration testing                       3%
  ├── Managed security services (MSSP/MDR)      5%
  ├── Incident response retainer                2%
  ├── Threat intelligence feeds                 2%
  ├── Audit and compliance                      3%
  └── Cyber insurance                           3%

Program (5-10%)
  ├── Security awareness training               2%
  ├── Tabletop exercises                        1%
  ├── GRC platform                              2%
  └── Contingency/emerging threats              3%
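Turning the percentage model into a dollar plan is mechanical once weights are fixed. The weights below are illustrative round numbers chosen to sum to 100%, roughly matching the ranges above, not prescribed values.

```python
def allocate_budget(total, weights_pct):
    """Split a total budget across categories by integer percentage weights."""
    assert sum(weights_pct.values()) == 100
    return {cat: total * pct // 100 for cat, pct in weights_pct.items()}

# Illustrative weights (hypothetical)
weights = {"People": 45, "Technology": 32, "Services": 16, "Program": 7}
plan = allocate_budget(2_000_000, weights)
# plan["People"] == 900_000; allocations sum to the full $2M
```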

8. Incident Classification and Escalation Matrices

8.1 Incident Severity Classification

SEV-1 (Critical)
  Definition: Active compromise, data exfiltration, or service destruction affecting production; business-critical systems down.
  Examples:   Active ransomware encryption, confirmed breach of regulated data, complete loss of production, nation-state intrusion

SEV-2 (High)
  Definition: Confirmed compromise with contained scope, significant security control failure, or major service degradation.
  Examples:   Compromised admin credentials, malware outbreak on multiple endpoints, partial production impact, failed DR test

SEV-3 (Medium)
  Definition: Potential compromise requiring investigation, policy violation with security implications, or minor service impact.
  Examples:   Suspicious C2 beaconing, unauthorized access attempt, single malware detection, phishing campaign targeting executives

SEV-4 (Low)
  Definition: Security event requiring tracking, minor policy violation, or informational finding.
  Examples:   Blocked brute-force login attempts, low-severity vulnerability on a non-critical system, lost unencrypted USB drive

SEV-5 (Informational)
  Definition: Normal security events, FYI notifications, or process triggers.
  Examples:   Security tool update, scheduled scan results, policy acknowledgment tracking

8.2 Escalation Matrix

SEVERITY   INITIAL RESPONSE    ESCALATION CHAIN                  COMMUNICATION

SEV-1      Immediate           T+0:  SOC Analyst → SOC Manager   T+0:   CISO notification
(Critical) (24x7 on-call)      T+15m: CISO → CTO/CEO            T+30m: Exec bridge call
                                T+30m: IR Lead activated          T+1h:  Legal counsel
                                T+1h:  External IR retainer       T+4h:  Board notification
                                T+24h: Regulatory counsel         T+72h: Regulatory filing
                                                                         (if required)

SEV-2      < 30 minutes        T+0:  SOC Analyst → SOC Lead      T+0:   SOC internal
(High)     (24x7 on-call)      T+1h: SOC Lead → Security Dir     T+1h:  CISO briefing
                                T+4h: IR team fully engaged       T+4h:  CTO notification
                                T+24h: Status to CISO             Daily: Status updates

SEV-3      < 2 hours           T+0:  SOC Analyst triage           T+0:   Ticket created
(Medium)   (business hours)    T+4h: Assign to security engineer  T+24h: Team standup update
                                T+24h: Update CISO if escalated    Weekly: Status report

SEV-4      < 8 hours           T+0:  SOC Analyst triage           Ticket created
(Low)      (business hours)    Standard investigation workflow     Weekly report inclusion

SEV-5      Next business day   Automated processing               Monthly metrics only
(Info)
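The initial-response targets in the matrix can be encoded as a simple SLA check. The minute values below are read from the matrix, with SEV-1's "immediate" modeled as 0 minutes and SEV-5's next-business-day handling omitted for simplicity.

```python
# Initial-response targets in minutes (SEV-1 "immediate" modeled as 0)
RESPONSE_SLA_MIN = {"SEV-1": 0, "SEV-2": 30, "SEV-3": 120, "SEV-4": 480}

def sla_breached(severity: str, minutes_to_response: float) -> bool:
    """True if the initial response exceeded the severity's SLA target."""
    return minutes_to_response > RESPONSE_SLA_MIN.get(severity, float("inf"))

sev2_breach = sla_breached("SEV-2", 45)   # True: 45 min against a 30-min target
sev3_breach = sla_breached("SEV-3", 90)   # False: within the 2-hour target
```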

8.3 Incident Classification Taxonomy

PRIMARY CATEGORIES

MALWARE
  ├── Ransomware (encryption, extortion, double-extortion)
  ├── Wiper (destructive)
  ├── RAT / Backdoor
  ├── Cryptominer
  ├── Worm (self-propagating)
  └── Dropper / Loader

UNAUTHORIZED ACCESS
  ├── Credential compromise (phishing, stuffing, brute force)
  ├── Privilege escalation
  ├── Insider threat (malicious, negligent)
  ├── Third-party/supply chain compromise
  └── Physical unauthorized access

DATA INCIDENT
  ├── Confirmed breach (exfiltration confirmed)
  ├── Exposure (data accessible but exfiltration unconfirmed)
  ├── Loss (physical media, unencrypted backup)
  └── Unauthorized disclosure (accidental email, misconfigured storage)

DENIAL OF SERVICE
  ├── Volumetric DDoS
  ├── Application-layer attack
  ├── Resource exhaustion
  └── Ransomware-induced outage

POLICY VIOLATION
  ├── Acceptable use violation
  ├── Unauthorized software/shadow IT
  ├── Data handling violation
  └── Access control bypass

8.4 Regulatory Notification Triggers

GDPR (Art. 33/34)
  Trigger:  Breach of personal data likely to result in risk to individuals
  Timeline: 72 hours to the DPA; to individuals without undue delay if high risk
  To whom:  Supervisory authority + data subjects

HIPAA (Breach Notification Rule)
  Trigger:  Unsecured PHI acquired, accessed, used, or disclosed
  Timeline: 60 days to HHS; to individuals without unreasonable delay
  To whom:  HHS OCR + affected individuals + media (if >500 affected)

SEC (8-K Item 1.05)
  Trigger:  Material cybersecurity incident
  Timeline: 4 business days after materiality determination
  To whom:  SEC filing

CCPA/CPRA
  Trigger:  Breach of unencrypted PI affecting CA residents
  Timeline: Expedient, without unreasonable delay
  To whom:  Affected CA residents + AG (if >500 affected)

NIS2 (EU)
  Trigger:  Significant incident affecting service delivery
  Timeline: 24h early warning, 72h notification, 1-month final report
  To whom:  CSIRT / competent authority

DORA (EU Financial)
  Trigger:  Major ICT-related incident
  Timeline: Initial: immediately; intermediate: 72h; final: 1 month
  To whom:  Competent authority

PCI DSS
  Trigger:  Breach involving cardholder data
  Timeline: Immediately upon discovery
  To whom:  Card brands + acquiring bank

State breach laws (US)
  Trigger:  Varies by state; typically unauthorized acquisition of PI
  Timeline: Varies: 30-90 days depending on state
  To whom:  AG + affected individuals
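Deadlines with fixed-hour windows can be computed mechanically from the determination timestamp. This sketch covers only that simple case (HIPAA's 60 days converted to hours; the SEC's business-day rule and every regulation's conditional triggers are deliberately omitted), so treat it as illustration, not legal guidance.

```python
from datetime import datetime, timedelta

# Fixed-hour notification windows from the table above (simplified)
NOTIFY_WINDOW_HOURS = {
    "GDPR": 72,
    "NIS2-early-warning": 24,
    "DORA-intermediate": 72,
    "HIPAA": 60 * 24,   # 60 days, expressed in hours
}

def notification_deadline(regulation: str, determined_at: datetime) -> datetime:
    """Deadline after the breach/materiality determination timestamp."""
    return determined_at + timedelta(hours=NOTIFY_WINDOW_HOURS[regulation])

deadline = notification_deadline("GDPR", datetime(2026, 3, 10, 9, 0))
# deadline == datetime(2026, 3, 13, 9, 0)
```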

8.5 Incident Response Metrics

Metric                   Definition                                 Target                              Measurement
MTTD                     Initial compromise → detection             < 24h (industry median: ~7 days)    Per-incident tracking
MTTC                     Detection → containment                    SEV-1: < 4h; SEV-2: < 24h           Per-incident tracking
MTTR                     Detection → full eradication               SEV-1: < 72h                        Per-incident tracking
Incident volume          Total incidents per period, by severity    Trending downward                   Monthly
Escalation accuracy      Correctly classified incidents / total     > 90%                               Quarterly review
Playbook coverage        Incident types with documented playbooks   > 90%                               Quarterly
Tabletop frequency       Exercises conducted per year               ≥ 4 (quarterly)                     Annual
After-action completion  Lessons-learned items completed on time    > 80%                               Per-incident
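The time-based metrics all reduce to averaging timestamp deltas. A sketch with hypothetical incident records:

```python
from datetime import datetime
from statistics import mean

def mean_hours(pairs):
    """Mean elapsed hours across (start, end) timestamp pairs."""
    return mean((end - start).total_seconds() / 3600 for start, end in pairs)

# Hypothetical (compromise, detection) pairs -> mean time to detect; the same
# helper works for (detection, containment) or (detection, eradication) pairs.
incidents = [
    (datetime(2026, 3, 1, 0, 0), datetime(2026, 3, 1, 12, 0)),   # 12h
    (datetime(2026, 3, 5, 0, 0), datetime(2026, 3, 6, 12, 0)),   # 36h
]
mttd = mean_hours(incidents)   # 24.0 hours
```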

9. GRC Framework Reference

9.1 NIST Cybersecurity Framework 2.0

Six core functions (Govern was added in CSF 2.0):

                    ┌─────────┐
                    │ GOVERN  │  ← NEW in CSF 2.0
                    │ (GV)    │  Organizational context, risk
                    │         │  management strategy, roles,
                    └────┬────┘  policies, oversight
                         │
     ┌──────────┬────────┼────────┬──────────┬──────────┐
     │          │        │        │          │          │
┌────┴───┐ ┌───┴────┐ ┌─┴──────┐ ┌┴────────┐ ┌┴────────┐
│IDENTIFY│ │PROTECT │ │ DETECT │ │ RESPOND │ │RECOVER  │
│ (ID)   │ │ (PR)   │ │ (DE)   │ │ (RS)    │ │ (RC)    │
│        │ │        │ │        │ │         │ │         │
│Asset   │ │Access  │ │Anomaly │ │Mgmt     │ │Recovery │
│mgmt,   │ │control,│ │& event │ │Analysis │ │planning,│
│risk    │ │training│ │detect, │ │Mitigat. │ │improve- │
│assess, │ │data    │ │continu-│ │Communic.│ │ments,   │
│supply  │ │security│ │ous mon.│ │Reporting│ │communic.│
│chain   │ │tech    │ │        │ │         │ │         │
└────────┘ └────────┘ └────────┘ └─────────┘ └─────────┘

Govern function categories: Organizational Context (GV.OC), Risk Management Strategy (GV.RM), Roles/Responsibilities/Authorities (GV.RR), Policy (GV.PO), Oversight (GV.OV), Cybersecurity Supply Chain Risk Management (GV.SC).

CSF 2.0 key changes: Universal applicability (not just critical infrastructure), Govern function elevates cybersecurity to enterprise risk, improved supply chain risk management, enhanced implementation guidance.

9.2 NIST Risk Management Framework (RMF)

Seven-step process (SP 800-37 Rev 2):

         ┌──────────┐
         │ PREPARE  │ ← Organization + system-level preparation
         └─────┬────┘
               │
         ┌─────▼────┐
         │CATEGORIZE│ ← FIPS 199 impact levels (Low/Moderate/High)
         └─────┬────┘
               │
         ┌─────▼────┐
         │ SELECT   │ ← SP 800-53 control baselines + tailoring
         └─────┬────┘
               │
         ┌─────▼────┐
         │IMPLEMENT │ ← Deploy controls, update SSP
         └─────┬────┘
               │
         ┌─────▼────┐
         │ ASSESS   │ ← SP 800-53A assessment procedures
         └─────┬────┘
               │
         ┌─────▼────┐
         │AUTHORIZE │ ← AO risk decision (ATO, ATU, or DATO)
         └─────┬────┘
               │
         ┌─────▼────┐
         │ MONITOR  │ ← Continuous monitoring, ongoing authorization
         └──────────┘

9.3 ISO 27001:2022

ISMS (Information Security Management System) standard. Key clauses:

Clause  Title                        Content
4       Context of the Organization  Internal/external issues, interested parties, ISMS scope
5       Leadership                   Management commitment, policy, roles
6       Planning                     Risk assessment, risk treatment, objectives
7       Support                      Resources, competence, awareness, communication, documentation
8       Operation                    Risk assessment execution, risk treatment implementation
9       Performance Evaluation       Monitoring, internal audit, management review
10      Improvement                  Nonconformities, corrective actions, continual improvement

Annex A: 93 controls organized in 4 themes (2022 revision):

  • Organizational controls (37)
  • People controls (8)
  • Physical controls (14)
  • Technological controls (34)

Certification: Over 70,000 valid certificates in 150+ countries. Requires Stage 1 (documentation review) and Stage 2 (implementation audit) by accredited certification body. Surveillance audits annually, recertification every 3 years.

9.4 CIS Critical Security Controls v8.1

18 controls organized by Implementation Group:

 #  Control                                                  IG1  IG2  IG3
 1  Inventory and Control of Enterprise Assets                X    X    X
 2  Inventory and Control of Software Assets                  X    X    X
 3  Data Protection                                           X    X    X
 4  Secure Configuration of Enterprise Assets and Software    X    X    X
 5  Account Management                                        X    X    X
 6  Access Control Management                                 X    X    X
 7  Continuous Vulnerability Management                       X    X    X
 8  Audit Log Management                                      X    X    X
 9  Email and Web Browser Protections                         X    X    X
10  Malware Defenses                                          X    X    X
11  Data Recovery                                             X    X    X
12  Network Infrastructure Management                         X    X    X
13  Network Monitoring and Defense                                 X    X
14  Security Awareness and Skills Training                    X    X    X
15  Service Provider Management                                    X    X
16  Application Software Security                                  X    X
17  Incident Response Management                              X    X    X
18  Penetration Testing                                                 X

IG1 (56 safeguards): Basic cyber hygiene — minimum standard for all organizations. IG2: Builds on IG1 for organizations with moderate risk and IT complexity. IG3: All safeguards — for organizations facing sophisticated adversaries.

v8.1 updates (June 2024): Aligned with NIST CSF 2.0, added "Governance" security function, revised asset classifications, enhanced glossary.

9.5 COBIT 2019

IT Governance and Management framework from ISACA:

  • 40 governance and management objectives
  • Design factors for tailored implementation
  • Performance management through capability levels
  • Integration with NIST CSF, SOX, and other frameworks

Key differentiator: Bridges business goals to IT goals to governance objectives. Particularly strong for organizations needing IT governance alignment with business strategy and SOX compliance.

9.6 CISA Cyber Essentials

Six essential elements for foundational security (especially for small organizations):

  1. Yourself (Leadership): Drive cybersecurity strategy and investment
  2. Your Staff: Culture of security awareness
  3. Your Systems: Asset management, patching, secure configuration
  4. Your Surroundings: Access control, MFA, least privilege
  5. Your Data: Inventory, monitoring, backups, encryption
  6. Your Crisis Response: Incident response plans, BC/DR

9.7 Framework Mapping

Requirement          NIST CSF 2.0   ISO 27001                  CIS Controls          NIST 800-53
Asset Management     ID.AM          A.5.9-5.13                 CIS 1, 2              CM-8
Access Control       PR.AA          A.5.15-5.18, A.8.2-8.5     CIS 5, 6              AC-1 through AC-25
Vulnerability Mgmt   ID.RA          A.8.8                      CIS 7                 RA-5, SI-2
Security Monitoring  DE.CM          A.8.15-8.16                CIS 8, 13             AU-6, SI-4
Incident Response    RS.MA          A.5.24-5.28                CIS 17                IR-1 through IR-10
Risk Assessment      GV.RM, ID.RA   Clause 6.1, 8.2            N/A (cross-cutting)   RA-1 through RA-9
Training             PR.AT          A.6.3                      CIS 14                AT-1 through AT-4
Data Protection      PR.DS          A.5.33-5.36, A.8.10-8.12   CIS 3                 SC-8, SC-28, MP-*
Supply Chain         GV.SC          A.5.19-5.23                CIS 15                SR-1 through SR-12
Governance           GV.*           Clause 5, 9, 10            N/A                   PM-*, PL-*
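A mapping like this is more useful queryable than as prose. A small lookup-table sketch covering two rows of the mapping (extend with the remaining rows as needed):

```python
# Subset of the framework mapping above, as a lookup table
FRAMEWORK_MAP = {
    "Incident Response": {
        "NIST CSF 2.0": "RS.MA",
        "ISO 27001": "A.5.24-5.28",
        "CIS Controls": "CIS 17",
        "NIST 800-53": "IR-1 through IR-10",
    },
    "Supply Chain": {
        "NIST CSF 2.0": "GV.SC",
        "ISO 27001": "A.5.19-5.23",
        "CIS Controls": "CIS 15",
        "NIST 800-53": "SR-1 through SR-12",
    },
}

csf_id = FRAMEWORK_MAP["Supply Chain"]["NIST CSF 2.0"]   # 'GV.SC'
```

A table in this shape lets one control assessment feed evidence into several audits at once.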

Appendix A: Security Program Maturity Assessment Checklist

FOUNDATIONAL (Level 1-2)
  □ Written information security policy
  □ Asset inventory (hardware + software)
  □ Risk assessment completed in last 12 months
  □ Vulnerability scanning operational
  □ MFA on all privileged accounts
  □ Security awareness training program
  □ Incident response plan documented
  □ Backup and recovery tested
  □ Roles and responsibilities defined

ESTABLISHED (Level 3)
  □ Formal risk management program with defined methodology
  □ Security architecture documented
  □ Continuous vulnerability management with SLAs
  □ SIEM operational with custom detection rules
  □ Third-party risk management program
  □ Tabletop exercises conducted quarterly
  □ Security metrics reported to leadership monthly
  □ Formal change management process
  □ Data classification scheme implemented

ADVANCED (Level 4)
  □ Quantitative risk assessment (FAIR) for top risks
  □ Detection engineering program (detection-as-code)
  □ Threat hunting program (hypothesis-driven)
  □ Red team / purple team exercises annually
  □ DevSecOps integrated (SAST, DAST, SCA in CI/CD)
  □ Board-level security reporting with KRIs
  □ Cyber insurance program aligned to risk quantification
  □ Automated compliance monitoring
  □ Formal threat intelligence program

OPTIMIZING (Level 5)
  □ Continuous risk quantification driving investment decisions
  □ ATT&CK-mapped detection coverage with gap analysis
  □ AI/ML-assisted security operations
  □ Chaos engineering / adversary simulation
  □ Security champion program across engineering
  □ Contribution to industry threat sharing (ISACs)
  □ Formal security research / bug bounty program
  □ Predictive risk analytics
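One way to roll the checklist up into a level: score each tier and award the highest tier reached without breaking the chain. The 80% threshold and the self-assessment results below are hypothetical, not part of the checklist itself.

```python
def maturity_level(results):
    """Highest maturity level reached, assuming tiers must be satisfied in order.
    `results` maps level -> (items checked, items total)."""
    achieved = 0
    for level in sorted(results):
        done, total = results[level]
        if total and done / total >= 0.8:   # hypothetical 80% completion bar
            achieved = level
        else:
            break                           # higher tiers don't count without this one
    return achieved

# Hypothetical self-assessment: level -> (checked, total)
level = maturity_level({2: (9, 9), 3: (8, 9), 4: (4, 9), 5: (1, 8)})   # 3
```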

Appendix B: Quick Reference — Common Compliance Frameworks by Industry

Industry                 Primary Frameworks            Key Regulations
Financial Services       NIST CSF, CIS, FFIEC, COBIT   SOX, GLBA, PCI DSS, DORA (EU), NYDFS 23 NYCRR 500
Healthcare               NIST CSF, HITRUST, CIS        HIPAA, HITECH, FDA (medical devices)
Government (US)          NIST RMF, FISMA, FedRAMP      FISMA, CMMC (defense), EO 14028
Critical Infrastructure  NIST CSF, C2M2, NERC CIP      TSA Security Directives, CISA BODs
Technology / SaaS        SOC 2, ISO 27001, CIS         GDPR, CCPA, state privacy laws
Retail                   PCI DSS, NIST CSF, CIS        PCI DSS, CCPA, state breach notification
EU (all sectors)         ISO 27001, ENISA guidelines   GDPR, NIS2, Cyber Resilience Act, AI Act

Last updated: 2026-03-14
Source material: NIST CSF 2.0, NIST RMF (SP 800-37), ISO 27001:2022, ISACA COBIT 2019, FAIR Institute, FIRST CVSS v4.0 Specification, OWASP Cheat Sheet Series, CISA Cyber Essentials, CIS Controls v8.1, SANS/CIS, CVE Program (cve.org), magoo/risk-measurement

