Cybersecurity News & Analysis • © 2026 • blacktemple.net

Penetration Testing Methodology, Reporting, and Engagement Management

CIPHER training module — deep dive into professional pentest methodology, report writing, severity rating systems, and engagement lifecycle management.

Sources: PTES, OWASP WSTG, MITRE ATT&CK, CVSS v3.1/v4.0, OWASP Risk Rating, DREAD, Cure53/NCC Group/Bishop Fox public reports, SysReptor, PwnDoc, Serpico.


Table of Contents

  1. PTES Methodology Phases
  2. OWASP WSTG Testing Structure
  3. Report Structure Templates
  4. Finding Severity Rating Systems
  5. Executive Summary Writing
  6. Remediation Priority Frameworks
  7. Rules of Engagement Templates
  8. Scoping Checklists
  9. MITRE ATT&CK Mapping for Reports
  10. Vulnerability Disclosure Process
  11. Reporting Tools and Automation
  12. Quality Assurance Checklist

1. PTES Methodology Phases

The Penetration Testing Execution Standard defines seven phases. Each phase produces deliverables that feed into the next. The methodology is iterative -- findings in later phases may trigger returns to earlier ones.

Phase 1: Pre-Engagement Interactions

Objective: Establish scope, authorization, rules, and logistics before any testing begins.

Key Activities:

  • Define scope boundaries (IP ranges, domains, applications, physical locations)
  • Establish rules of engagement (testing windows, off-limits systems, escalation procedures)
  • Obtain written authorization (signed statement of work, penetration testing agreement)
  • Define communication channels and emergency contacts
  • Agree on testing methodology and tools
  • Set timeline and milestones
  • Address legal and compliance requirements
  • Define acceptable social engineering boundaries (if in scope)
  • Establish data handling and evidence retention policies

Deliverables:

  • Signed Rules of Engagement document
  • Scope definition document
  • Emergency contact list
  • Testing timeline / project plan
  • NDA and liability agreements

Phase 2: Intelligence Gathering

Objective: Collect information about the target to identify attack surface and potential entry points.

Key Activities:

  • Passive reconnaissance: OSINT, DNS enumeration, WHOIS, certificate transparency logs, social media profiling, job postings, public documents, GitHub/code repository searches
  • Active reconnaissance: Port scanning, service enumeration, banner grabbing, web spidering, directory brute-forcing
  • Technology profiling: Web frameworks, CMS identification, WAF detection, CDN identification
  • Organizational intelligence: Employee names, email formats, org charts, vendor relationships
  • Infrastructure mapping: IP ranges, ASN ownership, cloud provider identification, subdomain enumeration

Deliverables:

  • Target inventory (hosts, services, applications)
  • Technology stack profile
  • Network topology diagram
  • Personnel / email list
  • Attack surface map
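Certificate transparency logs are among the highest-yield passive sources listed above. As a minimal sketch, the helper below deduplicates hostnames from crt.sh-style JSON records (crt.sh returns a list of objects whose `name_value` field can contain several newline-separated names); the sample records and domain are illustrative, not real engagement data.

```python
def extract_subdomains(crtsh_entries, root_domain):
    """Collect unique hostnames for root_domain from crt.sh JSON entries."""
    hosts = set()
    for entry in crtsh_entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard prefixes
            if name.endswith(root_domain):
                hosts.add(name)
    return sorted(hosts)

# Sample records in the shape crt.sh returns with output=json:
sample = [
    {"name_value": "www.example.com\napp.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "mail.example.com"},
]
print(extract_subdomains(sample, "example.com"))
# -> ['app.example.com', 'example.com', 'mail.example.com', 'www.example.com']
```

The deduplicated list feeds directly into the target inventory and attack surface map deliverables.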

Phase 3: Threat Modeling

Objective: Identify high-value targets and viable attack paths based on gathered intelligence.

Key Activities:

  • Identify business-critical assets and data stores
  • Map data flows and trust boundaries
  • Apply STRIDE analysis to each component
  • Identify threat actors relevant to the target (nation-state, criminal, insider, opportunistic)
  • Prioritize attack paths by likelihood and impact
  • Create attack trees for high-value targets
  • Cross-reference with MITRE ATT&CK techniques

Deliverables:

  • Threat model document with data flow diagrams
  • Prioritized attack path list
  • Asset criticality ratings
  • STRIDE analysis table

Phase 4: Vulnerability Analysis

Objective: Identify, validate, and classify vulnerabilities across the attack surface.

Key Activities:

  • Automated vulnerability scanning (Nessus, OpenVAS, Qualys)
  • Manual vulnerability verification (eliminate false positives)
  • Web application testing (OWASP WSTG methodology)
  • Configuration review (CIS Benchmarks, vendor hardening guides)
  • Source code review (if white-box)
  • Protocol analysis and traffic inspection
  • Credential testing (default credentials, password spraying)
  • SSL/TLS assessment (certificate validity, cipher suites, protocol versions)

Deliverables:

  • Validated vulnerability inventory
  • False positive exclusion list
  • Vulnerability-to-exploit mapping
  • Risk-ranked vulnerability list

Phase 5: Exploitation

Objective: Demonstrate real-world impact by exploiting validated vulnerabilities to gain unauthorized access.

Key Activities:

  • Exploit development or adaptation for identified vulnerabilities
  • Controlled exploitation with minimal disruption
  • Credential harvesting and reuse
  • Privilege escalation (horizontal and vertical)
  • Authentication bypass
  • Injection attacks (SQL, command, LDAP, template)
  • Client-side attacks (XSS, CSRF, clickjacking)
  • Network-level attacks (LLMNR poisoning, SMB relay, ARP spoofing)
  • Social engineering execution (if in scope)
  • Document all exploitation steps with timestamps and evidence

Deliverables:

  • Exploitation log with timestamps
  • Screenshots and proof-of-concept artifacts
  • Access level achieved per system
  • Credential dump inventory (redacted appropriately)
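The exploitation log deliverable is easiest to produce when evidence is captured as structured data during testing rather than reconstructed afterwards. A minimal sketch (the class and field names are the author's own, not from any standard tool):

```python
from datetime import datetime, timezone

class EvidenceLog:
    """Append-only exploitation log with a UTC timestamp per action."""

    def __init__(self):
        self.entries = []

    def record(self, target, action, result, artifact=None):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "target": target,
            "action": action,
            "result": result,
            "artifact": artifact,  # e.g. a screenshot filename
        })

    def to_report_lines(self):
        """Flatten entries into the timestamped lines used in the report."""
        return [f"{e['timestamp']}  {e['target']}  {e['action']} -> {e['result']}"
                for e in self.entries]

log = EvidenceLog()
log.record("10.0.0.5", "SQLi PoC on /api/v2/users", "db access confirmed",
           artifact="sqli-evidence-001.png")
print("\n".join(log.to_report_lines()))
```

Keeping the artifact filename alongside each entry makes cross-referencing screenshots during report writing and cleanup verification much faster.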

Phase 6: Post-Exploitation

Objective: Determine the value of compromised systems and maintain access to demonstrate persistence and lateral movement.

Key Activities:

  • Determine value of compromised data (PII, credentials, financial, IP)
  • Lateral movement to adjacent systems
  • Privilege escalation to domain/root level
  • Persistence mechanism deployment (demonstrate, then remove)
  • Data exfiltration simulation
  • Pivoting through network segments
  • Active Directory enumeration and attack (Kerberoasting, DCSync, Golden Ticket)
  • Cloud environment enumeration (IAM, storage, secrets)
  • Assess blast radius from compromised position
  • Clean up: remove all tools, backdoors, and artifacts

Deliverables:

  • Network pivot map
  • Data access inventory (what was reachable)
  • Persistence mechanisms documented
  • Lateral movement timeline
  • Cleanup verification log

Phase 7: Reporting

Objective: Produce a professional deliverable that communicates findings, impact, and remediation to both technical and executive audiences.

Key Activities:

  • Compile all findings with evidence
  • Assign severity ratings (CVSS, DREAD, or custom)
  • Map findings to compliance frameworks (PCI DSS, HIPAA, SOC 2)
  • Map findings to MITRE ATT&CK techniques
  • Write executive summary for non-technical audience
  • Write technical findings with reproduction steps
  • Develop remediation recommendations with priority ranking
  • Create attack narrative showing kill chain progression
  • Produce appendices (tool output, methodology description)
  • Internal QA review before delivery
  • Present findings in debrief meeting

Deliverables:

  • Final penetration testing report (see Section 3 for template)
  • Executive presentation deck
  • Finding tracker / remediation roadmap
  • Raw evidence archive (under NDA retention terms)

2. OWASP WSTG Testing Structure

The OWASP Web Security Testing Guide organizes testing into 12 categories comprising over 90 specific tests. Each test has a unique ID for traceability in reports.

Testing Categories Summary

| ID        | Category                   | Test Count | Focus                                |
|-----------|----------------------------|------------|--------------------------------------|
| WSTG-INFO | Information Gathering      | 10         | Recon, fingerprinting, entry points  |
| WSTG-CONF | Configuration & Deployment | 11         | Infrastructure hardening, misconfig  |
| WSTG-IDNT | Identity Management        | 5          | Roles, registration, provisioning    |
| WSTG-ATHN | Authentication             | 10         | Login, lockout, credential transport |
| WSTG-ATHZ | Authorization              | 4          | Access control, privilege escalation |
| WSTG-SESS | Session Management         | 9          | Cookies, fixation, CSRF, timeout     |
| WSTG-INPV | Input Validation           | 19         | XSS, SQLi, injection, SSRF, SSTI     |
| WSTG-ERRH | Error Handling             | 2          | Information leakage via errors       |
| WSTG-CRYP | Cryptography               | 4          | TLS, padding oracle, weak crypto     |
| WSTG-BUSL | Business Logic             | 9          | Workflow bypass, abuse cases         |
| WSTG-CLNT | Client-Side                | 13         | DOM XSS, clickjacking, WebSockets    |
| WSTG-APIT | API Testing                | 1+         | GraphQL, REST API security           |

Using WSTG IDs in Reports

Reference specific tests in findings for traceability:

[FINDING-003]
WSTG Reference : WSTG-INPV-001 (Reflected Cross-Site Scripting)
Description    : The search parameter reflects user input without encoding...

This allows clients to cross-reference your findings with the OWASP standard and verify coverage completeness.
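That coverage check can be automated by tallying executed test IDs per category against the totals in the table above. A sketch (the executed list is a hypothetical example):

```python
from collections import defaultdict

# Test counts per category, mirroring the table above (APIT treated as 1).
WSTG_TOTALS = {"WSTG-INFO": 10, "WSTG-CONF": 11, "WSTG-IDNT": 5,
               "WSTG-ATHN": 10, "WSTG-ATHZ": 4, "WSTG-SESS": 9,
               "WSTG-INPV": 19, "WSTG-ERRH": 2, "WSTG-CRYP": 4,
               "WSTG-BUSL": 9, "WSTG-CLNT": 13, "WSTG-APIT": 1}

def coverage(executed_ids):
    """Return {category: (tests executed, total)} from IDs like WSTG-INPV-001."""
    done = defaultdict(set)
    for test_id in executed_ids:
        category = test_id.rsplit("-", 1)[0]  # 'WSTG-INPV-001' -> 'WSTG-INPV'
        done[category].add(test_id)
    return {cat: (len(done.get(cat, ())), total)
            for cat, total in WSTG_TOTALS.items()}

executed = ["WSTG-INPV-001", "WSTG-INPV-005", "WSTG-ATHN-001"]
print(coverage(executed)["WSTG-INPV"])  # -> (2, 19)
```

The resulting matrix doubles as the WSTG coverage appendix (Template A, Appendix E).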


3. Report Structure Templates

Template A: Full Penetration Test Report

COVER PAGE
  - Report title
  - Client name and logo
  - Assessment type (External/Internal/Web App/API/Mobile/Social Engineering)
  - Assessment dates
  - Report version and date
  - Classification (Confidential)
  - Assessor firm name and logo

DOCUMENT CONTROL
  - Version history table
  - Distribution list
  - Document handling instructions

TABLE OF CONTENTS

1. EXECUTIVE SUMMARY (1-2 pages)
   1.1 Engagement Overview
   1.2 Scope Summary
   1.3 Key Findings Summary (risk heatmap or severity bar chart)
   1.4 Strategic Risk Assessment
   1.5 Top Recommendations (3-5 highest priority)
   1.6 Overall Security Posture Rating

2. ENGAGEMENT OVERVIEW
   2.1 Objectives
   2.2 Scope Definition
       - In-scope targets (IP ranges, URLs, applications)
       - Out-of-scope items
       - Testing restrictions
   2.3 Methodology (reference PTES/OWASP)
   2.4 Testing Timeline
   2.5 Team Composition
   2.6 Tools Used
   2.7 Limitations and Constraints

3. FINDINGS SUMMARY
   3.1 Severity Distribution Table
       | Severity | Count |
       |----------|-------|
       | Critical |   2   |
       | High     |   5   |
       | Medium   |   8   |
       | Low      |   4   |
       | Info     |   3   |

   3.2 Findings Summary Table
       | ID | Title | Severity | CVSS | Location | Status |
   3.3 Risk Heat Map (Likelihood vs Impact matrix)
   3.4 ATT&CK Coverage Map (if applicable)

4. TECHNICAL FINDINGS (per finding)
   4.1 [FINDING-001] Title
       - Severity / CVSS Score / CVSS Vector
       - CWE Classification
       - MITRE ATT&CK Mapping
       - OWASP WSTG Reference (for web findings)
       - Affected Systems / URLs / Endpoints
       - Description (what the vulnerability is, why it matters)
       - Proof of Concept (step-by-step reproduction)
         - HTTP requests/responses
         - Screenshots
         - Code snippets
       - Business Impact
       - Remediation (specific, actionable fix)
       - Verification Steps (how to confirm the fix works)
       - References (CVE, CWE, vendor advisory, OWASP page)

5. ATTACK NARRATIVE (optional, high-value)
   5.1 Kill chain walkthrough showing how findings chain together
   5.2 Timeline of compromise
   5.3 Lateral movement diagram
   5.4 Data access achieved

6. REMEDIATION ROADMAP
   6.1 Immediate Actions (0-30 days) — Critical/High findings
   6.2 Short-Term (30-90 days) — Medium findings + architecture changes
   6.3 Long-Term (90-180 days) — Strategic improvements
   6.4 Remediation Priority Matrix

7. APPENDICES
   A. Detailed Methodology Description
   B. Tool List and Versions
   C. Severity Rating Definitions
   D. Sanitized Tool Output
   E. Testing Checklist (WSTG coverage matrix)
   F. Glossary of Terms
   G. Rules of Engagement (signed copy reference)

Template B: Finding Detail Block

+------------------------------------------------------------------+
| [FINDING-001] SQL Injection in User Search Endpoint              |
+------------------------------------------------------------------+
| Severity   : Critical                                            |
| CVSS 3.1   : 9.8 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)          |
| CWE        : CWE-89 (SQL Injection)                              |
| ATT&CK     : T1190 (Exploit Public-Facing Application)           |
| WSTG       : WSTG-INPV-005                                       |
| Location   : https://app.example.com/api/v2/users?search=        |
+------------------------------------------------------------------+
|                                                                  |
| DESCRIPTION                                                      |
| The search parameter in the user lookup API endpoint is          |
| concatenated directly into a SQL query without parameterization. |
| An unauthenticated attacker can inject arbitrary SQL to extract  |
| data, modify records, or execute OS commands via xp_cmdshell.    |
|                                                                  |
| PROOF OF CONCEPT                                                 |
| Request:                                                         |
|   GET /api/v2/users?search=' UNION SELECT                        |
|     username,password,null FROM admin_users-- HTTP/1.1           |
|   Host: app.example.com                                          |
|                                                                  |
| Response (200 OK):                                               |
|   {"users":[{"name":"admin","email":"$2b$12$hashed..."}]}        |
|                                                                  |
| [Screenshot: sqli-evidence-001.png]                              |
|                                                                  |
| BUSINESS IMPACT                                                  |
| Full database compromise. Attacker can extract all user          |
| credentials, PII, and financial records. Potential for           |
| GDPR Art. 33/34 notification obligations. Reputational damage.   |
|                                                                  |
| REMEDIATION                                                      |
| 1. Use parameterized queries / prepared statements               |
| 2. Implement input validation with allowlist approach            |
| 3. Apply least-privilege database accounts                       |
| 4. Deploy WAF rule as interim mitigation                         |
|                                                                  |
| VERIFICATION                                                     |
| Re-submit the PoC payload. Confirm the application returns       |
| an error or empty result set instead of injected data.           |
|                                                                  |
| REFERENCES                                                       |
| - https://cwe.mitre.org/data/definitions/89.html                 |
| - https://cheatsheetseries.owasp.org/cheatsheets/                |
|   Query_Parameterization_Cheat_Sheet.html                        |
+------------------------------------------------------------------+
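When findings are stored as structured data, the metadata header of a Template B block can be rendered automatically, which keeps report formatting consistent across assessors. A sketch (the dict keys and labels are illustrative, not from any standard schema):

```python
def render_finding_header(f, width=68):
    """Render the boxed metadata header of a finding (Template B style)."""
    border = "+" + "-" * (width - 2) + "+"
    def row(text):
        return "| " + text.ljust(width - 4) + " |"  # pad to a fixed box width
    lines = [border, row(f"[{f['id']}] {f['title']}"), border]
    for label in ("severity", "cvss", "cwe", "attack", "wstg", "location"):
        lines.append(row(f"{label.upper():<10}: {f[label]}"))
    lines.append(border)
    return "\n".join(lines)

finding = {"id": "FINDING-001", "title": "SQL Injection in User Search Endpoint",
           "severity": "Critical", "cvss": "9.8", "cwe": "CWE-89",
           "attack": "T1190", "wstg": "WSTG-INPV-005",
           "location": "https://app.example.com/api/v2/users?search="}
print(render_finding_header(finding))
```

Rendering from data also makes it trivial to regenerate the whole findings section after severity changes during QA.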

Template C: Executive Presentation Slide Structure

Slide 1: Title + Assessment Overview
Slide 2: Scope Summary (what was tested)
Slide 3: Methodology (1-2 sentences, high level)
Slide 4: Key Metrics (finding counts by severity, risk heatmap)
Slide 5: Top 3 Critical/High Findings (business impact language)
Slide 6: Attack Narrative (simplified kill chain diagram)
Slide 7: What Went Well (positive security observations)
Slide 8: Strategic Recommendations (3-5 bullet points)
Slide 9: Remediation Roadmap (timeline view)
Slide 10: Next Steps and Q&A

4. Finding Severity Rating Systems

4.1 CVSS v3.1 (Industry Standard)

Base Metrics — Exploitability:

| Metric                   | Values                                | Description                         |
|--------------------------|---------------------------------------|-------------------------------------|
| Attack Vector (AV)       | Network / Adjacent / Local / Physical | How the attacker reaches the target |
| Attack Complexity (AC)   | Low / High                            | Conditions beyond attacker control  |
| Privileges Required (PR) | None / Low / High                     | Access needed before exploitation   |
| User Interaction (UI)    | None / Required                       | Does a victim need to act?          |

Base Metrics — Scope:

| Value         | Meaning                                    |
|---------------|--------------------------------------------|
| Unchanged (U) | Impact limited to vulnerable component     |
| Changed (C)   | Impact extends beyond vulnerable component |

Base Metrics — Impact:

| Metric              | Values            | Description            |
|---------------------|-------------------|------------------------|
| Confidentiality (C) | None / Low / High | Information disclosure |
| Integrity (I)       | None / Low / High | Data modification      |
| Availability (A)    | None / Low / High | Service disruption     |

Temporal Metrics (optional):

  • Exploit Code Maturity (E): Not Defined / Unproven / PoC / Functional / High
  • Remediation Level (RL): Not Defined / Official Fix / Temporary Fix / Workaround / Unavailable
  • Report Confidence (RC): Not Defined / Unknown / Reasonable / Confirmed

Environmental Metrics (optional):

  • Modified Base Metrics (override any base value for target environment)
  • Security Requirements (CR/IR/AR): Low / Medium / High

Severity Ratings:

| Rating   | Score Range |
|----------|-------------|
| None     | 0.0         |
| Low      | 0.1 - 3.9   |
| Medium   | 4.0 - 6.9   |
| High     | 7.0 - 8.9   |
| Critical | 9.0 - 10.0  |

Vector String Format: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H (score: 9.8)
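The published v3.1 base score equations are straightforward to apply in code. The sketch below uses the weight tables and rounding rule from the FIRST specification and reproduces the 9.8 score for the vector above:

```python
import math

# CVSS v3.1 base metric weights (values from the FIRST specification).
W = {"AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
     "AC": {"L": 0.77, "H": 0.44},
     "PR": {"N": 0.85, "L": 0.62, "H": 0.27},   # scope unchanged
     "PR_C": {"N": 0.85, "L": 0.68, "H": 0.5},  # scope changed
     "UI": {"N": 0.85, "R": 0.62},
     "CIA": {"N": 0.0, "L": 0.22, "H": 0.56}}

def roundup(x):
    """Spec-defined rounding: smallest number to 1 decimal place >= x."""
    i = int(round(x * 100000))
    if i % 10000 == 0:
        return i / 100000.0
    return (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, s, c, i, a):
    iss = 1 - (1 - W["CIA"][c]) * (1 - W["CIA"][i]) * (1 - W["CIA"][a])
    if s == "U":
        impact = 6.42 * iss
        pr_w = W["PR"][pr]
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        pr_w = W["PR_C"][pr]
    exploitability = 8.22 * W["AV"][av] * W["AC"][ac] * pr_w * W["UI"][ui]
    if impact <= 0:
        return 0.0
    if s == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # -> 9.8
```

Computing scores from the vector components (rather than eyeballing them) avoids inconsistencies between the vector string and the numeric score printed in the report.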

4.2 CVSS v4.0 (Current Standard)

Key changes from v3.1:

  • Attack Requirements (AT): New metric capturing prerequisite conditions distinct from Attack Complexity
  • Supplemental Metrics: Safety, Automatable, Provider Urgency, Recovery, Value Density, Vulnerability Response Effort (contextual, do not affect score)
  • New nomenclature: CVSS-B (base only), CVSS-BT (base + threat), CVSS-BE (base + environmental), CVSS-BTE (all)
  • Scoring method: MacroVector-based lookup with interpolation (replaces v3.1 formula)
  • Separate impact assessment for vulnerable system vs. subsequent systems

4.3 DREAD Model

Originally developed at Microsoft. Useful for quick risk triage and threat modeling.

| Factor          | Scale | Description                               |
|-----------------|-------|-------------------------------------------|
| Damage          | 0-10  | How severe is the damage if exploited?    |
| Reproducibility | 0-10  | How easy to reproduce the attack?         |
| Exploitability  | 0-10  | How easy to launch the attack?            |
| Affected Users  | 0-10  | How many users are impacted?              |
| Discoverability | 0-10  | How easy to discover the vulnerability?   |

Score: Average of all five factors. Range 0-10.

| Rating | Score Range |
|--------|-------------|
| Low    | 0 - 3.9     |
| Medium | 4.0 - 6.9   |
| High   | 7.0 - 10.0  |

Criticism: Discoverability is controversial -- some organizations set it to 10 for all findings (assume attacker will find it). Lacks the precision of CVSS but useful for rapid prioritization and non-technical stakeholder discussions.
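DREAD scoring is simple enough to automate for triage. The sketch below averages the five factors and defaults Discoverability to 10, reflecting the convention noted above; the example scores are invented for illustration:

```python
def dread_score(damage, reproducibility, exploitability, affected_users,
                discoverability=10):
    """Average the five DREAD factors (0-10 each) and map to a rating.

    Discoverability defaults to 10, per the common convention of assuming
    an attacker will eventually find the issue.
    """
    factors = [damage, reproducibility, exploitability, affected_users,
               discoverability]
    score = sum(factors) / len(factors)
    if score < 4:
        rating = "Low"
    elif score < 7:
        rating = "Medium"
    else:
        rating = "High"
    return round(score, 1), rating

print(dread_score(9, 8, 8, 10))  # -> (9.0, 'High')
```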

4.4 OWASP Risk Rating Methodology

Risk = Likelihood x Impact

Likelihood Factors (average scores 0-9):

Threat Agent Factors:

| Factor      | Low (0-3)                    | Medium (4-6)          | High (7-9)                  |
|-------------|------------------------------|-----------------------|-----------------------------|
| Skill Level | No technical skills          | Some technical skills | Security penetration skills |
| Motive      | Low/no reward                | Possible reward       | High reward                 |
| Opportunity | Full access/resources needed | Some access needed    | No access/resources needed  |
| Size        | Developers/sysadmins         | Authenticated users   | Anonymous internet users    |

Vulnerability Factors:

| Factor              | Low (0-3)                 | Medium (4-6)     | High (7-9)                |
|---------------------|---------------------------|------------------|---------------------------|
| Ease of Discovery   | Practically impossible    | Requires tooling | Automated tools available |
| Ease of Exploit     | Theoretical               | Available tools  | Automated tools available |
| Awareness           | Unknown                   | Hidden           | Public knowledge          |
| Intrusion Detection | Active detection in place | Logged           | Not logged                |

Impact Factors (average scores 0-9):

Technical Impact:

| Factor          | Low (0-3)                  | Medium (4-6)                 | High (7-9)                   |
|-----------------|----------------------------|------------------------------|------------------------------|
| Confidentiality | Minimal non-sensitive data | Extensive non-sensitive data | All data disclosed           |
| Integrity       | Minimal slightly corrupt   | Extensive seriously corrupt  | All data totally corrupt     |
| Availability    | Minimal secondary services | Extensive primary services   | All services completely lost |
| Accountability  | Fully traceable            | Possibly traceable           | Completely anonymous         |

Business Impact:

| Factor            | Low (1-3)             | Medium (4-6)                        | High (7-9)             |
|-------------------|-----------------------|-------------------------------------|------------------------|
| Financial Damage  | Less than cost to fix | Significant effect on annual profit | Bankruptcy             |
| Reputation Damage | Minimal damage        | Loss of major accounts              | Brand damage           |
| Non-Compliance    | Minor violation       | Clear violation                     | High profile violation |
| Privacy Violation | One individual        | Thousands of people                 | Millions of people     |

Risk Severity Matrix:

|                   | Impact HIGH | Impact MEDIUM | Impact LOW |
|-------------------|-------------|---------------|------------|
| Likelihood HIGH   | Critical    | High          | Medium     |
| Likelihood MEDIUM | High        | Medium        | Low        |
| Likelihood LOW    | Medium      | Low           | Low        |
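The factor averaging and severity lookup above can be sketched in a few lines; the sample factor scores passed in at the end are invented for illustration:

```python
def owasp_level(score):
    """Map a 0-9 factor average onto the OWASP LOW/MEDIUM/HIGH bands."""
    if score < 3:
        return "LOW"
    return "MEDIUM" if score < 6 else "HIGH"

# Severity matrix from the table above: (likelihood, impact) -> overall risk.
MATRIX = {("LOW", "LOW"): "Low",          ("LOW", "MEDIUM"): "Low",
          ("LOW", "HIGH"): "Medium",      ("MEDIUM", "LOW"): "Low",
          ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
          ("HIGH", "LOW"): "Medium",      ("HIGH", "MEDIUM"): "High",
          ("HIGH", "HIGH"): "Critical"}

def owasp_risk(likelihood_factors, impact_factors):
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return MATRIX[(owasp_level(likelihood), owasp_level(impact))]

# Eight likelihood factors (threat agent + vulnerability), four impact factors:
print(owasp_risk([7, 6, 9, 9, 8, 8, 9, 3], [8, 7, 6, 7]))  # -> 'Critical'
```

Recording the individual factor scores in the report appendix lets the client audit how each overall rating was derived.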

4.5 Custom Severity Rating (Recommended Hybrid)

For professional engagements, a custom severity definition in the report appendix provides clarity and avoids arguments about CVSS nuances.

CRITICAL (must fix immediately)
  Exploitable remotely with no authentication. Direct path to sensitive
  data compromise, full system takeover, or significant business disruption.
  Active exploitation in the wild or trivial to exploit.
  Examples: RCE, SQLi with data access, auth bypass to admin

HIGH (fix within 30 days)
  Exploitable with low complexity. Leads to significant data exposure,
  privilege escalation, or lateral movement capability. May require
  low-level authentication.
  Examples: IDOR to sensitive data, stored XSS in admin panel, SSRF to
  internal services

MEDIUM (fix within 90 days)
  Requires specific conditions or chaining with other vulnerabilities
  to achieve significant impact. May assist an attacker in further
  exploitation.
  Examples: Reflected XSS, verbose error messages with stack traces,
  missing rate limiting on auth, insecure direct object reference to
  non-sensitive data

LOW (fix within 180 days)
  Minimal direct security impact. May provide information useful to
  attackers or indicate defense-in-depth gaps.
  Examples: Missing security headers, server version disclosure,
  directory listing on non-sensitive paths

INFORMATIONAL (best practice recommendation)
  No direct vulnerability. Observations about security posture
  improvements, hardening opportunities, or architectural suggestions.
  Examples: HTTP to HTTPS redirect missing, cookie without SameSite
  attribute, outdated but non-vulnerable library version

5. Executive Summary Writing

The executive summary is the most-read section of any pentest report. It must communicate risk to non-technical decision-makers.

Principles

  1. Business language, not technical jargon. "An attacker could steal customer payment data" not "SQL injection in the /api/v2/payments endpoint allows UNION-based data exfiltration."
  2. Lead with risk, not findings. Start with what the organization stands to lose, not what you found.
  3. Be specific about impact. "An external attacker with no credentials could access 2.3 million customer records" is stronger than "critical vulnerabilities were found."
  4. Include positive observations. Note what the organization does well -- it builds trust and credibility.
  5. Provide strategic recommendations. 3-5 high-level actions, not a list of patches.
  6. Use visuals. A severity distribution chart and risk heatmap communicate faster than paragraphs.
  7. Keep it to 1-2 pages. Executives will not read more.

Executive Summary Template

EXECUTIVE SUMMARY

[Client Name] engaged [Assessor Firm] to conduct a [assessment type]
penetration test of [scope description] during [date range]. The
objective was to [identify vulnerabilities / assess security posture /
validate compliance controls / simulate adversarial attack].

OVERALL ASSESSMENT

[1-2 sentences summarizing overall security posture. Be direct.]

The assessment identified [X] vulnerabilities: [N] Critical, [N] High,
[N] Medium, [N] Low, and [N] Informational. The most significant
findings indicate [summary of worst-case business impact].

KEY RISK AREAS

1. [Business-impact statement for top finding]
2. [Business-impact statement for second finding]
3. [Business-impact statement for third finding]

POSITIVE OBSERVATIONS

- [Security control that worked well]
- [Good practice observed]
- [Effective defense encountered]

STRATEGIC RECOMMENDATIONS

1. [Highest-priority strategic action]
2. [Second-priority strategic action]
3. [Third-priority strategic action]

These recommendations are detailed in the Remediation Roadmap
(Section 6) with suggested timelines and responsible parties.

Common Executive Summary Mistakes

  • Too technical. If it contains CVE numbers, CVSS vectors, or code snippets, it is a technical finding, not an executive summary.
  • Too vague. "Several critical issues were found" communicates nothing. Quantify and describe impact.
  • No recommendations. Findings without actions are useless to executives.
  • Only negative. Failing to note positive observations makes the report adversarial instead of collaborative.
  • Too long. More than 2 pages means executives will skip it entirely.

6. Remediation Priority Frameworks

Framework 1: Risk-Based Priority Matrix

Combine severity with remediation effort to prioritize:

                     Low Effort            High Effort
                 +-------------------+-------------------+
   High Severity | PRIORITY 1        | PRIORITY 2        |
                 | Fix immediately   | Plan and resource |
                 +-------------------+-------------------+
   Low Severity  | PRIORITY 3        | PRIORITY 4        |
                 | Quick wins        | Accept or defer   |
                 +-------------------+-------------------+

Framework 2: Time-Based Remediation Windows

| Severity | Remediation Window                              | Verification                |
|----------|-------------------------------------------------|-----------------------------|
| Critical | 0-7 days (or immediate mitigation + 30-day fix) | Retest required             |
| High     | 30 days                                         | Retest required             |
| Medium   | 90 days                                         | Spot-check retest           |
| Low      | 180 days                                        | Self-attestation acceptable |
| Info     | Next development cycle                          | No retest needed            |
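These windows translate directly into fix-by dates for a finding tracker. A minimal sketch:

```python
from datetime import date, timedelta

# Remediation windows in days, per the table above (Info = next dev cycle).
WINDOWS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def remediation_due(severity, report_date):
    """Return the fix-by date for a finding, or None for Informational."""
    days = WINDOWS.get(severity)
    return report_date + timedelta(days=days) if days else None

print(remediation_due("High", date(2026, 1, 15)))  # -> 2026-02-14
```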

Framework 3: CISA KEV-Aligned (for known exploited vulns)

If a finding maps to a CISA Known Exploited Vulnerability (KEV), it automatically escalates to Critical / 0-7 day remediation regardless of CVSS score. KEV presence indicates active exploitation in the wild.
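This escalation rule is easy to encode in a finding tracker. In the sketch below a hardcoded set stands in for the KEV catalog, which in practice is fetched from CISA's published JSON feed and reduced to a set of CVE IDs:

```python
# Placeholder KEV entries; the real catalog is downloaded from CISA's feed.
kev_cves = {"CVE-2021-44228", "CVE-2023-4966"}

def effective_severity(finding):
    """Escalate any KEV-listed finding to Critical regardless of CVSS."""
    if finding.get("cve") in kev_cves:
        return "Critical"
    return finding["severity"]

f = {"id": "FINDING-004", "cve": "CVE-2021-44228",
     "severity": "High", "cvss": 7.5}
print(effective_severity(f))  # -> 'Critical'
```

Applying the check at report-generation time means a vulnerability added to the KEV catalog mid-engagement is still escalated correctly in the final deliverable.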

Framework 4: Kill-Chain Disruption Priority

Prioritize fixes that break the most attack chains:

  1. Initial access controls — fixes that prevent the attacker from getting in at all
  2. Privilege escalation blockers — fixes that contain an attacker at low privilege
  3. Lateral movement barriers — fixes that prevent spread across systems
  4. Data access controls — fixes that protect crown jewels even if attacker is inside
  5. Detection improvements — fixes that reduce mean time to detect

Remediation Roadmap Template

REMEDIATION ROADMAP

Phase 1: Immediate (0-7 days)
  [ ] FINDING-001: Deploy WAF rule to block SQLi on /api/v2/users
  [ ] FINDING-003: Rotate compromised admin credentials
  [ ] FINDING-007: Disable exposed debug endpoint

Phase 2: Short-Term (7-30 days)
  [ ] FINDING-001: Implement parameterized queries (permanent fix)
  [ ] FINDING-002: Enforce MFA on all admin accounts
  [ ] FINDING-004: Patch Apache Struts to version X.Y.Z
  [ ] FINDING-005: Implement RBAC on API endpoints

Phase 3: Medium-Term (30-90 days)
  [ ] FINDING-006: Deploy network segmentation between DMZ and internal
  [ ] FINDING-008: Implement centralized logging and SIEM integration
  [ ] FINDING-009: Conduct access control review

Phase 4: Strategic (90-180 days)
  [ ] Implement zero-trust network architecture
  [ ] Deploy EDR across all endpoints
  [ ] Establish vulnerability management program with SLAs
  [ ] Schedule follow-up penetration test

7. Rules of Engagement Templates

Template: Rules of Engagement Document

RULES OF ENGAGEMENT
Penetration Testing Engagement

1. PARTIES
   Assessor   : [Firm name, address, primary contact]
   Client     : [Organization name, address, primary contact]
   Signed Date: [Date]

2. ENGAGEMENT TYPE
   [ ] External Network Penetration Test
   [ ] Internal Network Penetration Test
   [ ] Web Application Penetration Test
   [ ] API Security Assessment
   [ ] Mobile Application Security Assessment
   [ ] Wireless Network Assessment
   [ ] Social Engineering Assessment
   [ ] Physical Security Assessment
   [ ] Red Team Engagement
   [ ] Purple Team Exercise
   [ ] Cloud Security Assessment

3. SCOPE DEFINITION

   3.1 In-Scope Assets
       IP Ranges    : [e.g., 203.0.113.0/24]
       Domains      : [e.g., *.example.com]
       Applications : [e.g., https://app.example.com]
       Environments : [e.g., Production / Staging]
       Cloud Accounts: [e.g., AWS Account ID 123456789012]

   3.2 Out-of-Scope Assets
       [List specific exclusions]
       - Third-party hosted services (unless authorized)
       - [IP range / domain / system]

   3.3 Out-of-Scope Activities
       [ ] Denial of Service attacks
       [ ] Physical access attempts
       [ ] Social engineering of [specific departments]
       [ ] Production data modification
       [ ] [Other restrictions]

4. TESTING PARAMETERS

   4.1 Testing Window
       Start Date : [Date]
       End Date   : [Date]
       Hours      : [e.g., Business hours only / 24x7 / After hours only]
       Timezone   : [e.g., UTC-5]
       Blackout Dates: [e.g., month-end processing periods]

   4.2 Testing Approach
       [ ] Black Box (no prior knowledge)
       [ ] Grey Box (limited credentials/documentation provided)
       [ ] White Box (full access to source code, architecture docs)

   4.3 Credentials Provided
       [List any test accounts with access levels]

   4.4 Source IP Addresses
       [Assessor testing IPs for whitelist/monitoring purposes]

5. COMMUNICATION PROTOCOLS

   5.1 Primary Contacts
       Client Technical POC : [Name, phone, email]
       Client Management POC: [Name, phone, email]
       Assessor Lead        : [Name, phone, email]
       Assessor Backup      : [Name, phone, email]

   5.2 Emergency Contact (24/7)
       [Name, phone] — for system outage or critical finding

   5.3 Communication Channels
       Regular Updates : [Email / Slack / Teams]
       Encrypted Comms : [PGP / Signal / encrypted email]
       Status Reports  : [Daily / Weekly / On-demand]

   5.4 Critical Finding Notification
       Critical and High severity findings will be reported to the
       client within [4/8/24] hours of confirmed discovery via
       [encrypted email / phone call].

6. DATA HANDLING

   6.1 Evidence Retention
       All testing evidence will be retained for [30/60/90] days
       after report delivery, then securely destroyed.

   6.2 Data Classification
       All engagement data is classified as [CONFIDENTIAL].
       No client data will be stored on assessor systems beyond
       the retention period.

   6.3 Sensitive Data
       If assessors encounter [PII/PHI/PCI data], they will
       [document existence without exfiltrating / notify client
       immediately / redact in all evidence].

7. AUTHORIZATION

   The undersigned authorizes [Assessor Firm] to conduct penetration
   testing activities as described in this document against the
   systems listed in Section 3.1.

   This authorization is valid from [Start Date] to [End Date].

   Client Authorizer:
   Name      : ________________________
   Title     : ________________________
   Signature : ________________________
   Date      : ________________________

   Assessor Lead:
   Name      : ________________________
   Title     : ________________________
   Signature : ________________________
   Date      : ________________________

8. LEGAL

   8.1 This engagement is conducted under the terms of
       [Master Services Agreement / Statement of Work] dated [Date].

   8.2 The assessor carries professional liability insurance
       with coverage of [amount].

   8.3 The assessor will comply with all applicable laws and
       regulations during testing activities.

   8.4 The client confirms they have authority to authorize
       testing of all in-scope systems.

   8.5 For cloud-hosted assets, the client confirms compliance
       with cloud provider penetration testing policies:
       [ ] AWS  — https://aws.amazon.com/security/penetration-testing/
       [ ] Azure — https://www.microsoft.com/en-us/msrc/pentest-rules-of-engagement
       [ ] GCP  — https://support.google.com/cloud/answer/6262505

Get-Out-of-Jail Letter (Authorization Letter)

PENETRATION TESTING AUTHORIZATION

To Whom It May Concern:

This letter serves as authorization for [Assessor Name/Firm] to
conduct security testing activities against systems owned and
operated by [Client Organization] as specified in the Rules of
Engagement document dated [Date].

Testing is authorized during the period of [Start Date] through
[End Date].

Should you have questions about this engagement, please contact:

  [Client Contact Name]
  [Title]
  [Phone]
  [Email]

Signed,
[Client Authorizer Name]
[Title]
[Date]

8. Scoping Checklists

Checklist: External Network Penetration Test

SCOPE ITEMS
[ ] External IP ranges / CIDR blocks
[ ] External-facing domains and subdomains
[ ] DNS servers
[ ] Mail servers (MX records)
[ ] VPN gateways
[ ] Remote access portals (Citrix, RDP, VDI)
[ ] Cloud infrastructure (AWS, Azure, GCP public endpoints)
[ ] CDN endpoints
[ ] API gateways

QUESTIONS FOR CLIENT
[ ] Total number of external IP addresses?
[ ] Any systems in shared hosting environments?
[ ] Any third-party managed infrastructure?
[ ] Cloud provider penetration testing policies reviewed?
[ ] DDoS testing in scope? (almost always no)
[ ] Social engineering in scope?
[ ] Testing window restrictions?
[ ] Change freeze / blackout periods?
[ ] Existing security controls (WAF, IPS, CDN)?
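Before any packet is sent, it is worth mechanically verifying that every target address falls inside the authorized ranges from the RoE. A minimal sketch using Python's stdlib `ipaddress` module; the CIDR blocks and target IPs below are placeholders, not real engagement data:

```python
import ipaddress

# Hypothetical authorized scope copied from RoE Section 3.1 (placeholders)
AUTHORIZED = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.16/28")]

def in_scope(target: str) -> bool:
    """Return True only if the target IP falls inside an authorized CIDR block."""
    addr = ipaddress.ip_address(target)
    return any(addr in net for net in AUTHORIZED)

for host in ("203.0.113.42", "192.0.2.10"):
    print(host, "IN SCOPE" if in_scope(host) else "OUT OF SCOPE -- do not test")
```

Running this against the full target list before the first scan catches copy-paste errors in the scope sheet, one of the most common causes of out-of-scope testing incidents.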

Checklist: Internal Network Penetration Test

SCOPE ITEMS
[ ] Internal network segments / VLANs
[ ] Active Directory domain(s) and forest trust relationships
[ ] Internal applications and file shares
[ ] Database servers
[ ] Development / staging environments
[ ] Network infrastructure (routers, switches, firewalls — management interfaces)
[ ] Wireless networks
[ ] VoIP systems
[ ] OT/ICS systems (if in scope — handle with extreme care)
[ ] Printers and IoT devices

QUESTIONS FOR CLIENT
[ ] Number of internal hosts / network segments?
[ ] Active Directory forest/domain structure?
[ ] Access provided: physical, VPN, or jump box?
[ ] Domain credentials provided? (grey-box?)
[ ] Any segments off-limits (OT, medical, production)?
[ ] Network segmentation between IT and OT?
[ ] Existing EDR / host-based controls?
[ ] SIEM monitoring active? (will blue team be notified?)
[ ] Is this a stealth engagement or announced?

Checklist: Web Application Penetration Test

SCOPE ITEMS
[ ] Application URL(s) and environments (prod/staging)
[ ] API endpoints and documentation (Swagger/OpenAPI)
[ ] Authentication mechanisms and test accounts
[ ] User roles to test (admin, user, guest, etc.)
[ ] Third-party integrations
[ ] Payment processing flows
[ ] File upload functionality
[ ] WebSocket endpoints
[ ] Single-page application (SPA) frameworks

QUESTIONS FOR CLIENT
[ ] Number of unique pages / endpoints?
[ ] Number of user roles?
[ ] Authentication type (SSO, OAuth, SAML, local)?
[ ] Source code access available?
[ ] API documentation available?
[ ] Rate limiting in place?
[ ] WAF protecting the application?
[ ] Test data or production data?
[ ] Automated testing allowed (fuzzing, scanning)?
[ ] Known areas of concern?
[ ] Recent code changes or new features?

Checklist: Cloud Security Assessment

SCOPE ITEMS
[ ] Cloud provider(s): AWS / Azure / GCP / Other
[ ] Account IDs / subscription IDs / project IDs
[ ] IAM configuration review
[ ] Network security groups / firewall rules
[ ] Storage bucket permissions
[ ] Serverless functions (Lambda, Azure Functions, Cloud Functions)
[ ] Container registries and orchestration (EKS, AKS, GKE)
[ ] Secrets management (KMS, Key Vault, Secret Manager)
[ ] Logging and monitoring configuration
[ ] Multi-account / multi-subscription architecture

QUESTIONS FOR CLIENT
[ ] Cloud provider pentest policy acknowledged?
[ ] Read-only or read-write access to cloud console/API?
[ ] IaC templates available (Terraform, CloudFormation)?
[ ] Multi-region deployment?
[ ] Shared responsibility model understood?
[ ] Compliance requirements (SOC 2, PCI, HIPAA)?
[ ] Existing cloud security tooling (GuardDuty, Defender, SCC)?

Checklist: Red Team Engagement

SCOPE ITEMS
[ ] Objective-based goals (e.g., access CEO email, exfiltrate database)
[ ] All attack vectors: network, application, social engineering, physical
[ ] Duration (typically 2-6 weeks)
[ ] Threat actor profile to emulate
[ ] C2 infrastructure allowed?
[ ] Physical access attempts authorized?
[ ] Social engineering targets and boundaries
[ ] Implant / persistence authorized?

QUESTIONS FOR CLIENT
[ ] Who knows about the engagement? (need-to-know list)
[ ] Blue team aware? (announced vs. unannounced)
[ ] Deconfliction process (how to verify if detected activity is the red team)?
[ ] Maximum acceptable business disruption?
[ ] Trophy objectives (what constitutes "game over")?
[ ] Exfiltration simulation or actual data?
[ ] Legal review of social engineering scope completed?
[ ] Physical security assessment boundaries (buildings, data centers)?

9. MITRE ATT&CK Mapping for Reports

Enterprise ATT&CK Tactics (14 Columns)

ID      Tactic                 Description
TA0043  Reconnaissance         Information gathering about the target
TA0042  Resource Development   Establishing attack infrastructure
TA0001  Initial Access         Gaining entry to the network
TA0002  Execution              Running attacker-controlled code
TA0003  Persistence            Maintaining access across restarts
TA0004  Privilege Escalation   Gaining higher-level permissions
TA0005  Defense Evasion        Avoiding detection
TA0006  Credential Access      Stealing credentials
TA0007  Discovery              Learning about the environment
TA0008  Lateral Movement       Moving through the network
TA0009  Collection             Gathering target data
TA0011  Command and Control    Communicating with compromised systems
TA0010  Exfiltration           Stealing data out of the network
TA0040  Impact                 Disrupting or destroying systems/data

Mapping Findings to ATT&CK

For each finding, identify the technique used:

Finding: Default credentials on Jenkins server
ATT&CK : T1078.001 (Valid Accounts: Default Accounts)
Tactic : Initial Access (TA0001), Persistence (TA0003)

Finding: Kerberoasting attack yielded service account hashes
ATT&CK : T1558.003 (Steal or Forge Kerberos Tickets: Kerberoasting)
Tactic : Credential Access (TA0006)

Finding: LLMNR/NBT-NS poisoning captured NTLMv2 hashes
ATT&CK : T1557.001 (Adversary-in-the-Middle: LLMNR/mDNS/NBT-NS Poisoning)
Tactic : Credential Access (TA0006)
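In practice the mapping can live alongside the finding records so the report generator emits tactic and technique IDs automatically. A small sketch using the three examples above (the data structure is illustrative, not a prescribed schema):

```python
# Each finding record carries its ATT&CK technique and tactic IDs
findings = [
    {"title": "Default credentials on Jenkins server",
     "technique": "T1078.001", "tactics": ["TA0001", "TA0003"]},
    {"title": "Kerberoasting yielded service account hashes",
     "technique": "T1558.003", "tactics": ["TA0006"]},
    {"title": "LLMNR/NBT-NS poisoning captured NTLMv2 hashes",
     "technique": "T1557.001", "tactics": ["TA0006"]},
]

# Group techniques by tactic for the report's ATT&CK summary table
by_tactic: dict[str, list[str]] = {}
for f in findings:
    for tactic in f["tactics"]:
        by_tactic.setdefault(tactic, []).append(f["technique"])

for tactic, techniques in sorted(by_tactic.items()):
    print(f"{tactic}: {', '.join(techniques)}")
```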

ATT&CK Coverage Heat Map

Include a matrix showing which ATT&CK techniques were tested and which produced findings. This helps clients understand:

  • What was tested (coverage)
  • Where defenses held (tested but no finding)
  • Where defenses failed (finding produced)
  • What was not tested (gaps for future assessment)
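The four coverage categories above reduce to simple set arithmetic over two inputs: the techniques exercised during testing and the techniques that produced findings. A sketch with illustrative technique IDs:

```python
# Techniques exercised during the engagement vs. those that produced findings
tested = {"T1078.001", "T1558.003", "T1557.001", "T1110.003", "T1021.002"}
found  = {"T1078.001", "T1558.003", "T1557.001"}

held   = tested - found   # tested, defenses held (no finding)
failed = tested & found   # tested, finding produced

for tid in sorted(tested):
    status = "FINDING" if tid in failed else "no finding"
    print(f"{tid}: tested, {status}")
```

Techniques outside `tested` entirely form the fourth category, the gaps to flag for a future assessment.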

10. Vulnerability Disclosure Process

When findings involve third-party software or are discovered outside a contracted engagement, follow responsible disclosure protocols.

Three Disclosure Models

Model        Process                                                    Risk
Private      Report to vendor only; vendor controls publication         Vendor may never fix or disclose
Full         Publish all details immediately                            Users exposed before a patch exists
Coordinated  Private report; public disclosure after patch or deadline  Balanced approach; industry standard

Coordinated Disclosure Timeline

Day 0   : Discover vulnerability
Day 1-3 : Write detailed report with PoC
Day 3-7 : Contact vendor via security@, security.txt, or bug bounty
Day 7   : Confirm vendor received report
Day 14  : Request vendor timeline for patch
Day 90  : Disclosure deadline (Google Project Zero standard)
Day 90+ : Publish advisory with or without vendor patch
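Once Day 0 is fixed, the milestones above translate directly into calendar dates for tracking. A minimal sketch with `datetime`; the discovery date is an example:

```python
from datetime import date, timedelta

day0 = date(2024, 6, 3)  # example discovery date

# Milestone offsets mirror the timeline table above
milestones = [
    (3,  "Detailed report with PoC written"),
    (7,  "Vendor receipt of report confirmed"),
    (14, "Vendor patch timeline requested"),
    (90, "Disclosure deadline"),
]

for offset, label in milestones:
    print(f"Day {offset:>2} ({day0 + timedelta(days=offset)}): {label}")
```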

Researcher Report Contents

  • Vulnerability description and type (CWE)
  • Affected versions
  • Step-by-step reproduction instructions
  • Proof-of-concept code (non-weaponized)
  • Impact assessment
  • Suggested remediation
  • Researcher contact information
  • Disclosure timeline expectations
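The checklist above maps naturally onto a per-vulnerability report skeleton that can be generated from structured fields. A sketch; every field value below is a placeholder, and the markdown layout is one reasonable choice rather than a required format:

```python
TEMPLATE = """\
# Vulnerability Report: {title}

- CWE: {cwe}
- Affected versions: {versions}
- Reporter: {contact}

## Reproduction Steps
{steps}

## Impact
{impact}

## Suggested Remediation
{remediation}

## Disclosure Timeline
Coordinated disclosure; public advisory planned {deadline} days after this report.
"""

report = TEMPLATE.format(
    title="Example stored XSS in profile page",   # placeholder finding
    cwe="CWE-79", versions="<= 2.4.1", contact="researcher@example.com",
    steps="1. Log in as a low-privilege user\n2. ...",
    impact="Session hijacking of any user viewing the profile.",
    remediation="Context-aware output encoding of profile fields.",
    deadline=90,
)
print(report)
```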

Legal Considerations

  • Verify testing was authorized or on your own systems
  • Review safe harbor policies before testing
  • Avoid payment demands (can constitute extortion)
  • Document everything with timestamps
  • Consider anonymous disclosure if legal risk is unclear
  • Consult legal counsel for cross-jurisdiction issues

11. Reporting Tools and Automation

SysReptor

  • Type: Self-hosted or cloud pentest reporting platform
  • Input: Markdown content with HTML-designed templates
  • Output: PDF reports
  • Features: Finding management, multi-user collaboration, template library
  • Certifications: Templates for OSCP, CPTS, OSEP, OSED, OSWA, CDSA
  • Stack: Python + Vue + TypeScript
  • URL: https://github.com/Syslifters/sysreptor

PwnDoc

  • Type: Self-hosted pentest reporting application
  • Input: Structured vulnerability data entry
  • Output: Customizable DOCX reports
  • Features: Shared vulnerability database, multi-user audits, custom fields, multi-language support, reusable finding templates
  • Philosophy: Pool ("mutualize") vulnerability data across team members to reduce documentation overhead
  • URL: https://github.com/pwndoc/pwndoc

Serpico (Archived)

  • Type: Self-hosted report generation tool (archived May 2020)
  • Input: Finding selection from template database
  • Output: DOCX reports with custom branding
  • Features: Template database, attachment storage, team collaboration, Word meta-language templates
  • Stack: JavaScript + HTML + Ruby, Docker deployment
  • URL: https://github.com/SerpicoProject/Serpico
  • Status: Read-only archive, seeking new maintainer

Other Notable Tools

Tool         Description
Dradis       Open-source reporting and collaboration platform
PlexTrac     Commercial purple team and pentest management platform
AttackForge  Commercial pentest management with workflow automation
Reconmap     Open-source pentest collaboration platform
Ghostwriter  SpecterOps reporting and infrastructure management
WriteHat     Self-hosted pentest report generator by BlackLanternSecurity

12. Quality Assurance Checklist

Run this checklist before delivering any penetration testing report.

Content Quality

[ ] Every finding has a unique ID
[ ] Every finding has a severity rating with justification
[ ] Every finding has reproduction steps
[ ] Every finding has at least one screenshot or evidence artifact
[ ] Every finding has specific remediation (not generic advice)
[ ] Every finding maps to a CWE
[ ] Critical/High findings map to MITRE ATT&CK technique
[ ] No false positives included (all findings manually validated)
[ ] Attack narrative demonstrates real-world impact (if applicable)
[ ] Positive security observations included

Report Structure

[ ] Cover page with correct client name, dates, classification
[ ] Table of contents with correct page numbers
[ ] Executive summary is 1-2 pages maximum
[ ] Executive summary uses business language (no jargon)
[ ] Findings summary table present, sorted by severity
[ ] Remediation roadmap with time-based phases
[ ] Appendix includes methodology description
[ ] Appendix includes severity rating definitions
[ ] Appendix includes tool list

Accuracy and Professionalism

[ ] Client name spelled correctly throughout
[ ] All dates accurate
[ ] No internal assessor notes left in document
[ ] No other client names present (from copy-paste errors)
[ ] Screenshots do not contain assessor PII or other client data
[ ] All URLs and IPs belong to the in-scope target
[ ] CVSS vectors calculate to the stated score
[ ] Grammar and spelling checked
[ ] Consistent formatting throughout
[ ] PDF metadata does not contain sensitive information
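The "CVSS vectors calculate to the stated score" check can be automated during QA. Below is a compact sketch of the CVSS v3.1 base score equations from the FIRST specification (metric weights and the Roundup function per the spec); spot-check results against the official FIRST calculator before relying on it:

```python
import math

# CVSS v3.1 base metric weights (FIRST specification, Section 7.4)
W = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weight depends on Scope
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest 1-decimal value >= x (spec Appendix A)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(vector: str) -> float:
    m = dict(p.split(":") for p in vector.removeprefix("CVSS:3.1/").split("/"))
    changed = m["S"] == "C"
    iss = 1 - (1 - W["CIA"][m["C"]]) * (1 - W["CIA"][m["I"]]) * (1 - W["CIA"][m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if changed else 6.42 * iss)
    expl = (8.22 * W["AV"][m["AV"]] * W["AC"][m["AC"]]
            * W["PR"]["C" if changed else "U"][m["PR"]] * W["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + expl) if changed else impact + expl
    return roundup(min(raw, 10))

# A classic unauthenticated network RCE vector scores 9.8 (Critical)
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

During QA, run every vector in the findings table through this function and diff against the stated scores; any mismatch means either the vector or the score was edited without updating the other.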

Evidence Handling

[ ] All screenshots are legible and annotated
[ ] Sensitive data (passwords, PII) redacted in evidence
[ ] Evidence files named consistently (finding-001-evidence-01.png)
[ ] Raw tool output sanitized before inclusion
[ ] No client production data included unnecessarily
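The naming convention can be enforced with a pattern check over the evidence directory before packaging. A sketch assuming the `finding-NNN-evidence-NN.ext` convention shown above; the allowed extensions are an illustrative choice:

```python
import re

# Matches the finding-001-evidence-01.png convention from the checklist
PATTERN = re.compile(r"^finding-\d{3}-evidence-\d{2}\.(png|jpg|txt|pcap)$")

def check_names(filenames: list[str]) -> list[str]:
    """Return the filenames that violate the evidence naming convention."""
    return [name for name in filenames if not PATTERN.match(name)]

bad = check_names([
    "finding-001-evidence-01.png",
    "screenshot_final2.png",        # violates the convention
    "finding-002-evidence-01.pcap",
])
print(bad)  # ['screenshot_final2.png']
```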

Delivery

[ ] Report delivered via encrypted channel
[ ] Report password communicated via separate channel
[ ] Client acknowledged receipt
[ ] Debrief meeting scheduled
[ ] Evidence archive retention period documented
[ ] Retest date agreed upon (if applicable)

References

  • PTES: http://www.pentest-standard.org/
  • OWASP WSTG: https://owasp.org/www-project-web-security-testing-guide/stable/
  • OWASP Risk Rating: https://owasp.org/www-community/OWASP_Risk_Rating_Methodology
  • MITRE ATT&CK: https://attack.mitre.org/
  • CVSS v3.1: https://www.first.org/cvss/v3.1/specification-document
  • CVSS v4.0: https://www.first.org/cvss/v4.0/specification-document
  • OWASP Vulnerability Disclosure: https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability_Disclosure_Cheat_Sheet.html
  • Public Pentest Reports: https://github.com/juliocesarfort/public-pentesting-reports
  • Cure53 Reports: https://github.com/cure53/public-pentesting-reports
  • SysReptor: https://github.com/Syslifters/sysreptor
  • PwnDoc: https://github.com/pwndoc/pwndoc
  • Serpico: https://github.com/SerpicoProject/Serpico