Cybersecurity News & Analysis | github: defconxt | © 2026 | blacktemple.net

Social Engineering Deep Dive — The Human Attack Surface

CIPHER Training Module | Classification: Authorized Security Training | Sources: SEORG Framework, MITRE ATT&CK, TrustedSec SET, GoPhish, OWASP | Last Updated: 2026-03-14


Table of Contents

  1. Psychological Foundations: Cialdini's Principles
  2. Pretexting Development Methodology
  3. Elicitation Techniques
  4. Vishing — Voice Phishing
  5. Smishing — SMS Phishing
  6. Email Phishing Taxonomy
  7. Physical Social Engineering
  8. Watering Hole Attacks
  9. Callback Phishing (BazarCall)
  10. QR Code Phishing (Quishing)
  11. Deepfake Threats in Social Engineering
  12. Tooling and Frameworks
  13. MITRE ATT&CK Mapping
  14. Defense: Security Awareness Program Design

1. Psychological Foundations: Cialdini's Principles

Social engineering exploits human psychology rather than technical vulnerabilities. The Social-Engineer.org Framework defines influence as "the process of getting someone else to want to do, react, think, or believe in the way you want them to." [CONFIRMED]

1.1 The 6+2 Principles of Influence

Robert Cialdini's original six principles, plus two additions recognized in the SE community:

1. Reciprocity

Principle: People feel obligated to return favors. A gift or concession creates psychological debt.

SE Application:

  • Attacker provides "free" security audit results, then requests network access to "verify findings"
  • Visher offers to "help fix" a fabricated IT issue, then requests credentials
  • Phishing email delivers a "free report" requiring login to download

Why it works: The reciprocity norm is deeply embedded across cultures. Refusing a return favor creates cognitive dissonance. [CONFIRMED — Cialdini, 2001]

2. Commitment and Consistency

Principle: Once people commit to something (even trivially), they behave consistently with that commitment.

SE Application:

  • Start with small, innocuous requests ("Can you confirm your department?") then escalate ("And your employee ID for verification?")
  • Get target to agree they value security, then leverage that commitment to extract "verification" information
  • Foot-in-the-door technique: small initial compliance leads to larger requests

Why it works: Inconsistency is psychologically uncomfortable. Once a position is taken, people rationalize continued compliance. [CONFIRMED]

3. Social Proof

Principle: People look to others' behavior to determine correct action, especially under uncertainty.

SE Application:

  • "Everyone in your department has already completed this verification"
  • Phishing landing pages showing "1,247 employees have already updated their credentials"
  • Impersonator in a building: "Sarah from accounting let me in last time"
  • Fake reviews/testimonials on credential harvesting sites

Why it works: Uncertainty + perceived peer behavior = compliance. Strongest when the "peers" are similar to the target. [CONFIRMED]

4. Authority

Principle: People defer to perceived experts and authority figures without independent verification.

SE Application:

  • Impersonating IT administrators, executives, auditors, law enforcement
  • Spoofed emails from "CEO" requesting urgent wire transfers (whaling/BEC)
  • Wearing uniforms, badges, carrying clipboards during physical SE
  • Using technical jargon to establish expertise credibility
  • Vishing as "bank fraud department" or "IRS agent"

Why it works: Milgram's obedience experiments demonstrated that ~65% of people will comply with authority directives even against their judgment. [CONFIRMED — Milgram, 1963]

5. Liking

Principle: People are more easily influenced by those they like — based on similarity, compliments, familiarity, or physical attractiveness.

SE Application:

  • Building rapport through shared interests discovered via OSINT (LinkedIn, social media)
  • Mirroring communication style, vocabulary, and body language
  • Complimenting the target's expertise: "I heard you're the go-to person for this"
  • Using humor and warmth to lower defenses
  • Name-dropping mutual connections

Why it works: The halo effect — positive feelings toward a person reduce critical evaluation of their requests. [CONFIRMED]

6. Scarcity

Principle: Perceived scarcity increases perceived value. Loss aversion is stronger than gain motivation.

SE Application:

  • "Your account will be locked in 24 hours if you don't verify"
  • "This security update expires at midnight"
  • "Only the first 50 employees get the new laptop model — confirm your details now"
  • Phishing emails with countdown timers
  • Vishing: "I can only hold this refund for the next 10 minutes"

Why it works: Loss aversion (Kahneman & Tversky) — people fear losing something ~2x more than they value gaining something equivalent. Urgency bypasses deliberate thought. [CONFIRMED]

7. Obligation (Extended Principle)

Principle: Creating a sense of duty or indebtedness that compels action.

SE Application:

  • "As part of your compliance requirements, you need to complete this form"
  • Framing requests as organizational obligations
  • Leveraging existing relationships: "Your manager asked me to reach out to you about this"
  • Creating artificial obligations through prior "favors"

Distinction from reciprocity: Obligation focuses on duty/role expectations rather than personal favor exchange. [CONFIRMED — SEORG Framework]

8. Concession

Principle: When someone makes a concession (retreats from a larger request), the other party feels compelled to reciprocate with a concession of their own.

SE Application:

  • Door-in-the-face technique: request full database access, "settle" for read-only credentials
  • "I originally needed to audit the entire server room, but just let me check this one rack"
  • Asking for SSN, then "accepting" just date of birth and employee ID as alternatives

Why it works: The retreat appears reasonable, and refusing a "reasonable" compromise feels socially unacceptable. [CONFIRMED]

1.2 Additional Psychological Levers

Lever | Mechanism | SE Use
Fear/Threat | Amygdala hijack bypasses rational processing | "Your system is compromised right now"
Curiosity | Information gap theory drives completion seeking | "See who viewed your profile" / USB baiting
Greed | Reward seeking overrides caution | Prize notifications, fake job offers
Helpfulness | Desire to assist is socially reinforced | Help desk exploitation, "I'm locked out"
Fatigue | Decision fatigue degrades critical thinking | Late-day/end-of-week attack timing
Distraction | Cognitive load reduces vigilance | Multi-step processes, time pressure + complexity
Trust transfer | Trust in one entity transfers to associated entities | Compromised vendor emails, supply chain pretexts

2. Pretexting Development Methodology

The SEORG Framework defines pretexting as "presenting oneself as someone else in order to obtain private information." It can extend to creating entirely new identities to manipulate information disclosure. [CONFIRMED]

2.1 Research Phase (OSINT Foundation)

Principle: "Good information gathering techniques can make or break a good pretext." Mimicking a tech support representative fails if the organization doesn't use external support. [CONFIRMED — SEORG]

Target Intelligence Collection

Source | Intelligence Gathered | Tools
LinkedIn | Org chart, job titles, reporting structure, technology stack, recent hires | LinkedIn Recruiter, PhantomBuster
Company website | Products, services, partners, leadership, press releases | HTTrack, Wayback Machine
Job postings | Internal tools, security products, infrastructure details | Indeed, Glassdoor
Social media | Personal interests, travel, life events, communication style | Maltego, SpiderFoot
SEC/Public filings | Financials, acquisitions, key personnel, vendors | EDGAR, OpenCorporates
Data breaches | Email formats, credentials, internal docs | HaveIBeenPwned, DeHashed
DNS/WHOIS | Infrastructure, email servers, hosting | Amass, Subfinder
Physical recon | Building layout, badge types, visitor procedures, dumpster contents | Camera, binoculars, patience

Organizational Intelligence Map

TARGET ORG
├── People
│   ├── C-Suite (names, assistants, email format)
│   ├── IT Staff (helpdesk numbers, ticketing system)
│   ├── HR/Finance (payroll system, benefits provider)
│   └── Target employees (role, tenure, interests)
├── Technology
│   ├── Email platform (O365, Google Workspace, Exchange)
│   ├── VPN/Remote access solution
│   ├── Helpdesk/ticketing system
│   └── Security tools (EDR, email gateway, MFA type)
├── Processes
│   ├── Visitor policy
│   ├── Password reset procedure
│   ├── Wire transfer approval chain
│   └── Vendor onboarding process
└── Culture
    ├── Communication style (formal/informal)
    ├── Dress code
    ├── Work hours and shift patterns
    └── Recent events (layoffs, mergers, moves)

2.2 Persona Creation

Persona Development Checklist

  1. Identity foundation: Name, role, organization, contact info (burner phone, spoofed email)
  2. Backstory: Employment history, reason for contact, relevant technical knowledge
  3. Motivation: Why is this person contacting the target? What is natural and expected?
  4. Knowledge boundaries: What should the persona know? What shouldn't they know?
  5. Emotional state: Rushed executive? Friendly vendor? Frustrated customer?
  6. Verification resilience: Can the persona withstand callback verification? Supervisor check?

Common Effective Pretexts

Pretext | Authority Level | Verification Risk | Typical Target
IT Support / Helpdesk | Medium | Medium — may call IT | End users
External Auditor | High | High — may verify with management | Finance, IT
New Employee | Low | Low — expected to ask questions | Helpdesk, HR
Vendor/Contractor | Medium | Medium — may check vendor list | Facilities, IT
Executive Assistant | High | Medium — may verify with exec | Finance, HR
Delivery Person | Low | Very Low | Reception, facilities
Building Inspector | High | Low — rarely verified | Facilities
Recruiter | Medium | Low | Employees (off-guard)

2.3 Story Development

The Key Principle: Like "inserting the proper key in a lock," an effective pretext precisely matches situational expectations. [CONFIRMED — SEORG]

Story Architecture

HOOK     — Why am I contacting you? (natural, expected reason)
BUILDUP  — Establish credibility through knowledge demonstration
ASK      — The information or access request (embedded naturally)
URGENCY  — Time pressure or consequence that prevents deliberation
EXIT     — Clean disengagement that doesn't trigger retrospective suspicion

Story Validation Checks

  • Does this pretext make sense for this organization at this time?
  • Would this person realistically contact this target through this channel?
  • Can the persona answer follow-up questions without breaking character?
  • Is the request proportional to the relationship established?
  • Does the urgency feel natural, not artificial?

2.4 Props and Supporting Materials

Physical SE props:

  • Branded polo shirts, lanyards, badges (can be created from photos)
  • Clipboards, work orders, printed emails
  • Toolbelts, hard hats, safety vests (instant authority for facility access)
  • Branded vehicle magnets
  • Business cards (trivially printed)

Digital SE props:

  • Spoofed email domains (typosquatting: cornpany.com vs company.com)
  • Cloned login pages
  • Fake calendar invites
  • Spoofed caller ID
  • Professional-looking PDF reports or invoices
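On the defensive side, the typosquatting patterns above can be enumerated programmatically, so lookalikes of your own domain can be monitored or pre-registered before an attacker uses them. A minimal sketch; the confusable map and function names are illustrative, not any library's API:

```python
# Minimal typosquat-variant generator (defensive use: enumerate lookalike
# domains of your own domain so they can be monitored or registered).
# Covers two classes from the list above: character-swap lookalikes
# (m -> rn) and separator tricks (dash vs dot).

# Visually confusable substitutions (illustrative subset, not exhaustive)
CONFUSABLES = {
    "m": ["rn"],      # cornpany.com vs company.com
    "l": ["1", "i"],
    "o": ["0"],
    "w": ["vv"],
}

def typosquat_variants(domain: str) -> set[str]:
    """Return lookalike variants of a registrable domain like 'company.com'."""
    name, _, tld = domain.partition(".")
    variants = set()
    # Character-substitution lookalikes
    for i, ch in enumerate(name):
        for sub in CONFUSABLES.get(ch, []):
            variants.add(f"{name[:i]}{sub}{name[i+1:]}.{tld}")
    # Subdomain-as-dash tricks: support.company.com -> support-company.com
    variants.add(f"support-{name}.{tld}")
    variants.add(f"{name}-support.{tld}")
    return variants

print(sorted(typosquat_variants("company.com")))
```

A real monitoring pipeline would feed these candidates into certificate-transparency and new-registration feeds; the substitution table here is only a starting point.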

3. Elicitation Techniques

The SEORG Framework defines elicitation as "the act of obtaining information without directly asking for it." The NSA characterizes it similarly as extracting information from someone or something. References include the CIA's KUBARK interrogation manual. [CONFIRMED — SEORG]

3.1 Core Elicitation Tactics

Technique | Mechanism | Example
Deliberate False Statement | People correct wrong information reflexively | "I heard you're still running Windows Server 2012?" (target corrects: "No, we migrated to 2019 last year")
Flattery | Ego gratification loosens information control | "You clearly know more about this system than anyone — how does the authentication actually work?"
Mutual Interest | Shared passion creates trust and openness | Discussing industry conferences, then pivoting to internal tool discussions
Assumed Knowledge | Acting as if you already know triggers confirmation | "So the VPN termination is on the Palo Alto, right?"
Bracketing | Providing a range forces correction to the real value | "I'd guess you have somewhere between 500 and 5000 employees?" → "Actually, about 1,200"
Naivete | Appearing ignorant encourages explanation | "I'm sorry, I'm new to this — how does the badge access system work exactly?"
Quid Pro Quo | Trading information creates reciprocal disclosure | Sharing (fake) internal details to encourage target to share theirs
Oblique Reference | Indirect mention prompts voluntary elaboration | "Things must be crazy with the migration..." → target volunteers details

3.2 Preloading

Preloading involves seeding ideas or framing before the actual elicitation attempt:

  • Priming: Mentioning related concepts before asking (discussing "security breaches in the news" before asking about the target's security posture)
  • Anchoring: Establishing a reference point that biases subsequent responses
  • Context setting: Creating a conversational frame that makes disclosure feel natural
  • Emotional preloading: Establishing an emotional state (anxiety, trust, urgency) that facilitates compliance

3.3 Conversation Flow Model

Phase 1: RAPPORT    — Establish connection, find common ground (2-5 min)
Phase 2: TRANSITION — Natural topic bridge to area of interest
Phase 3: ELICIT     — Extract information using techniques above
Phase 4: VALIDATE   — Cross-reference with other questions/sources
Phase 5: EXIT       — Graceful disengage without raising suspicion

Critical rule: Never ask directly for what you want. Every target question should be embedded in a natural conversational context.


4. Vishing — Voice Phishing

4.1 Definition and Impact

SEORG defines vishing as "the practice of eliciting information or attempting to influence action via the telephone." Americans lost $29.8 million to phone scams in 2021 alone. [CONFIRMED — SEORG]

4.2 Core Vishing Techniques

Caller ID Spoofing

  • Services allow displaying arbitrary caller ID numbers
  • Spoof the target organization's own numbers, vendor numbers, or government agencies
  • VoIP services (Twilio, etc.) enable programmatic caller ID control

Voice Manipulation

  • AI voice synthesis: The 2019 CEO deepfake case used AI-synthesized voice impersonating a German executive to extract $243,000 [CONFIRMED — SEORG]
  • Voice changers to alter gender, age, accent
  • Background noise generators (office sounds, call center ambience)

The Mumble Technique

Speaking unclearly or feigning a speech impairment to bypass verification procedures. Support staff often accommodate rather than challenge. [CONFIRMED — SEORG]

4.3 Vishing Script Templates

IT Helpdesk Pretext

"Hi, this is [Name] from IT Support. We're seeing some unusual activity
on your workstation — [workstation name from OSINT]. Your system may
have been flagged by our security monitoring. I need to verify your
identity before I can look into this further. Can you confirm your
employee ID and the email address associated with your account?"

Escalation path: Verify identity → "install this remote support tool" → credential harvest

Bank Fraud Department Pretext

"Good afternoon, this is [Name] from [Bank] Fraud Prevention. We've
detected a suspicious transaction on your account ending in [last 4
from breach data]. A purchase of $847.32 at [location]. Did you
authorize this transaction? ... No? Okay, I need to verify your
identity to block the card. Can you confirm your date of birth and
the security question on your account?"

Vendor/Partner Pretext

"Hi, I'm calling from [Known Vendor]. We're doing our quarterly
security compliance review and need to verify some access
configurations. I have your organization listed as using our
[product] — can you confirm what version you're currently running
and who manages that system on your end?"

4.4 High-Risk Targets for Vishing

Target | Why Vulnerable | Information Sought
Help desk / Support | Trained to be helpful | Credentials, system info, password resets
Reception / Front desk | Gatekeepers with limited security training | Employee names, schedules, building layout
HR department | Handle sensitive employee data | PII, org structure, payroll info
Finance / AP | Handle payments and transfers | Banking details, approval processes
New employees | Unfamiliar with processes and colleagues | Everything — they don't know what's abnormal

4.5 Notable Vishing Incidents

Incident | Year | Technique | Impact
Twitter hack | 2020 | Attackers vished employees, convinced support staff to reset passwords | High-profile account compromise (Obama, Musk, etc.)
GoDaddy | 2020 | Support employees tricked into entering credentials on fraudulent login pages | Domain hijacking of cryptocurrency services
CEO voice deepfake | 2019 | AI-synthesized voice of German executive | $243,000 wire transfer
IRS scams | Ongoing | Spoofed numbers, fake badge references, threats of arrest | Millions in annual losses
Tech support scams | Ongoing | Apple/Microsoft impersonation | $55M+ losses in 2018

[CONFIRMED — SEORG Framework]


5. Smishing — SMS Phishing

5.1 Definition and Scale

SEORG defines SMiShing as "the act of using mobile phone text messages, SMS (Short Message Service), to lure victims into immediate action." Proofpoint documented a 300% increase in smishing during 2020. [CONFIRMED]

5.2 Why SMS is Effective

  • 98% open rate (vs ~20% for email)
  • No spam filters on most mobile devices (improving but lagging behind email)
  • Shortened URLs are harder to evaluate on small mobile screens
  • Users conditioned to act on SMS (2FA codes, delivery notifications)
  • Mobile browsers hide full URLs
  • Emotional association with personal device increases trust

5.3 Smishing Message Templates

Financial Alert

[BankName] ALERT: Unusual login detected on your account.
If this wasn't you, secure your account immediately:
https://bankname-secure[.]com/verify

Package Delivery

USPS: Your package #US9374728 has a delivery issue.
Update delivery preferences: https://usps-redelivery[.]com

MFA/Account Verification

Your [Company] verification code is 847291. If you did not
request this, someone may be accessing your account. Secure
it now: https://company-security[.]net

Prize/Reward

Congratulations! You've been selected for a $500 Amazon
gift card. Claim before midnight: https://amzn-rewards[.]com/claim

Government/Tax

IRS Notice: Your tax refund of $3,247.00 is pending.
Confirm your identity to process: https://irs-refund[.]com/verify

5.4 Smishing Infrastructure

  • Link shorteners: bit.ly, tinyurl, custom shorteners obscure destination
  • Disposable domains: Registered hours before campaign, burned after
  • SMS gateways: Bulk SMS services, often abused or compromised
  • Number rotation: Different source numbers to avoid carrier blocking
  • Geotargeting: Messages tailored to recipient's area code/region
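Several of these infrastructure traits are visible in the message body itself, which makes heuristic triage possible on the defensive side. A minimal sketch; the shortener list, brand list, and urgency phrases are illustrative assumptions, not a vetted ruleset:

```python
import re
from urllib.parse import urlparse

# Illustrative lists -- a real filter would use maintained threat feeds
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}
BRANDS = {"usps", "amazon", "irs", "bankname"}

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def smishing_indicators(message: str) -> list[str]:
    """Return heuristic red flags found in an SMS message body."""
    flags = []
    for raw in URL_RE.findall(message):
        host = urlparse(raw).hostname or ""
        if host in SHORTENERS:
            flags.append(f"link shortener: {host}")
        # Brand name embedded in a lookalike domain,
        # e.g. usps-redelivery.com is not usps.com
        for brand in BRANDS:
            if brand in host and not host.endswith((f"{brand}.com", f"{brand}.gov")):
                flags.append(f"brand lookalike domain: {host}")
    if re.search(r"\b(immediately|within 24 hours|before midnight)\b", message, re.I):
        flags.append("urgency language")
    return flags
```

Run against the templates in 5.3, every one of them trips at least the lookalike-domain or urgency check, which is the point: the lures are formulaic even when the domains rotate.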

5.5 Impact Statistics

  • FBI IC3 reported companies lost $54.2 million to social attacks including smishing in 2020 [CONFIRMED — SEORG]
  • 6+ billion global smartphone subscriptions create massive attack surface
  • COVID-19 pandemic massively expanded smishing (contact tracing, stimulus payments, vaccine appointments)

6. Email Phishing Taxonomy

6.1 Attack Categories

Mass Phishing

Untargeted, high-volume campaigns. Low effort per message, relies on volume.

Spear Phishing

Highly targeted using OSINT. Attackers research specific job roles and personal details via social media. Example: HR directors targeted by fake COO emails requesting payroll account changes. [CONFIRMED — SEORG]

Whaling

Targets C-suite executives with privileged access. Example: Ottawa City Treasurer received a convincing email from "her manager" requesting a $97,797 wire transfer for an acquisition. [CONFIRMED — SEORG]

Clone Phishing

Duplicates a legitimate email the target previously received, replacing attachments/links with malicious versions. Exploits existing trust in a known communication.

Business Email Compromise (BEC)

Compromises or spoofs executive email to request wire transfers, payroll changes, or sensitive data. FBI reported $2.4B in BEC losses in 2021.

Thread Hijacking

Adversaries compromise an email account, then insert malicious content into existing email conversation threads. Exploits established trust and context. [CONFIRMED — MITRE T1566]

6.2 Technical Manipulation Techniques

Domain Spoofing and Typosquatting

  • cornpany.com instead of company.com (m → rn)
  • support-amazon.com instead of support.amazon.com (dash vs dot)
  • microsoft-support.com instead of support.microsoft.com
  • Homograph attacks using Unicode: аpple.com (Cyrillic 'a')
  • IDN homograph: xn--pple-43d.com renders as аpple.com

[CONFIRMED — SEORG]
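Homograph tricks are detectable precisely because the confusable characters are different Unicode code points, and the punycode form is what mail gateways actually see. A short sketch using Python's stdlib `unicodedata` and the built-in IDNA codec:

```python
import unicodedata

def homograph_report(domain: str) -> list[str]:
    """Flag non-ASCII characters in a domain that can masquerade as Latin letters."""
    findings = []
    for ch in domain:
        if ord(ch) > 127:
            findings.append(f"{ch!r}: {unicodedata.name(ch, 'UNKNOWN')}")
    return findings

# Cyrillic 'a' (U+0430) in place of Latin 'a'
print(homograph_report("\u0430pple.com"))
# The punycode form a gateway sees: b'xn--pple-43d.com'
print("\u0430pple.com".encode("idna"))
```

A stricter check would also flag mixed-script labels (Latin plus Cyrillic in one label), which is how browsers decide whether to display the Unicode or the xn-- form.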

Current Events Exploitation

COVID-19 generated 71,000+ reported fraud cases totaling $89.51M in losses. Healthcare saw a 580% jump in ransomware via phishing during 2020. [CONFIRMED — SEORG]

6.3 Phishing Email Anatomy

From: IT Security <security@cornpany.com>        ← Spoofed/typosquat sender
Reply-To: attacker@proton.me                      ← Hidden reply-to
Subject: [URGENT] Password Expiration Notice       ← Urgency + authority
Date: Friday 4:45 PM                              ← End of day/week timing

Body:
- Company logo (scraped from website)              ← Visual legitimacy
- Professional formatting matching internal style  ← Consistency
- Specific details (employee name, department)     ← Personalization from OSINT
- Call to action with embedded link                ← Payload delivery
- Deadline creating urgency                        ← Scarcity/fear
- Consequences for non-compliance                  ← Authority + threat
- Signature matching real IT staff                 ← Authority + legitimacy

Footer:
- "This is an automated message"                   ← Discourages reply
- Unsubscribe link (also malicious)               ← Secondary payload
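Two of the header red flags above, the From/Reply-To domain mismatch and urgency keywords in the Subject, can be checked mechanically with the stdlib `email` module. A minimal sketch; the sample message and keyword list are illustrative:

```python
from email import message_from_string
from email.utils import parseaddr

RAW = """\
From: IT Security <security@cornpany.com>
Reply-To: attacker@proton.me
Subject: [URGENT] Password Expiration Notice

Your password expires today. Act now to keep access.
"""

def header_red_flags(raw: str) -> list[str]:
    """Flag From/Reply-To domain mismatch and urgency keywords in the Subject."""
    msg = message_from_string(raw)
    flags = []
    from_dom = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_dom = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    if reply_dom and reply_dom != from_dom:
        flags.append(f"Reply-To domain ({reply_dom}) differs from From domain ({from_dom})")
    if any(w in (msg.get("Subject") or "").upper() for w in ("URGENT", "IMMEDIATE", "ACTION REQUIRED")):
        flags.append("urgency keyword in Subject")
    return flags

print(header_red_flags(RAW))
```

Production gateways layer SPF/DKIM/DMARC results on top of this; the point here is only that the anatomy above leaves machine-readable traces.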

7. Physical Social Engineering

7.1 Tailgating / Piggybacking

Technique: Following an authorized person through a secured door or checkpoint without presenting credentials.

Variants:

  • Hands-full approach: Carry boxes, coffee, equipment — target holds door out of politeness
  • Group entry: Join a cluster of employees returning from lunch
  • Distracted follow: Appear on a phone call while walking closely behind authorized person
  • Delivery pretext: Carry branded packages to a door, wait for someone to open

Defense: Mantraps, badge-required turnstiles, security culture training ("it's okay to challenge")

7.2 Badge Cloning

RFID/NFC badge cloning:

  • Proxmark3 or similar devices read HID/MIFARE badges at short range (1-10cm standard, extended with custom antennas)
  • Badge data duplicated to blank cards in seconds
  • Long-range readers concealed in bags for covert capture
  • Some badges use only facility code + card number (no crypto) — trivially cloned

Countermeasures: Modern smart cards with mutual authentication (SEOS, DESFire EV3), badge + PIN, mobile credentials

7.3 Dumpster Diving

Target materials:

  • Printed documents (org charts, internal memos, financial reports)
  • Post-it notes with passwords
  • Old hard drives and USB media
  • Discarded badges and access cards
  • Shipping labels revealing vendor relationships
  • Building plans and network diagrams

Defense: Shredding policy (cross-cut minimum), locked dumpsters, secure media destruction, clean desk policy

7.4 Impersonation for Building Access

The 2016 Ortho Pest Control case: a hazmat suit and fake ID enabled building access and laptop theft. The Lukas Yla example (2016): researching legitimate delivery services, creating matching uniforms, and professional execution enabled infiltration of major tech companies. [CONFIRMED — SEORG]

Effective impersonation personas:

  • Pest control / Exterminators (people evacuate, leaving equipment unattended)
  • HVAC / Maintenance technicians
  • Fire inspectors / Building code inspectors
  • IT support / Cable technicians
  • Delivery services (FedEx, UPS, food delivery)

Key principle: Uniforms + confidence + clipboard = access. Rarely challenged. [CONFIRMED — SEORG]

7.5 USB Drop Attacks

  • Branded USB drives left in parking lots, lobbies, break rooms
  • Labels that trigger curiosity: "Executive Salary Review Q4", "Layoff List 2026"
  • USB Rubber Ducky / Bash Bunny for keystroke injection on plug-in
  • O.MG cables that appear as normal charging cables
  • Autorun payloads for data exfiltration

MITRE: T1091 (Replication Through Removable Media), T1204.002 (User Execution: Malicious File)


8. Watering Hole Attacks

8.1 MITRE ATT&CK T1189: Drive-by Compromise

Adversaries compromise legitimate websites frequented by target organizations to deliver malware. [CONFIRMED — MITRE]

8.2 Attack Chain

1. RECON    — Identify websites frequented by target org/industry
2. COMPROMISE — Inject malicious code into trusted website (XSS, supply chain)
3. PROFILE  — Browser/plugin fingerprinting to identify vulnerable visitors
4. EXPLOIT  — Deliver browser/plugin exploit to vulnerable visitors
5. PAYLOAD  — Execute code, establish persistence, begin lateral movement

8.3 Threat Group Usage

Group | Technique | Target
APT28 | Custom exploit kits, reflected XSS | Government, military
APT32 | Compromised legitimate sites | Southeast Asian targets
APT37 | JavaScript profilers (RICECURRY), South Korean site compromise | South Korean organizations
Lazarus Group | Compromised legitimate websites | Financial sector, defense
Dragonfly | Custom exploit kits | Energy sector

[CONFIRMED — MITRE T1189]

8.4 Social Engineering Element

The "social" component of watering holes:

  • Target trusts the website (it's their industry conference, trade publication, vendor portal)
  • No phishing email to scrutinize — the attack is invisible to the user
  • Often combined with SE to drive targets to specific compromised pages ("Check out this article about our industry")
  • Increasingly used with credential harvesting (fake login prompts on trusted sites)

8.5 Mitigations

  • Browser sandboxing and isolation
  • Exploit protection (Windows Defender Exploit Guard, EMET)
  • Script blocking extensions (uBlock Origin, NoScript)
  • Disable browser push notifications
  • Keep browsers and plugins updated
  • Network-level anomaly detection for unexpected external resource loads

[CONFIRMED — MITRE T1189]
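The last mitigation, detecting unexpected external resource loads, can be approximated at the content level by auditing which hosts a page pulls scripts and iframes from. A toy sketch with the stdlib `html.parser`; all hostnames and the allowlist are hypothetical:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this page is expected to load from
ALLOWED_HOSTS = {"trusted-trade-pub.example", "cdn.trusted-trade-pub.example"}

class ResourceAudit(HTMLParser):
    """Collect script/iframe sources loading from hosts outside the allowlist."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe"):
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).hostname
                if host and host not in ALLOWED_HOSTS:
                    self.suspicious.append(src)

PAGE = """
<html><body>
<script src="https://cdn.trusted-trade-pub.example/app.js"></script>
<script src="https://profiler.attacker.example/fp.js"></script>
</body></html>
"""

audit = ResourceAudit()
audit.feed(PAGE)
print(audit.suspicious)
```

An injected profiler like RICECURRY shows up exactly this way: one unexpected script host on an otherwise familiar page. Real deployments do the equivalent at the proxy or with a Content-Security-Policy rather than by parsing pages after the fact.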


9. Callback Phishing (BazarCall)

9.1 Concept

Callback phishing (pioneered by BazarCall/BazaCall campaigns) inverts the traditional phishing model: instead of embedding a malicious link, the email contains only a phone number. The victim initiates the call, which feels more natural and bypasses email link scanning.

Royal ransomware is documented as spreading through "phishing campaigns including callback phishing." [CONFIRMED — MITRE T1566]

9.2 Attack Flow

1. LURE EMAIL    — Invoice, subscription renewal, or charge notification
                   "Your subscription to [Service] has been renewed for $499.99"
                   "Call 1-800-XXX-XXXX to cancel within 24 hours"

2. VICTIM CALLS  — Target calls the number (they initiated, feels safe)

3. VISHING       — Operator guides victim through "cancellation process"
                   "To verify your account, please visit [URL]"
                   "I'll need to remote into your machine to process the refund"

4. PAYLOAD       — Remote access tool installed (AnyDesk, TeamViewer, ConnectWise)
                   OR credential harvesting via guided navigation to fake portal
                   OR malware downloaded under pretext of "cancellation form"

5. EXPLOITATION  — Ransomware deployment, data exfiltration, lateral movement

9.3 Why Callback Phishing is Effective

  • No malicious links or attachments — bypasses email security gateways
  • Victim initiates contact — psychologically feels safer than inbound
  • Real-time social engineering — operator adapts to target's responses
  • Urgency without suspicion — financial charge creates natural motivation to call
  • Legitimate remote tools — AnyDesk/TeamViewer won't trigger AV alerts

9.4 Defense

  • Train users to independently verify charges (log into account directly, not via email)
  • Monitor for unauthorized remote access tool installations
  • Block known callback phishing phone numbers (rapidly rotating)
  • Email filtering rules for common callback phishing patterns (invoice + phone number + no links)
  • EDR monitoring for remote access tool deployment following user-initiated actions
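The email-filtering rule suggested above (billing language plus a phone number plus no links at all) is straightforward to prototype. A rough heuristic sketch; the keyword list and phone-number pattern are illustrative, and a production rule would need tuning for false positives:

```python
import re

# North-American-style phone numbers, e.g. 1-800-555-0199 (illustrative pattern)
PHONE_RE = re.compile(r"\b1?[-. ]?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")
URL_RE = re.compile(r"https?://", re.IGNORECASE)
LURE_WORDS = ("invoice", "subscription", "renewal", "charge", "refund")

def looks_like_callback_phish(body: str) -> bool:
    """Heuristic: billing language + a phone number + no links whatsoever."""
    has_lure = any(w in body.lower() for w in LURE_WORDS)
    has_phone = bool(PHONE_RE.search(body))
    has_link = bool(URL_RE.search(body))
    return has_lure and has_phone and not has_link
```

The "no links" condition is what makes this pattern distinctive: legitimate billing mail almost always carries a link, while callback lures deliberately carry none to slip past URL scanners.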

10. QR Code Phishing (Quishing)

10.1 Attack Vectors

QR codes are opaque to humans — you cannot visually inspect the destination URL before scanning.

Delivery methods:

  • Email: QR code embedded in email body bypasses URL scanning (it's an image, not a link)
  • Physical: Fake QR codes placed over legitimate ones (parking meters, restaurant menus, conference badges)
  • Documents: QR codes in printed materials, invoices, or compliance documents
  • Stickers: Malicious QR stickers placed on public surfaces, ATMs, transit systems

10.2 Quishing Campaign Design

PRETEXT: "Scan to enable MFA on your account" / "Scan for Wi-Fi access"
         "Updated parking payment system — scan to pay"
         "Scan to access the conference materials"

QR CODE → Credential harvesting page (mobile-optimized)
       → Malware download (mobile payload)
       → OAuth consent phishing
       → Attacker-controlled Wi-Fi captive portal

10.3 Why Quishing is Growing

  • QR adoption exploded post-COVID (menus, payments, check-ins)
  • Most email security gateways don't decode QR codes in images
  • Mobile devices have weaker security controls than desktops
  • Mobile browsers show truncated URLs, hiding malicious domains
  • Users are conditioned to "just scan" without questioning

10.4 SET Integration

The Social Engineer Toolkit (SET) includes a QR code generator attack vector that creates QR codes pointing to attacker-controlled infrastructure. [CONFIRMED — TrustedSec SET]

10.5 Defense

  • Mobile device management (MDM) with URL filtering
  • Security awareness training specifically covering QR risks
  • QR scanning apps that preview URLs before opening
  • Physical QR code tamper detection (does the sticker look overlaid?)
  • Email security gateways with QR code image analysis capability
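The "preview URLs before opening" control can be approximated in code once the QR payload has been decoded. A minimal sketch, assuming the destination is a URL string; the checks are illustrative heuristics, not a reputation service:

```python
from urllib.parse import urlsplit
import ipaddress

def qr_url_warnings(url: str) -> list:
    """Return human-readable warnings for a decoded QR destination.
    These checks are illustrative heuristics, not a complete verdict."""
    warnings = []
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme != "https":
        warnings.append("not HTTPS")
    if "@" in parts.netloc:
        warnings.append("userinfo before host (lookalike trick)")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode hostname (possible homoglyph domain)")
    try:
        ipaddress.ip_address(host)
        warnings.append("raw IP address instead of a domain")
    except ValueError:
        pass
    return warnings
```

An empty result means only that none of these specific red flags fired, not that the destination is safe.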

11. Deepfake Threats in Social Engineering

11.1 Current State of the Threat

The 2019 CEO voice deepfake case demonstrated real-world impact: an AI-synthesized voice impersonating the chief executive of a German parent company convinced a UK subsidiary's CEO to transfer approximately $243,000 over the phone. [CONFIRMED — SEORG]

11.2 Deepfake Attack Taxonomy

Audio Deepfakes (Voice Cloning)

  • Training data needed: As little as 3-5 seconds of audio (modern models)
  • Sources: Earnings calls, conference talks, YouTube, voicemail greetings, social media
  • Delivery: Real-time voice conversion during vishing calls
  • Tools: Open-source voice cloning (RVC, Tortoise-TTS, VALL-E derivatives)

Video Deepfakes

  • Real-time face swap: Used in video calls to impersonate executives
  • Pre-recorded: Fake video messages from leadership
  • 2024 Hong Kong case: Finance worker transferred $25M after a video call with "deepfake CFO" and multiple colleagues — all AI-generated [EXTERNAL — widely reported]

Combined Audio/Video

  • Full synthetic presence in video conferences
  • Fake "CEO town hall" recordings with malicious instructions
  • Synthetic media for social media account impersonation

11.3 Deepfake-Enhanced SE Scenarios

Scenario | Risk Level | Feasibility
Voice clone of CEO requesting wire transfer | Critical | High — proven in the wild
Video deepfake in Zoom/Teams call | Critical | High — demonstrated in 2024
Fake voicemail from manager | High | Very High — trivial with modern tools
Synthetic video for identity verification (KYC bypass) | Critical | Medium — improving rapidly
Fake audio evidence for social manipulation | High | Very High

11.4 Detection and Defense

Technical indicators:

  • Audio artifacts: unnatural pauses, breathing patterns, background noise consistency
  • Video artifacts: lip sync issues, blinking anomalies, lighting inconsistencies, edge artifacts
  • Metadata analysis: generation artifacts in file metadata

Process controls:

  • Out-of-band verification for any high-value request (call back on known number)
  • Code word / passphrase systems for executive authorization
  • Multi-party approval for financial transactions regardless of requester
  • Established escalation procedures that don't rely on voice/video alone
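The process controls above can be encoded as a policy check so that no voice or video channel alone authorizes a high-value transfer. A sketch under assumed policy values (the $10,000 threshold and two-approver rule are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    verified_out_of_band: bool  # caller was verified via callback on a known-good number
    approvals: int              # independent approvers, excluding the requester

# Illustrative policy values -- set per your organization's risk appetite.
HIGH_VALUE = 10_000
REQUIRED_APPROVALS = 2

def may_execute(req: TransferRequest) -> bool:
    """Out-of-band verification is always required; high-value transfers
    additionally require multi-party approval regardless of who asked."""
    if req.amount < HIGH_VALUE:
        return req.verified_out_of_band
    return req.verified_out_of_band and req.approvals >= REQUIRED_APPROVALS
```

The point of hard-coding the rule is that a convincing deepfake cannot talk the process out of existence.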

12. Tooling and Frameworks

12.1 Social Engineer Toolkit (SET)

Repository: github.com/trustedsec/social-engineer-toolkit
Author: TrustedSec (David Kennedy)
Stars: 14.7k | Language: Python
License: open-source penetration testing framework [CONFIRMED]

Key Modules

Module | Purpose
Spear-Phishing Attack Vector | Craft targeted phishing emails with malicious payloads
Website Attack Vector | Clone websites for credential harvesting, Java applet attacks, HTA attacks
Infectious Media Generator | Create autorun payloads for USB/CD/DVD
Mass Mailer | High-volume email campaigns
SMS Spoofing | Send spoofed SMS messages
Wireless Access Point | Create rogue wireless access points
QR Code Generator | Generate QR codes pointing to attack infrastructure
PowerShell Attack Vectors | PowerShell-based payload delivery
Third-Party Modules | Extensible module system

Installation: Linux (native), macOS, Windows via WSL/WSL2 [CONFIRMED]

12.2 GoPhish

Repository: github.com/gophish/gophish
Language: Go (≈25% JavaScript for the frontend)
License: MIT [CONFIRMED]

Architecture

GoPhish Server
├── Campaign Management    — Multi-campaign orchestration
├── Email Templates        — HTML email builder with tracking
├── Landing Pages          — Clone sites or build custom pages
├── User Groups            — Target list management
├── Sending Profiles       — SMTP configuration
├── Webhooks               — Event-driven notifications
├── Reporting Dashboard    — Campaign analytics and metrics
└── API                    — Full REST API for automation
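The REST API makes campaign automation straightforward. A minimal sketch of listing campaigns via GET /api/campaigns/, assuming the default admin port 3333 and a placeholder API key issued in the GoPhish UI:

```python
import json
import urllib.request

API_BASE = "https://localhost:3333"  # GoPhish default admin port; deployment-specific
API_KEY = "YOUR-API-KEY"             # placeholder -- issued per-user in the GoPhish UI

def build_campaigns_request() -> urllib.request.Request:
    """Build (but do not send) a GET /api/campaigns/ request
    authenticated with the Authorization header."""
    return urllib.request.Request(
        f"{API_BASE}/api/campaigns/",
        headers={"Authorization": API_KEY},
    )

def list_campaigns() -> list:
    """Send the request and decode the JSON array of campaigns.
    Note: the admin server ships with a self-signed certificate,
    so a custom SSL context may be needed in practice."""
    with urllib.request.urlopen(build_campaigns_request()) as resp:
        return json.loads(resp.read())
```

The same pattern extends to creating groups, templates, and campaigns via POST; the official Python client (`gophish` on PyPI) wraps these endpoints if you prefer not to hand-roll requests.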

Campaign Design Best Practices

  1. Baseline assessment: Run initial campaign before any training
  2. Realistic pretexts: Use scenarios relevant to the organization (not generic "Nigerian prince")
  3. Progressive difficulty: Start with obvious indicators, then increase sophistication
  4. Immediate training: Redirect users who click to educational content
  5. Metrics tracking: Click rate, credential submission rate, report rate
  6. No punishment: Awareness campaigns should educate, not shame
  7. Frequency: Quarterly minimum, monthly preferred
  8. Variety: Rotate vectors (email, SMS, voice) and pretexts

Deployment Options

  • Binary releases (Windows, Mac, Linux)
  • Docker container
  • Source compilation (Go v1.10+)

12.3 CredSniper

Repository: github.com/Raikia/CredSniper (Note: repository may be archived/unavailable)

Capabilities:

  • Real-time credential phishing with 2FA token capture
  • Modular design supporting multiple platforms (Google, Facebook)
  • API for integration with other tools
  • Captures credentials AND one-time passwords in real-time before they expire

12.4 Additional Tools

Tool | Purpose | Note
Evilginx2 | Reverse proxy phishing with session token capture | Bypasses MFA
Modlishka | Reverse proxy for real-time credential/token capture | Automated 2FA bypass
King Phisher | Phishing campaign toolkit | Campaign management
Gophish | Open-source phishing framework | Industry standard
Lucy | Commercial security awareness platform | Enterprise-grade
Cofense (PhishMe) | Commercial phishing simulation | Enterprise SAT

13. MITRE ATT&CK Mapping

13.1 Reconnaissance (TA0043)

Technique | ID | Description
Phishing for Information | T1598 | SE campaigns to extract data (not deploy malware)
↳ Spearphishing Service | T1598.001 | Via third-party services (LinkedIn, etc.)
↳ Spearphishing Attachment | T1598.002 | Malicious attachment to extract info
↳ Spearphishing Link | T1598.003 | Malicious link to harvest credentials
↳ Spearphishing Voice | T1598.004 | Voice-based phishing for information

Mitigations: SPF/DKIM/DMARC (M1054), User Training (M1017) [CONFIRMED — MITRE]

13.2 Initial Access (TA0001)

Technique | ID | Description
Phishing | T1566 | SE to gain initial system access
↳ Spearphishing Attachment | T1566.001 | Malicious file attachments
↳ Spearphishing Link | T1566.002 | Malicious URLs
↳ Spearphishing via Service | T1566.003 | Via social media/messaging
↳ Spearphishing Voice | T1566.004 | Voice phishing for access
Drive-by Compromise | T1189 | Watering hole attacks

Mitigations: AV/AM (M1049), Audit (M1047), NIPS (M1031), Web Content Restriction (M1021), Anti-spoofing (M1054), User Training (M1017) [CONFIRMED — MITRE]

13.3 Execution (TA0002)

Technique | ID | Description
User Execution | T1204 | Tricking users into executing malicious content
↳ Malicious Link | T1204.001 | User clicks malicious URL
↳ Malicious File | T1204.002 | User opens malicious file
↳ Malicious Image | T1204.003 | User deploys malicious container image
↳ Malicious Copy and Paste | T1204.004 | User pastes malicious code (clipboard hijacking)
↳ Malicious Library | T1204.005 | User installs malicious package/library

Key procedure examples:

  • LAPSUS$: Recruited employees/contractors to provide credentials and approve MFA prompts, or install remote management software [CONFIRMED — MITRE]
  • Lumma Stealer: Fake CAPTCHA instructing victims to paste Base64-encoded PowerShell into Windows Run [CONFIRMED — MITRE]
  • Scattered Spider: Impersonated IT/helpdesk to instruct victims to execute remote access tools [CONFIRMED — MITRE]
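Defenders can decode the Lumma-style encoded payload for inspection. A minimal detection sketch; the regex covers only the common `-enc`/`-EncodedCommand` flag spellings and is illustrative, not exhaustive:

```python
import base64
import re
from typing import Optional

# Matches a PowerShell invocation carrying a Base64-encoded command.
# Illustrative: real campaigns vary flag spellings, casing, and obfuscation.
ENC_FLAG = re.compile(r"(?i)\bpowershell(\.exe)?\b.*\s-(enc|encodedcommand)\s+([A-Za-z0-9+/=]+)")

def decode_encoded_powershell(cmdline: str) -> Optional[str]:
    """If a command line uses PowerShell's -EncodedCommand, return the
    decoded script (UTF-16LE, per PowerShell's encoding) for analyst review."""
    m = ENC_FLAG.search(cmdline)
    if not m:
        return None
    return base64.b64decode(m.group(3)).decode("utf-16-le", errors="replace")
```

Hooked into process-creation telemetry, this turns the fake-CAPTCHA paste chain into a readable script the SOC can triage.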

13.4 Threat Actor SE Profiles

Actor | Primary SE Vector | Notable Technique
APT28 (Fancy Bear) | Spearphishing, watering holes | Custom exploit kits, credential harvesting
Kimsuky | Spearphishing emails | Tailored emails, contact list harvesting
Scattered Spider | Vishing, smishing, MFA fatigue | Helpdesk impersonation, SIM swapping
LAPSUS$ | Insider recruitment, MFA prompt bombing | Employee/contractor bribery
Lazarus Group | Spearphishing, watering holes | Fake job offers, supply chain
APT32 (OceanLotus) | Watering holes, spearphishing | Compromised legitimate sites

14. Defense: Security Awareness Program Design

14.1 Program Architecture

SECURITY AWARENESS PROGRAM
├── Governance
│   ├── Executive sponsorship
│   ├── Policy framework
│   ├── Budget and resources
│   └── Metrics and reporting
├── Training Delivery
│   ├── Onboarding training (mandatory)
│   ├── Annual refresher (mandatory)
│   ├── Role-specific training (targeted)
│   ├── Just-in-time training (post-failure)
│   └── Continuous micro-learning
├── Phishing Simulation
│   ├── Baseline assessment
│   ├── Regular campaigns (monthly+)
│   ├── Progressive difficulty
│   ├── Multi-vector (email, SMS, voice)
│   └── Metrics tracking
├── Culture Building
│   ├── Reporting mechanisms (easy, rewarded)
│   ├── Security champions program
│   ├── Positive reinforcement (not punishment)
│   └── Leadership modeling
└── Measurement
    ├── Phish click rate (trending down)
    ├── Report rate (trending up)
    ├── Time to report (trending down)
    ├── Training completion rate
    └── Incident reduction correlation

14.2 Email Security Technical Controls

SPF (Sender Policy Framework)

  • DNS TXT record specifying authorized mail servers
  • Prevents simple domain spoofing
  • Limitation: does not protect against display name spoofing or cousin domains

DKIM (DomainKeys Identified Mail)

  • Cryptographic signature on email headers and body
  • Validates message integrity and sender domain
  • Limitation: does not prevent spoofing of the visible "From" field

DMARC (Domain-based Message Authentication, Reporting & Conformance)

  • Policy layer on top of SPF + DKIM
  • Specifies handling of failed authentication (none → quarantine → reject)
  • Provides reporting on authentication failures
  • Critical: Set policy to p=reject for maximum protection
_dmarc.example.com TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"

[CONFIRMED — MITRE T1598 Mitigations, M1054]
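A record like the one above can be checked programmatically. A minimal sketch that parses DMARC tags and reports whether the policy is enforcing (quarantine or reject); fetching the TXT record itself is left to your DNS tooling:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into tag/value pairs (tags are ';'-separated)."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dmarc_is_enforcing(record: str) -> bool:
    """True only for a valid DMARC record whose policy actually acts on failures."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")
```

A `p=none` record still yields reporting but blocks nothing, which is why the check treats it as non-enforcing.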

Additional Email Controls

  • BIMI (Brand Indicators for Message Identification): Verified sender logos
  • MTA-STS: Enforces TLS for mail transport
  • Email gateway filtering: Sandboxing, URL rewriting, attachment analysis
  • Banner warnings: External email banners, first-time sender alerts
  • Link protection: URL rewriting and time-of-click analysis

14.3 Phishing Simulation Best Practices

Campaign Design Principles

  1. Realistic and relevant: Pretexts should match what real attackers would use against your org
  2. Progressive difficulty model:
    • Level 1: Generic mass phishing (obvious indicators)
    • Level 2: Brand impersonation with minor flaws
    • Level 3: Spear phishing with OSINT personalization
    • Level 4: Thread hijacking / compromised vendor simulation
    • Level 5: Multi-vector (email + vishing followup)
  3. Immediate education: Users who fail see educational content explaining what they missed
  4. Positive metrics: Track and celebrate reporting rates, not just failure rates
  5. No gotcha culture: Shaming reduces reporting; education increases resilience
  6. Diverse pretexts: Rotate through authority, urgency, curiosity, fear, reward
  7. Timing variation: Different days, times, contexts to avoid pattern recognition

Key Metrics

Metric | Target | Meaning
Click rate | <5% | Percentage who click malicious links
Credential submission rate | <2% | Percentage who enter credentials
Report rate | >70% | Percentage who report to security team
Time to first report | <5 min | Speed of threat identification
Repeat failure rate | <3% | Users who fail multiple consecutive tests
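The rate metrics above are straightforward to compute from campaign results. A sketch, with the target thresholds from the table hard-coded for comparison (the thresholds are the document's suggested targets, not an industry standard):

```python
# Suggested targets from the metrics table above.
TARGETS = {"click_rate": 5.0, "submission_rate": 2.0, "report_rate": 70.0}

def campaign_metrics(sent: int, clicked: int, submitted: int, reported: int) -> dict:
    """Core simulation metrics as percentages of emails sent."""
    if sent <= 0:
        raise ValueError("sent must be positive")
    def pct(n: int) -> float:
        return round(100.0 * n / sent, 1)
    return {"click_rate": pct(clicked),
            "submission_rate": pct(submitted),
            "report_rate": pct(reported)}

def meets_targets(m: dict) -> bool:
    """Click and submission rates must be under target; report rate above it."""
    return (m["click_rate"] < TARGETS["click_rate"]
            and m["submission_rate"] < TARGETS["submission_rate"]
            and m["report_rate"] > TARGETS["report_rate"])
```

Trending these values across campaigns matters more than any single run.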

14.4 Vishing Simulation Program

  1. Scope: Define which departments, what information is in-scope
  2. Scripts: Develop realistic vishing scripts based on threat intelligence
  3. Operators: Use trained operators (internal or contracted)
  4. Recording: Record calls (with legal approval) for training material
  5. Metrics: Track compliance rate, information disclosed, verification attempts
  6. Training: Provide immediate feedback and coaching to those who fail
  7. Policy: Establish and communicate verification procedures for phone requests

14.5 User Reporting Infrastructure

Requirements:

  • One-click reporting button in email client (Outlook plugin, Gmail add-on)
  • Clear SMS forwarding process (forward to designated short code)
  • Phone reporting hotline for vishing attempts
  • Positive feedback loop: "Thank you for reporting — here's what we found"
  • SOC integration: reported messages feed directly into analysis queue
  • Automated triage: sandbox URLs and attachments from reported messages
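The automated-triage step can start with simple URL extraction from reported messages. A minimal sketch using the standard library email parser; the URL regex is an illustrative approximation, and a real pipeline would also defang and deduplicate across reports:

```python
import re
from email import message_from_string

# Rough URL matcher for triage purposes -- not a full RFC 3986 parser.
URL_RE = re.compile(r"""https?://[^\s<>"']+""")

def urls_for_sandbox(raw_message: str) -> list:
    """Extract URLs from a reported message's plain-text parts
    so they can be queued for automated sandboxing."""
    msg = message_from_string(raw_message)
    urls = set()
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True) or b""
            urls.update(URL_RE.findall(payload.decode("utf-8", errors="replace")))
    return sorted(urls)
```

HTML parts and attachments need their own extraction pass; this covers only the simplest reporting path.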

14.6 Role-Specific Training

Role | Additional Training Focus
Executives | Whaling, BEC, deepfake threats, wire fraud
Finance/AP | Invoice fraud, payment redirection, W-2 scams
HR | Fake job applicants, resume malware, employee data requests
Help Desk | Vishing, password reset social engineering, MFA bypass
Developers | Supply chain attacks, typosquatting packages, fake job offers
Reception/Facilities | Physical SE, tailgating, impersonation, badge requests
Legal/Compliance | Pretexting as regulators, fake subpoenas, data disclosure requests

14.7 Incident Response for Social Engineering

SOCIAL ENGINEERING INCIDENT RUNBOOK

TRIAGE (0-15 min)
├── Identify attack vector (phishing/vishing/smishing/physical)
├── Determine scope (who was targeted, who interacted)
├── Assess credential compromise risk
└── Classify severity (credentials entered? malware executed? access granted?)

CONTAINMENT (15-60 min)
├── Reset compromised credentials immediately
├── Revoke active sessions (all devices)
├── Block malicious URLs/domains/IPs at gateway
├── Quarantine related emails across org (clawback)
├── Disable compromised accounts if needed
└── Alert targeted users who haven't interacted yet

INVESTIGATION (1-4 hours)
├── Analyze phishing infrastructure (domain, hosting, kit)
├── Check for lateral movement from compromised accounts
├── Review email gateway logs for campaign scope
├── Search for related IOCs across environment
├── Determine if MFA was bypassed (session tokens?)
└── Timeline reconstruction

ERADICATION
├── Block all identified IOCs
├── Remove persistence (mail rules, OAuth apps, forwarding rules)
├── Patch exploited vulnerabilities
└── Update email filtering rules

RECOVERY
├── Credential reset (forced, not optional)
├── MFA re-enrollment if bypassed
├── User notification and education
└── Monitor for re-compromise (30 days)

POST-INCIDENT
├── After-action report
├── Update detection rules
├── Update phishing simulation scenarios
├── Share anonymized IOCs with industry peers
└── Adjust training based on failure patterns
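The severity question in the triage step can be mapped to a label for routing. A sketch; the labels and their precedence are an illustrative policy, not a standard:

```python
def classify_severity(credentials_entered: bool, malware_executed: bool,
                      access_granted: bool, interacted: bool) -> str:
    """Map the triage questions from the runbook to a severity label.
    Precedence is illustrative: confirmed access/execution outranks
    credential exposure, which outranks mere interaction."""
    if access_granted or malware_executed:
        return "critical"
    if credentials_entered:
        return "high"
    if interacted:
        return "medium"
    return "low"
```

Encoding the decision keeps triage consistent across analysts and lets the SOC route containment actions automatically.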

Appendix A: Quick Reference — SE Attack Decision Matrix

Scenario | Recommended Vector | Influence Principle | MITRE Technique
Credential harvest at scale | Email phishing + clone site | Urgency + Authority | T1566.002
Target specific executive | Spear phishing or vishing | Authority + Scarcity | T1566.001, T1598.004
Bypass MFA | Callback phishing + remote access | Helpfulness + Authority | T1566.004, T1204
Gain physical access | Impersonation + tailgating | Authority + Liking | T1200
Harvest info without malware | Vishing + elicitation | Reciprocity + Flattery | T1598.004
Mass credential harvest | Smishing + mobile phishing page | Urgency + Fear | T1566.002, T1598.003
Bypass email security | QR code phishing | Curiosity + Authority | T1566.002
Target mobile users | Smishing | Urgency + Scarcity | T1566.002
Long-term intelligence | Watering hole | Trust transfer | T1189
High-value wire fraud | BEC + deepfake voice | Authority + Urgency | T1566, T1598.004

Appendix B: Red Team SE Engagement Checklist

PRE-ENGAGEMENT
[ ] Scope and rules of engagement signed
[ ] Legal review completed
[ ] Target list approved
[ ] Communication channels established with client POC
[ ] Data handling procedures agreed (credentials captured, etc.)
[ ] Emergency stop procedure defined

OSINT PHASE
[ ] Organization mapping (org chart, key personnel)
[ ] Technology stack identification
[ ] Email format discovery and verification
[ ] Social media profiling of targets
[ ] Vendor/partner identification
[ ] Physical location reconnaissance
[ ] Data breach exposure check

PRETEXT DEVELOPMENT
[ ] Persona created and documented
[ ] Infrastructure staged (domains, email, phones)
[ ] Props prepared (if physical)
[ ] Scripts developed and rehearsed
[ ] Verification resilience tested
[ ] Escalation paths planned

EXECUTION
[ ] Campaign launched per approved schedule
[ ] Real-time monitoring active
[ ] Evidence collection ongoing (screenshots, recordings)
[ ] Scope boundaries respected
[ ] Emergency stop ready

REPORTING
[ ] Timeline of all activities
[ ] Success/failure metrics
[ ] Evidence catalog
[ ] Vulnerability findings (human and technical)
[ ] Recommendations (training, process, technical controls)
[ ] Executive summary for leadership

End of module. This material is for authorized security training, assessment, and defense purposes.
