blacktemple.net · Cybersecurity News & Analysis · © 2026

Three New Side-Channel Attacks Expose LLM Privacy Through Network Metadata

February 17, 2026 · Privacy & Surveillance · 2 min read · Severity: medium

Originally reported by Schneier on Security

#llm-security #side-channel-attacks #network-surveillance #timing-attacks #speculative-decoding #metadata-leakage

TL;DR

Researchers demonstrate three distinct side-channel attacks against LLMs that exploit timing patterns and network metadata to infer conversation topics and leak sensitive data.

Why medium?

Three research papers demonstrate side-channel attacks that infer LLM conversation topics from network metadata with up to 98% accuracy. The privacy implications are significant for all LLM users, but exploitation requires network-level access to the victim's traffic.

The Attack Vectors

Researchers have identified three distinct side-channel attack classes that compromise LLM privacy despite TLS encryption protecting message content.

Remote Timing Attacks on Efficient Inference

The first attack exploits timing variations introduced by efficiency optimizations like speculative sampling and parallel decoding. By monitoring encrypted network traffic between users and remote LLMs, attackers can:

  • Classify conversation topics (medical advice vs. coding assistance) with 90%+ precision on open-source systems
  • Distinguish between specific messages on production systems including OpenAI's ChatGPT and Anthropic's Claude
  • Extract PII (phone numbers, credit card numbers) through active boosting attacks on open-source implementations
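The passive-observer mechanics can be sketched as follows. This is an illustrative toy, not the researchers' pipeline (which trains machine-learning classifiers on real encrypted traffic); the feature choice, centroid values, and topic labels are all invented for the example. The key point is that the observer needs only per-packet timestamps and ciphertext sizes:

```python
# Toy sketch of metadata fingerprinting: an on-path observer sees only
# (timestamp, ciphertext_size) pairs for a streamed LLM response, yet those
# expose per-token timing and length patterns that correlate with topic.

def trace_features(trace):
    """Summarize an encrypted packet trace as (mean inter-arrival gap, mean size)."""
    times = [t for t, _ in trace]
    sizes = [s for _, s in trace]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return (sum(gaps) / len(gaps), sum(sizes) / len(sizes))

def classify(trace, centroids):
    """Nearest-centroid topic guess from timing/size features alone."""
    f = trace_features(trace)
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(f, centroids[label])),
    )

# Hypothetical centroids "learned" from labelled traces of two topics:
centroids = {
    "medical": (0.08, 60.0),  # slower generation, longer tokens (assumed)
    "coding":  (0.03, 30.0),  # faster generation, shorter tokens (assumed)
}

# One observed encrypted trace: (seconds, bytes) per packet.
trace = [(0.00, 58), (0.09, 62), (0.17, 61), (0.26, 59)]
print(classify(trace, centroids))  # -> medical
```

The real attacks replace the nearest-centroid step with trained classifiers, but the input is the same: metadata that TLS does not hide.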

Speculative Decoding Side Channels

The second attack targets speculative decoding mechanisms that generate and verify multiple candidate tokens in parallel. Researchers demonstrated that input-dependent patterns of correct and incorrect speculations leak through:

  • Per-iteration token counts
  • Packet size variations
  • Timing patterns

Testing across four speculative-decoding schemes (REST, LADE, BiLD, EAGLE) using vLLM serving frameworks, the attack achieved:

  • Over 75% query fingerprinting accuracy at temperature 0.3
  • REST scheme vulnerability: 100% accuracy at low temperature, 99.6% at temperature 1.0
  • Confidential datastore content extraction at rates exceeding 25 tokens/second
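Why speculative decoding leaks can be shown with a toy model. Assume, as a simplification, that the server streams one network chunk per verification step, carrying the accepted draft tokens plus one verified token; real attacks observe encrypted packet sizes rather than token counts, and the acceptance patterns below are invented:

```python
# Toy illustration of the speculative-decoding side channel: the number of
# draft tokens accepted per iteration is input-dependent, so the sequence of
# streamed chunk sizes acts as a fingerprint of the query.

def observed_chunks(acceptance_pattern):
    """Tokens emitted per iteration = accepted draft tokens + 1 verified token."""
    return [accepted + 1 for accepted in acceptance_pattern]

# Hypothetical per-iteration acceptance counts for two different prompts:
fingerprints = {
    "query_A": observed_chunks([3, 3, 0, 2]),  # drafts match the target often
    "query_B": observed_chunks([0, 1, 0, 0]),  # drafts rarely match
}

def identify(chunks):
    """Match an observed chunk-size sequence against known fingerprints."""
    return next(q for q, fp in fingerprints.items() if fp == chunks)

print(identify([4, 4, 1, 3]))  # -> query_A
```

Schemes like REST, whose drafts come from a fixed datastore, produce highly deterministic acceptance patterns, which is consistent with the near-100% fingerprinting accuracy reported above.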

Whisper Leak: Streaming Response Pattern Analysis

The third attack, dubbed "Whisper Leak," analyzes packet size and timing patterns in streaming responses to classify user prompt topics. Testing across 28 popular LLMs from major providers revealed:

  • Near-perfect classification performance (often >98% AUPRC)
  • High precision even at extreme class imbalance ratios (10,000:1 noise-to-target)
  • 100% precision identifying sensitive topics like "money laundering" while recovering 5-20% of target conversations
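To see why the imbalance result is striking, consider the base-rate arithmetic. At 10,000:1 noise-to-target, even a tiny false-positive rate normally destroys precision; the rates below are assumed for illustration, not taken from the papers:

```python
# Worked base-rate arithmetic: why high precision at 10,000:1 imbalance is hard.
target, noise = 1, 10_000        # target vs. noise conversations per 10,001
tpr, fpr = 0.10, 0.001           # assume: recover 10% of targets, flag 0.1% of noise

tp = target * tpr                # 0.1 expected true positives
fp = noise * fpr                 # 10 expected false positives
precision = tp / (tp + fp)
print(round(precision, 3))       # -> 0.01: ~99% of alerts would be noise
```

Achieving 100% precision at that ratio therefore requires a false-positive rate of effectively zero, which is what makes the Whisper Leak classifiers notable.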

Industry-Wide Implications

These vulnerabilities affect LLMs deployed across sensitive domains including healthcare, legal services, and confidential communications. The attacks pose particular risks for users under network surveillance by:

  • Internet Service Providers
  • Government entities
  • Local network adversaries

Mitigation Strategies

Researchers evaluated three defensive approaches:

  • Random padding: Adds noise to packet sizes
  • Token batching: Aggregates tokens across iterations
  • Packet injection: Introduces dummy traffic
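The random-padding defense can be sketched as follows; this is a minimal illustration, not any provider's implementation, and the bucket size and padding range are assumed parameters. Real deployments would pad at the TLS-record or HTTP-chunk layer:

```python
# Minimal sketch of the "random padding" mitigation: pad each streamed chunk
# to a randomized bucket boundary so ciphertext lengths no longer map 1:1 to
# token lengths.
import secrets

BUCKET = 32  # pad to multiples of 32 bytes (assumed parameter)

def pad_chunk(payload: bytes) -> bytes:
    """Round the chunk up to the bucket boundary, plus 0-3 extra random buckets."""
    target = -(-len(payload) // BUCKET) * BUCKET + secrets.randbelow(4) * BUCKET
    return payload + b"\x00" * (target - len(payload))

chunk = b"cardio"                # a 6-byte token chunk
padded = pad_chunk(chunk)
assert len(padded) % BUCKET == 0 and padded.startswith(chunk)
```

Note the trade-off the researchers observed: padding obscures sizes at a bandwidth cost but leaves timing intact, which is why it must be combined with token batching or dummy traffic to blunt all three attack vectors.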

While each mitigation reduces attack effectiveness, none provides complete protection against all three attack vectors. The research teams have engaged in responsible disclosure with LLM providers to implement initial countermeasures.

Sources

  • Side-Channel Attacks Against LLMs - Schneier on Security




