BT
Privacy ToolboxJournalProjectsResumeBookmarks
Feed
Privacy Toolbox
Journal
Projects
Resume
Bookmarks
Intel
NERF
The Vault
Threat Actors
Privacy Threats
Malware IoC
Dashboard
CVEs
Tags
Intel
NERFThe VaultThreat ActorsPrivacy ThreatsMalware IoCDashboardCVEsTags

Intel

  • Feed
  • Threat Actors
  • Privacy Threats
  • Dashboard
  • Privacy Toolbox
  • CVEs

Personal

  • Journal
  • Projects

Resources

  • Subscribe
  • Bookmarks
  • Developers
  • Tags
Cybersecurity News & Analysis
github
defconxt
•
© 2026
•
blacktemple.net
  1. Feed
  2. /Tags
  3. /llm-security

Tag: llm-security

mediumApplication Security

Google Details Continuous Defense Strategy Against AI Indirect Prompt Injection Attacks

Google's GenAI Security Team has published details on their layered defense strategy against indirect prompt injection attacks in Workspace with Gemini. The approach combines human and automated red-teaming, vulnerability rewards programs, synthetic data generation, and continuous model hardening to stay ahead of evolving AI threats.

Apr 3, 2026Google Online Security
prompt-injectionai-securitygoogle-workspace
🇺🇸Google
highPrivacy & Surveillance

LLM-Assisted Government Breach and Camera Hijacking in Modern Warfare

An attacker used Anthropic's Claude AI to breach Mexican government networks, while multiple nations have adopted surveillance camera hijacking as standard cyber warfare tactics. These incidents highlight the evolving intersection of AI capabilities and nation-state surveillance operations.

Mar 6, 2026Schneier on Security, WIRED Security
llm-securitygovernment-breachsurveillance-cameras
mediumPrivacy & Surveillance

Companies Deploy Hidden AI Prompt Injection to Bias Assistant Recommendations

Microsoft identified over 50 unique prompt injection attempts from 31 companies across 14 industries, using hidden instructions in "Summarize with AI" buttons to manipulate AI assistants into remembering them as trusted sources. This manipulation technique affects critical decision-making domains including health, finance, and security recommendations.

Mar 4, 2026Schneier on Security
ai-manipulationprompt-injectionbias-attacks
mediumPrivacy & Surveillance

Research Reveals Predictable Patterns in LLM-Generated Passwords

Research demonstrates that large language models produce passwords with strong predictable patterns, including character biases and repeated outputs. This weakness could pose security risks as AI agents increasingly operate autonomously and require authentication.

Feb 26, 2026Schneier on Security
llm-securitypassword-generationai-safety
🇺🇸Google
highVulnerabilities & Exploits

Multi-Stage Threats: Wormable Cryptominers, Steganographic Malware, and LLM Infrastructure Risks

Researchers documented a sophisticated wormable XMRig cryptomining campaign using BYOVD exploits and time-based logic bombs distributed through pirated software. Meanwhile, malicious JPEG files continue embedding payloads via steganography, and organizations deploying LLMs face increased risks from exposed endpoints and expanded API attack surfaces.

Feb 23, 2026The Hacker News, SANS ISC
cryptojackingxmrigsteganography
mediumMalware & Threats

Researchers Map Seven-Stage 'Promptware Kill Chain' for LLM-Based Malware

Security researchers propose a structured framework mapping how AI prompt injection attacks evolve into sophisticated malware campaigns across seven distinct stages.

Feb 17, 2026Schneier on Security
llm-securityprompt-injectionai-malware
🇺🇸Google
mediumPrivacy & Surveillance

Three New Side-Channel Attacks Expose LLM Privacy Through Network Metadata

Researchers demonstrate three distinct side-channel attacks against LLMs that exploit timing patterns and network metadata to infer conversation topics and leak sensitive data.

Feb 17, 2026Schneier on Security
llm-securityside-channel-attacksnetwork-surveillance
🇺🇸Near Intelligence