BT
Privacy ToolboxJournalProjectsResumeBookmarks
Feed
Privacy Toolbox
Journal
Projects
Resume
Bookmarks
Intel
NERF
The Vault
Threat Actors
Privacy Threats
Malware IoC
Dashboard
CVEs
Tags
Intel
NERFThe VaultThreat ActorsPrivacy ThreatsMalware IoCDashboardCVEsTags

Intel

  • Feed
  • Threat Actors
  • Privacy Threats
  • Dashboard
  • Privacy Toolbox
  • CVEs

Personal

  • Journal
  • Projects

Resources

  • Subscribe
  • Bookmarks
  • Developers
  • Tags
Cybersecurity News & Analysis
github
defconxt
•
© 2026
•
blacktemple.net
  1. Feed
  2. /Tags
  3. /prompt-injection

Tag: prompt-injection

mediumVulnerabilities & Exploits

CNCERT Warns of Security Flaws in OpenClaw AI Agent Platform

China's National Computer Network Emergency Response Technical Team has issued a security warning about OpenClaw, an open-source autonomous AI agent platform. The platform's weak default security configurations create vulnerabilities that could enable prompt injection attacks and data exfiltration.

Mar 15, 2026The Hacker News
ai-securityprompt-injectiondata-exfiltration
🇨🇳WeChat
highNation-State & APT

Tycoon 2FA Platform Disrupted, Russian Messaging App Attacks, AI Security Bypasses

International law enforcement disrupted the Tycoon 2FA phishing-as-a-service platform that targeted over 500,000 organizations monthly. Meanwhile, Dutch intelligence warns of Russian-linked actors targeting encrypted messaging apps used by government officials worldwide.

Mar 10, 2026Security Affairs, Palo Alto Unit 42
phishinglaw-enforcementrussia
🇺🇸Meta Platforms
mediumPrivacy & Surveillance

Companies Deploy Hidden AI Prompt Injection to Bias Assistant Recommendations

Microsoft identified over 50 unique prompt injection attempts from 31 companies across 14 industries, using hidden instructions in "Summarize with AI" buttons to manipulate AI assistants into remembering them as trusted sources. This manipulation technique affects critical decision-making domains including health, finance, and security recommendations.

Mar 4, 2026Schneier on Security
ai-manipulationprompt-injectionbias-attacks
mediumMalware & Threats

Researchers Map Seven-Stage 'Promptware Kill Chain' for LLM-Based Malware

Security researchers propose a structured framework mapping how AI prompt injection attacks evolve into sophisticated malware campaigns across seven distinct stages.

Feb 17, 2026Schneier on Security
llm-securityprompt-injectionai-malware
🇺🇸Google