Originally reported by Hackread
TL;DR
Security researchers have identified three significant threats this week: a fake ChatGPT ad blocker extension stealing user conversations, North Korean hackers using GitHub to target South Korean companies, and AI firm Mercor confirming a breach linked to a LiteLLM supply chain attack with 4TB of data allegedly stolen.
All three are active campaigns with confirmed data theft, ranging from a state-sponsored North Korean operation to a supply chain attack that allegedly exposed 4TB of an AI firm's data.
Security researchers have discovered a malicious Chrome browser extension masquerading as 'ChatGPT Ad Blocker' that was covertly harvesting ChatGPT conversations from unsuspecting users. The fraudulent extension promised an ad-free ChatGPT experience while secretly exfiltrating sensitive user data in the background.
The extension represents a growing trend of threat actors exploiting the popularity of AI tools to distribute malware and steal user data. Browser extensions remain a particularly effective attack vector due to their privileged access to web content and user interactions.
Users should verify extension authenticity through official channels and review permission requests carefully before installation.
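As a concrete illustration of that permission review, the sketch below parses a Chrome extension's manifest and flags permissions that grant broad access to page content or browsing data. The risk list is illustrative, not an official Chrome taxonomy, and the sample manifest is hypothetical.

```python
import json

# Illustrative set of permissions that grant broad access to web content
# or user data -- not an official Chrome risk classification.
RISKY_PERMISSIONS = {"tabs", "webRequest", "cookies", "history", "<all_urls>"}

def flag_risky_permissions(manifest_json: str) -> list[str]:
    """Return the subset of declared permissions considered high-risk."""
    manifest = json.loads(manifest_json)
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))
    return sorted(declared & RISKY_PERMISSIONS)

# Hypothetical manifest resembling what a fake ad blocker might declare.
sample = json.dumps({
    "name": "ChatGPT Ad Blocker",
    "permissions": ["storage", "tabs", "webRequest"],
    "host_permissions": ["<all_urls>"],
})
print(flag_risky_permissions(sample))  # → ['<all_urls>', 'tabs', 'webRequest']
```

An extension that legitimately blocks ads on one site has little reason to request `<all_urls>` host access plus `webRequest`; that combination is exactly what exfiltration also needs.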
FortiGuard Labs researchers have uncovered a sophisticated espionage campaign targeting South Korean companies, with North Korean threat actors abusing GitHub's infrastructure for command and control operations. The campaign demonstrates the advanced persistent threat capabilities of state-sponsored groups in leveraging legitimate platforms to evade detection.
The use of GitHub for malicious purposes highlights the challenge security teams face when threat actors abuse trusted development platforms. Organizations should implement enhanced monitoring for unusual GitHub activity and consider additional controls around code repository access.
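One simple form that monitoring can take is flagging hosts outside a known developer allowlist that fetch raw GitHub content, a pattern consistent with GitHub-based command and control. This is a minimal sketch: the log format, hostnames, and allowlist are assumptions, not a standard.

```python
import re

# Domains commonly used to serve raw file content from GitHub.
GITHUB_RAW = re.compile(
    r"raw\.githubusercontent\.com|gist\.github(?:usercontent)?\.com"
)

# Hypothetical allowlist of hosts expected to pull code from GitHub.
DEVELOPER_HOSTS = {"build-01", "dev-laptop-7"}

def suspicious_github_fetches(log_lines: list[str]) -> list[str]:
    """Flag log lines ("<host> <url>") where a non-developer host
    fetches raw GitHub content."""
    hits = []
    for line in log_lines:
        host, _, url = line.partition(" ")
        if GITHUB_RAW.search(url) and host not in DEVELOPER_HOSTS:
            hits.append(line)
    return hits

# Hypothetical proxy-log entries.
logs = [
    "build-01 https://raw.githubusercontent.com/org/repo/main/setup.sh",
    "hr-desktop-3 https://raw.githubusercontent.com/x/y/main/payload.bin",
    "hr-desktop-3 https://example.com/index.html",
]
print(suspicious_github_fetches(logs))
```

A build server pulling a setup script is expected; an HR workstation downloading a binary from a raw GitHub URL is the kind of anomaly worth investigating.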
AI company Mercor has confirmed a security breach linked to a LiteLLM supply chain attack, with threat actors claiming to have exfiltrated 4TB of sensitive data and gained access to internal systems. The incident underscores the growing risks facing AI companies and the critical importance of securing software supply chains.
Supply chain attacks targeting AI infrastructure represent an emerging threat vector as organizations increasingly rely on third-party AI libraries and frameworks. The breach demonstrates how vulnerabilities in upstream dependencies can cascade into significant data exposure incidents.
Organizations using AI tools should conduct thorough security assessments of their supply chain dependencies and monitor AI-related services for suspicious activity.
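A basic building block of such an assessment is detecting dependency drift: comparing installed package versions against a pinned allowlist. The sketch below shows the idea with hypothetical package names and versions; in practice the "installed" mapping could come from `importlib.metadata` or `pip freeze`.

```python
def audit_dependencies(pinned: dict[str, str],
                       installed: dict[str, str]) -> dict[str, str]:
    """Return packages whose installed version differs from the pin,
    or that are installed without being pinned at all."""
    findings = {}
    for name, version in installed.items():
        expected = pinned.get(name)
        if expected is None:
            findings[name] = f"unpinned (installed {version})"
        elif version != expected:
            findings[name] = f"expected {expected}, found {version}"
    return findings

# Hypothetical lockfile pins and installed state.
pinned = {"litellm": "1.40.0", "requests": "2.31.0"}
installed = {"litellm": "1.41.2", "requests": "2.31.0", "mystery-pkg": "0.0.1"}
print(audit_dependencies(pinned, installed))
```

Version pinning alone does not stop a compromised release that was pinned in good faith, but it narrows the window in which an unexpected upstream change can slip into production unnoticed.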