Originally reported by Security Affairs, The Record
TL;DR
CISA warns of a critical, unpatched vulnerability in PTC's Windchill and FlexPLM software carrying the maximum CVSS score of 10.0. Russian disinformation campaigns target the Baltic states with false drone allegations, while adversaries expand phishing operations against TikTok for Business accounts.
CISA issued an advisory regarding a critical vulnerability, tracked as CVE-2026-4681, affecting PTC's Windchill and FlexPLM product lifecycle management platforms. The flaw carries a CVSS score of 10.0, the most severe rating possible.
According to CISA's advisory, no patches are currently available for the affected systems, though the agency has not reported active exploitation in the wild. Germany's BSI (Federal Office for Information Security) has issued a parallel warning, suggesting that exploitation targeting industrial and enterprise environments may be imminent.
Organizations running PTC Windchill and FlexPLM should implement immediate containment measures pending patch availability.
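The advisory does not prescribe specific mitigations, but a standard interim containment step for an unpatched enterprise web application is to remove its direct internet exposure. A minimal host-firewall sketch, assuming (hypothetically) that the PLM server's web interface listens on TCP 443 and that 10.0.0.0/8 is the trusted internal engineering subnet; adjust both to the actual deployment:

```shell
# Assumption: Windchill/FlexPLM web interface on TCP 443 (adjust as needed).
# Permit only the internal engineering subnet (hypothetical 10.0.0.0/8) to reach it.
iptables -A INPUT -p tcp --dport 443 -s 10.0.0.0/8 -j ACCEPT
# Drop all other inbound connections to the service until a patch ships.
iptables -A INPUT -p tcp --dport 443 -j DROP
```

Equivalent rules can be applied at a perimeter firewall; the goal is simply to limit reachability until PTC releases a fix.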
Latvia's government accused Russia of conducting a coordinated disinformation campaign targeting Baltic states with false claims about Ukrainian drone operations. According to Latvian officials, Russian media outlets and Telegram channels have circulated allegations that Baltic states opened their airspace to Ukrainian drones targeting Russian territory.
Riga strongly denied these allegations, characterizing them as part of a broader Russian information warfare strategy. The campaign represents continued Russian efforts to destabilize regional security narratives and undermine Baltic support for Ukraine.
The disinformation operation follows established patterns of Russian hybrid warfare targeting NATO allies in the Baltic region.
Push Security researchers identified a new wave of adversary-in-the-middle (AITM) phishing attacks specifically targeting TikTok for Business accounts. The campaign aims to hijack legitimate business accounts for malvertising operations, extending tactics previously observed in Google-themed phishing campaigns.
The attack infrastructure includes both TikTok and Google-themed fake pages, indicating the threat actors are expanding their targeting beyond single platforms. Successful account compromises enable adversaries to conduct malicious advertising campaigns using legitimate business credentials.
The campaign demonstrates continued evolution in AITM phishing techniques targeting business platforms with significant advertising reach.
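AITM pages succeed because the proxied login page looks pixel-identical and is served over valid TLS; what the attacker cannot forge is the legitimate hostname. A minimal sketch of the exact-host check that browser extensions or security tooling can apply before credential entry (the allow-list entries here are hypothetical, not a vetted list of TikTok or Google login hosts):

```python
from urllib.parse import urlparse

# Hypothetical allow-list of legitimate login hosts. An AITM proxy must serve
# its look-alike page from a different hostname, so an exact-host comparison
# flags it even when the page content and TLS certificate appear valid.
LEGITIMATE_LOGIN_HOSTS = {"www.tiktok.com", "ads.tiktok.com", "accounts.google.com"}

def is_trusted_login_host(url: str) -> bool:
    """Return True only when the URL's host exactly matches an allow-listed login host."""
    host = urlparse(url).hostname or ""
    return host.lower() in LEGITIMATE_LOGIN_HOSTS

# A look-alike AITM domain fails the check despite serving an identical page.
print(is_trusted_login_host("https://ads.tiktok.com/login"))           # True
print(is_trusted_login_host("https://ads.tiktok-business.top/login"))  # False
```

Note that exact matching, rather than substring matching, is the point: a naive `"tiktok.com" in url` test would pass the spoofed domain above. Phishing-resistant MFA (e.g. FIDO2 keys, which bind to the origin) applies the same principle at the protocol level.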
A parliamentary report warns that hostile actors are conducting increasingly sophisticated campaigns to interfere in UK democratic processes. The assessment highlights how foreign adversaries exploit divisive political issues to amplify social tensions and influence public debate.
UK authorities are weighing new restrictions on political donations as a countermeasure against hard-to-trace foreign interference operations. The report emphasizes the sustained nature of these campaigns and their growing technical sophistication.
The proposed measures reflect broader Western concerns about foreign interference in electoral processes and political discourse.
The European Parliament rejected an extension of Child Sexual Abuse Material (CSAM) scanning requirements for technology platforms, with 311 members voting against the proposal. The decision came despite support from law enforcement agencies, children's rights organizations, German Chancellor Friedrich Merz, European commissioners, and several major technology companies.
The vote halts an extension that would have allowed platforms to continue automated scanning for illegal material, representing a significant policy shift on automated content detection in the European Union.
A Dutch court threatened Elon Musk's xAI with daily fines of €100,000 ($115,000) if the company fails to prevent its Grok AI system from generating nonconsensual nude images. The ruling addresses concerns about AI-generated explicit content created without subject consent.
The court order requires immediate compliance with content generation restrictions, marking a significant legal precedent for AI content moderation requirements. The decision reflects growing regulatory pressure on AI companies to implement stronger content safeguards.