Originally reported by WIRED Security
TL;DR
Cybercriminals are using Telegram job postings to recruit models to serve as the faces of AI-powered romance scams. The investigation found dozens of channels seeking mostly female models who unknowingly enable fraudsters to create convincing deepfake personas for financial exploitation.
While not a technical vulnerability, this represents a significant evolution in social engineering tactics that could enable widespread financial fraud targeting vulnerable populations.
Cybercriminals have established recruitment pipelines on Telegram to source human faces for AI-powered romance scam operations, according to a WIRED investigation. The research identified dozens of Telegram channels posting job listings for "AI face models," primarily targeting women who likely remain unaware their likenesses will be used to defraud victims.
The job postings advertise roles requiring models to participate in up to 100 video calls per day, a volume that points to industrial-scale operations maintaining many fraudulent relationships simultaneously. That throughput puts these schemes far beyond traditional romance scams, which relied on stolen photos and limited interaction.
These Telegram-based recruitment channels mark a troubling shift in social engineering infrastructure. By sourcing willing participants who believe they have taken legitimate modeling work, scammers can build more convincing and interactive personas than those assembled from stolen social media content.
The use of live models for AI training and real-time interaction addresses a key limitation of previous romance scam operations: the inability to maintain convincing video communications. Traditional scams relied heavily on text-based communication and static images, making them easier for potential victims to identify.
The advertised call volume also indicates these operations are built for mass exploitation rather than targeted, high-value fraud, with the criminals behind them optimizing for broad victim pools and repeatable, semi-automated processes.
This approach creates significant challenges for both potential victims and security researchers. Unlike deepfakes generated entirely from existing footage, these operations may involve real-time human interaction combined with AI manipulation, making detection more complex for automated systems and human observers alike.
The recruitment of willing participants also complicates law enforcement efforts, as the models may genuinely believe they are engaged in legitimate work rather than criminal activity.