Study: AI fuels spam and scams, not advanced hacking
Analysis of 97,895 cybercrime forum threads finds generative AI is mainly used for SEO spam, romance scams and AI-generated nude images rather than advanced malware.
Researchers at Cambridge, Edinburgh and Strathclyde analyzed 97,895 threads posted on underground and dark web forums after ChatGPT's debut in November 2022. The team used the Cambridge Cybercrime Centre's CrimeBB dataset, ran topic models over the corpus, manually read more than 3,200 threads and carried out ethnographic immersion in the forums.
The study found 97.3% of threads were classified as “other,” meaning they did not discuss using generative AI for crime, while 1.9% involved discussion of so-called vibe coding tools. High-profile “Dark AI” chatbots drew attention on the forums, but posts often dismissed the services as marketing hype or unreliable. One operator wrote that their product was “nothing more than an unrestricted ChatGPT” before shutting the project down.
By late 2024, the paper reports, jailbreaks for mainstream commercial models often stopped working within days. Open-source models could be jailbroken indefinitely but were described in forum posts as slow, resource-intensive and stuck on older versions. The authors note that model guardrails limited some criminal use.
Where generative AI appeared to have a measurable impact was in low-skill, high-volume fraud. The researchers documented SEO spammers using language models to mass-produce blog posts in pursuit of ad revenue. Romance scammers and eWhoring operators combined image generation and voice cloning to build fake profiles and sell images. The study cites an advertisement for AI-generated nudes that priced one image at $1, 10 at $8, 50 at $40 and 90 at $75. Other recorded uses included AI-written eBooks sold for a fee and phishing kits assembled with AI assistance.
The authors report that many forum users treat AI coding assistants as autocomplete tools, much as they would use public developer help sites, while less experienced actors prefer pre-made scripts. A forum post read, “AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities.” Another user wrote, “It’s clear now that using AI for code causes a very fast negative degradation of your skills.”
The paper notes growing concern about labor market disruption from AI. The authors suggest layoffs and a cooling job market could push some developers from legitimate roles into underground communities that run fraud and other schemes.
The study, titled “Stand-Alone Complex or Vibercrime?”, appears on arXiv.