FraudGPT, WormGPT and the rise of dark LLMs
The internet, a vast and indispensable resource for modern society, has a darker side where malicious activities thrive.
From identity theft to sophisticated malware attacks, cybercriminals continually devise new methods of deception.
Widely available generative artificial intelligence (AI) tools have now added a new layer of complexity to the cyber security landscape. Staying on top of your online security is more important than ever.
One of the most sinister adaptations of current AI is the creation of “dark LLMs” (large language models).
These uncensored versions of everyday AI systems like ChatGPT are re-engineered for criminal activities. They operate without ethical constraints and with alarming precision and speed.
Cybercriminals deploy dark LLMs to automate and enhance phishing campaigns, create sophisticated malware and generate scam content.
To achieve this, they engage in LLM “jailbreaking” – using prompts to get the model to bypass its built-in safeguards and filters.
For instance, FraudGPT writes malicious code, creates phishing pages and generates undetectable malware. It offers tools for orchestrating diverse cybercrimes, from credit card fraud to digital impersonation.
FraudGPT is advertised on the dark web and the encrypted messaging app Telegram. Its creator openly markets its capabilities, emphasising the model’s criminal focus.
Another version, WormGPT, produces persuasive phishing emails that can trick even vigilant users. Based on the open-source GPT-J model, WormGPT is also used for creating malware and launching “business email compromise” attacks – targeted phishing of specific organisations.
What can we do to protect ourselves?
Despite the looming threats, there is a silver lining. As the challenges have