WormGPT: Risks, Uses, and How Cybersecurity Firms Can Help
- DigiSOC
- Jul 7
- 3 min read
In recent years, artificial intelligence has transformed the way businesses protect their infrastructure: proactive detection, automated response, and continuous monitoring… But just as organizations have bolstered their defenses with AI, attackers have also begun to use it to their advantage.
One of the most alarming tools to emerge recently is WormGPT, a language model specifically designed for malicious activities.
In June 2023, a name began circulating on underground dark web forums: WormGPT. Touted as "the ChatGPT of criminals," this AI-powered tool unleashed a new era of digital threats. But what is it really? And why does it worry cybersecurity experts so much?
What is WormGPT?
According to its developer, work on WormGPT began in March 2021, but it was not until June 2023 that access to the platform went on sale on a popular hacker forum. Unlike conventional LLMs such as ChatGPT, this hacker chatbot has no restrictions preventing it from answering questions about illegal activities.

Built on GPT-J, an open-source language model, WormGPT is not technically the most powerful model on the market; its advantage lies in the absence of moral or safety limitations. Among other things, it can:
- Generate hyper-realistic phishing messages in multiple languages.
- Write code for ransomware or keyloggers.
- Design social engineering strategies for BEC (Business Email Compromise) scams.
Sold as a SaaS service on illegal marketplaces, the developer of this "tool" estimated access to WormGPT at between $70 and $117 USD per month or $646 USD per year.
Another malicious LLM appeared shortly afterward, in July 2023. Its author advertised the product, "FraudGPT", on various dark web forums and Telegram channels. Promoted since July 22, 2023, as an unrestricted alternative to ChatGPT, it claimed thousands of sales and verified reviews. Prices ranged from $90–$200 USD for a monthly subscription, $230–$450 USD for three months, $500–$1,000 USD for six months, and $800–$1,700 USD for a year.

Statistical impact of AI attacks
Between 2023 and 2024, the number of victims in Latin America named on leak sites used for extortion and ransomware grew by 15%, and access broker listings increased by nearly 38% (CrowdStrike).
The global cost of data breaches averaged $4.88 million last year, representing a 10% increase and an all-time high (IBM).
40% of all phishing emails targeting businesses are now generated by AI (VIPRE Security Group).
Generative AI is expected to drive losses from deepfakes and other attacks to between $32 billion and $40 billion annually by 2027 (Deloitte).
Security measures to prevent cybercrime
In the face of this emerging threat, firewalls and antivirus software are not enough. A comprehensive cyber resilience strategy is required, including:
- Security awareness training: both individuals and businesses should be trained to recognize phishing messages and other forms of scams. Awareness is key to identifying suspicious emails and avoiding these frauds.
- Two-factor authentication (2FA): enabling 2FA on sensitive accounts adds an extra layer of protection, making it harder for cybercriminals to gain access even if they manage to obtain credentials.
- Email filtering: many enterprise security solutions offer filters that detect suspicious patterns in emails. Companies can deploy these filters to analyze message content and flag emails that appear to contain phishing attempts.
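To illustrate the filtering idea in the last point, here is a minimal sketch of a heuristic email filter that scores message text against suspicious patterns. The patterns, weights, and threshold below are illustrative assumptions; real enterprise filters combine machine-learning classifiers, sender reputation, and header analysis rather than simple keyword matching.

```python
import re

# Illustrative patterns with assumed weights; a production filter
# would use far richer signals than text heuristics alone.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"urgent|immediately|act now", re.I), 2),
    (re.compile(r"verify your (account|password|identity)", re.I), 3),
    (re.compile(r"wire transfer|payment request", re.I), 2),
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), 3),  # links to raw IPs
]

def phishing_score(subject: str, body: str) -> int:
    """Return a cumulative risk score for an email's text content."""
    text = f"{subject}\n{body}"
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS
               if pattern.search(text))

def is_suspicious(subject: str, body: str, threshold: int = 4) -> bool:
    """Flag the message when its score meets the (assumed) threshold."""
    return phishing_score(subject, body) >= threshold
```

A message like "URGENT: verify your account" pointing at a raw-IP link would accumulate several pattern hits and be flagged, while ordinary correspondence scores zero.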
Protect your organization from future AI-powered cyberattacks
Artificial intelligence has revolutionized the cybersecurity landscape, allowing cybercriminals to design increasingly sophisticated attacks that overcome the barriers of traditional solutions. Experts agree that the volume and complexity of AI-driven threats will continue to grow, and that the only effective response is to adopt an equally intelligent defense.
Regardless of the tools used by attackers, the reality is that artificial intelligence makes it easier for them to create advanced attacks capable of evading conventional security systems. Statistics show a sustained increase in the number and sophistication of these threats, and the cybersecurity community agrees that only defensive AI, capable of anticipating, detecting, and responding in real time, can effectively counter the rise of malicious AI.
If you'd like to learn how DigiSOC can help protect your organization from cyberthreats, request a meeting here: