In the era of artificial intelligence, cybercriminals have found new tools to bolster their malicious activities. The dark web now hosts a diverse range of language models tailored explicitly for hacking and cyberattacks, including email compromise, malware development, phishing campaigns, and more.
Among these models is WormGPT, a language model similar to ChatGPT but lacking the ethical constraints that limit ChatGPT’s engagement in malicious activities. This makes WormGPT a potent tool for hackers, particularly those targeting corporate email systems.
The Rise of AI in Cybercrime:
As artificial intelligence revolutionizes various domains, cybercriminals have wasted no time in leveraging its potential for nefarious purposes. The dark web has become a repository of language models crafted explicitly for hacking and cybercrime. These models enable cybercriminals to automate and enhance their attacks, posing a significant threat to organizations and individuals alike.
WormGPT: A Malicious AI Tool:
WormGPT is one such language model akin to ChatGPT, albeit without the ethical restraints that limit ChatGPT’s engagement in malicious activities. This distinction makes WormGPT an effective tool for cybercriminals seeking to breach corporate email systems, among other targets.
Exploiting Popularity and Branding:
Cybercriminals often exploit the popularity of well-known products and brands to their advantage, and WormGPT is no exception. Kaspersky experts have discovered suspicious websites, online advertisements, and illegitimate channels on platforms such as Telegram within the dark web.
These are suspected to be fraudulent phishing sites targeting cybercriminals themselves, offering fake access to the malicious AI tool WormGPT. The pages closely resemble typical phishing pages but differ in several respects, including design, pricing, and accepted payment methods, which range from cryptocurrencies (as originally suggested by WormGPT’s developer) to credit cards and bank transfers.
Offering Trial Versions for a Price:
Furthermore, these suspicious phishing pages advertise a trial version of the WormGPT AI tool but withhold access until payment is made.
Kaspersky’s Security Recommendations:
Kaspersky experts recommend the following security measures to safeguard against threats from cybercriminals in the dark web:
- Utilize Kaspersky Digital Footprint Intelligence to help security analysts understand how attackers view company resources, detect potential attack trends promptly, and stay aware of current threats. This, in turn, allows you to fine-tune your defenses or take timely preventive action.
- Choose a trusted endpoint protection solution such as Kaspersky Endpoint Security for Business, equipped with behavior analysis capabilities to detect malicious activity and handle anomalous cases. This provides effective protection against both known and unknown threats.
- Consider specialized services to counter targeted attacks on critical assets. Kaspersky Managed Detection and Response can detect and halt intrusion attempts in their early stages, while Kaspersky Incident Response helps you respond to and mitigate the consequences of incidents, identifying compromised assets and fortifying your infrastructure against future attacks.
The dark web’s growing arsenal of AI models tailored for cybercrime poses a significant threat to cybersecurity. It underscores the need for robust security solutions and heightened vigilance in the face of evolving threats, and organizations must remain proactive in adopting security measures to protect their assets from the ever-innovative tactics of cybercriminals.