Threat actors, inspired by WormGPT, are promoting a new generative cybercrime AI tool called FraudGPT. The tool is currently circulating across the dark web and on certain Telegram channels.

This has been a constant threat ever since ChatGPT’s API went live and became available to the public, with members of the dark web taking advantage of vulnerabilities in ChatGPT. With the continuing rise of AI tools, and with the number of people who understand AI set to increase dramatically over the next few years, we are bound to see ‘cybercrime tools’ such as FraudGPT become more common.

“This is an AI bot, exclusively targeted for offensive purposes, such as crafting phishing emails, creating cracking tools, etc.”

Rakesh Krishnan – Security Researcher

The tool is reportedly offered for a subscription fee of $200 per month (or $1,700 per year) and has been circulating since 22 July 2023.

The tool can be used to write malicious code, create undetectable malware, and find leaks and vulnerabilities. With over 3,000 confirmed sales, and potentially many more unconfirmed, it poses a significant security risk and increases the odds of people being targeted by this next generation of phishing scams.

The exact large language model (LLM) used to develop this tool is currently unknown.

In more detail, this malicious tool is capable of the following types of attack:

  1. Phishing Scams: FraudGPT can generate authentic-looking phishing emails or text messages that trick users into revealing sensitive information, such as login credentials, financial details, or personal data.
  2. Social Engineering: This kind of chatbot can imitate human conversation almost perfectly, allowing it to build trust with an unsuspecting victim. The victim can then unknowingly divulge sensitive or private information, or be manipulated into performing harmful actions.
  3. Malware Distribution: FraudGPT can create highly deceptive messages that lure users into clicking dangerous links or downloading harmful attachments (see the sketch after this list for a naive way of spotting such lures). Malware is already a significant problem in the workplace, and as tools like this circulate and become more common, we may see even more businesses impacted.
  4. Fraudulent Activities: The AI-powered chatbot can help hackers create fraudulent documents or payment requests, potentially leading individuals and businesses alike to fall for financial scams.
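The common thread in the phishing and malware items above is a malicious link dressed up as something legitimate. As a purely illustrative sketch, here is the kind of naive check a technically-minded reader could run over an email body to flag lookalike links; the brand list, regex, and function names below are assumptions for the example, not any real filter:

```python
# Hypothetical sketch: a naive heuristic for flagging suspicious links in
# an email body. The brand list and rules here are illustrative assumptions,
# not a production filter.
import re
from urllib.parse import urlparse

# Brands attackers commonly impersonate (assumed list for this sketch).
KNOWN_BRANDS = {"paypal": "paypal.com", "microsoft": "microsoft.com",
                "apple": "apple.com", "amazon": "amazon.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_links(email_body: str) -> list[tuple[str, str]]:
    """Return (url, reason) pairs for links that look like phishing lures."""
    findings = []
    for url in URL_PATTERN.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        # A raw IP address in place of a domain is a classic red flag.
        if re.fullmatch(r"\d{1,3}(?:\.\d{1,3}){3}", host):
            findings.append((url, "link points at a raw IP address"))
            continue
        # A brand name embedded in a domain the brand does not own
        # (e.g. paypal-secure-login.example) suggests impersonation.
        for brand, real_domain in KNOWN_BRANDS.items():
            if brand in host and not (host == real_domain
                                      or host.endswith("." + real_domain)):
                findings.append((url, f"lookalike domain imitating {brand}"))
    return findings

if __name__ == "__main__":
    body = "Verify your account: https://paypal-secure-login.example/verify"
    for url, reason in suspicious_links(body):
        print(f"SUSPICIOUS: {url} ({reason})")
```

A real mail filter weighs many more signals (sender reputation, attachment analysis, URL reputation feeds), but even this toy heuristic catches the classic ‘paypal-secure-login’ style of lure.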

How To Stay Safe From FraudGPT?

Phishing attempts are nearly as old as the internet itself, but protecting yourself against an upgraded evil can prove challenging. Stay committed to healthy habits such as not clicking suspicious-looking links in spam emails and updating your web browser and operating system frequently, so that known vulnerabilities cannot be exploited.

Always be vigilant, and protect your private information just as you would your wallet in a crowded place. Enabling two-factor authentication wherever possible also dramatically lowers your chances of being compromised, so get on board if you haven’t already.
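For the curious, the one-time codes produced by most authenticator apps follow the TOTP standard (RFC 6238, built on RFC 4226’s HOTP). The minimal sketch below shows how such a code is derived from a shared secret; the secret shown is a made-up example, not a real credential:

```python
# Minimal sketch of the TOTP scheme (RFC 6238) behind most authenticator
# apps, showing why a one-time code is harder to abuse than a password.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second windows since the
    # epoch, so the code changes every interval and soon expires.
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Example shared secret (base32); in practice this comes from the QR
    # code scanned when 2FA is first enrolled.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code depends on the current 30-second time window, a code phished out of a victim is useful to an attacker for seconds at most, which is why enabling 2FA so sharply reduces the payoff of credential theft.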

The Future of AI Attacks?

Without a doubt, the risk of these new malicious tools spreading across the dark web, and eventually the open web, means that we must all be incredibly careful about how we guard our information and systems.

Many experts have already warned that AI has the potential to be incredibly dangerous, and with even the military testing AI applications, who knows what tool the dark web will cook up next?

We must continue working on safety frameworks for AI, while also making sure that the tools currently available are not so restrictive that people go out and create malicious alternatives such as FraudGPT. If we do not solve this issue relatively soon, we can expect FraudGPT to look docile in comparison to the tools that may be released in the future.