Netenrich Tracks Emergence of FraudGPT AI Bot that Accelerates Cyberattacks

Generative AI tools give cybercriminals the ability to operate at greater speed and scale.

Edward Gately

July 25, 2023


Netenrich has discovered the emergence of FraudGPT, an artificial intelligence (AI) bot that helps cybercriminals launch business email compromise (BEC) phishing campaigns on organizations.

Netenrich calls FraudGPT the “villain avatar” of ChatGPT. The AI bot can craft spear phishing emails, create cracking tools and more.

The tool is being sold on various dark web marketplaces and on the Telegram platform, where it has been circulating in channels since July 22.

AI Bot Features

The subscription fee for FraudGPT starts at $200 per month and goes up to $1,700 per year. Features include writing malicious code; creating undetectable malware, phishing pages and hacking tools; finding groups, sites and markets; writing scam pages and letters; and finding leaks and vulnerabilities.

John Bambenek, principal threat hunter at Netenrich, said the AI bot appears to be among the first indications that threat actors are building generative AI features into their tooling.

Netenrich’s John Bambenek

“Prior to this, our discussion of the threat landscape has been theoretical,” he said. “That said, just because tools exist, doesn’t mean they’ll get traction among cybercriminals … so we’ll need to see how and where we see these tools used.”

FraudGPT Among Early-Stage Efforts to Use AI for Cybercrime

Generative AI tools give criminals the same core capability they give technology professionals: the ability to operate at greater speed and scale, Bambenek said. Attackers can now generate phishing campaigns quickly and launch more of them simultaneously.

“I view this as early-stage efforts in the use of AI for criminal activity much like organizations across industry verticals are playing to see how ChatGPT can be used,” he said. “The core problem is that AI will help radically increase the scale and efficiency of attackers in ways we are not entirely ready to combat. We have some time, just not a lot, to come up with solutions. That said, 20 years in this industry have taught me that we’ll always be playing catch-up to the criminals who often use cutting-edge technology better and faster than we do, and certainly faster than we can address the risks.”

At its core, AI-enabled phishing is still phishing, so reputation systems that evaluate senders still work, Bambenek said. Phishing is also the start of an attack, not the final step, so everything an attacker does after a successful phishing campaign is unchanged and can be detected with existing technologies.
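Bambenek’s point — that the sender metadata is unchanged even when AI writes the message body — can be illustrated with a minimal sketch using only Python’s standard `email` module. The domains, allowlist and rules below are hypothetical stand-ins for a real reputation feed, not anything described by Netenrich:

```python
# Minimal sketch: even if generative AI writes the phishing body,
# the From: header still identifies the sender, so simple
# reputation checks on it continue to apply.
# KNOWN_BAD_DOMAINS / TRUSTED_DOMAINS are hypothetical stand-ins
# for a real reputation feed and organizational allowlist.
from email import message_from_string
from email.utils import parseaddr

KNOWN_BAD_DOMAINS = {"fraud-example.biz"}
TRUSTED_DOMAINS = {"example.com"}

def sender_flags(raw_message: str) -> list[str]:
    """Return reputation flags raised by the message's From: address."""
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rpartition("@")[2].lower()
    flags = []
    if domain in KNOWN_BAD_DOMAINS:
        flags.append("known-bad-domain")
    if domain and domain not in TRUSTED_DOMAINS:
        flags.append("unrecognized-domain")
    # Display name impersonates a trusted brand while the address
    # actually resolves to some other domain.
    if "example.com" in display_name.lower() and domain not in TRUSTED_DOMAINS:
        flags.append("display-name-mismatch")
    return flags
```

A message such as `From: Example.com Support <help@fraud-example.biz>` would trip all three flags here regardless of how fluent its AI-generated body is — which is the sense in which existing sender-side defenses still hold.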

“What makes ChatGPT compelling is the same reason that FraudGPT and the like is compelling,” he said. “It won’t radically change how phishing looks; it simply makes the criminal able to generate an order of magnitude greater quantity of attacks. Attackers pay attention to what we talk about and are concerned about, so it should be no surprise that they are integrating the tools we are concerned about. Ultimately, the limit of the use of AI in cyber crime is a limit on the imagination of the criminals.”

Want to contact the author directly about this story? Have ideas for a follow-up article? Email Edward Gately or connect with him on LinkedIn.

About the Author(s)

Edward Gately

Senior News Editor, Channel Futures

As news editor, Edward Gately covers cybersecurity, new channel programs and program changes, M&A and other IT channel trends. Prior to Informa, he spent 26 years as a newspaper journalist in Texas, Louisiana and Arizona.
