July 25, 2023
Netenrich has discovered the emergence of FraudGPT, an artificial intelligence (AI) bot that helps cybercriminals launch business email compromise (BEC) phishing campaigns against organizations.
Netenrich calls FraudGPT the “villain avatar” of ChatGPT. The AI bot can craft spear phishing emails, create cracking tools and more.
The tool is being sold on various dark web marketplaces and on the Telegram platform, where it has been circulating in channels since July 22.
The subscription fee for FraudGPT starts at $200 per month and goes up to $1,700 per year. Features include writing malicious code; creating undetectable malware, phishing pages and hacking tools; finding groups, sites and markets; writing scam pages and letters; and finding leaks and vulnerabilities.
John Bambenek, principal threat hunter at Netenrich, said the AI bot appears to be among the first indications that threat actors are building generative AI features into their tooling.
“Prior to this, our discussion of the threat landscape has been theoretical,” he said. “That said, just because tools exist, doesn’t mean they’ll get traction among cybercriminals … so we’ll need to see how and where we see these tools used.”
Generative AI tools give criminals the same core advantage they give technology professionals: the ability to operate at greater speed and scale, Bambenek said. Attackers can now generate phishing campaigns quickly and launch more of them simultaneously.
“I view this as early-stage efforts in the use of AI for criminal activity much like organizations across industry verticals are playing to see how ChatGPT can be used,” he said. “The core problem is that AI will help radically increase the scale and efficiency of attackers in ways we are not entirely ready to combat. We have some time, just not a lot, to come up with solutions. That said, 20 years in this industry have taught me that we’ll always be playing catch-up to the criminals who often use cutting-edge technology better and faster than we do, and certainly faster than we can address the risks.”
At its core, AI-enabled phishing is still phishing, so reputational systems looking at senders still work, Bambenek said. Phishing is also the start of an attack, not the final step, so everything an attacker does after a successful phishing campaign is still the same and can be detected with existing technologies.
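The point about reputational systems can be made concrete: however fluent an AI-written lure is, sender-level authentication signals are unchanged. The sketch below, a minimal illustration with an entirely hypothetical message and header values, parses a message's Authentication-Results header and flags SPF/DKIM/DMARC checks that did not pass.

```python
# Minimal sketch: AI-generated phishing text may read flawlessly, but
# sender-level signals (SPF, DKIM, DMARC) still expose a spoofed source.
# The raw message and domains below are hypothetical examples.
from email import message_from_string

RAW_MESSAGE = """\
From: billing@example-invoices.test
To: victim@example.com
Subject: Urgent: overdue invoice
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example-invoices.test; dkim=none; dmarc=fail

Please wire payment today.
"""

def auth_failures(raw: str) -> list[str]:
    """Return the authentication checks (spf/dkim/dmarc) that did not pass."""
    msg = message_from_string(raw)
    results = msg.get("Authentication-Results", "")
    failures = []
    # The first ';'-separated clause is the authserv-id; skip it.
    for clause in results.split(";")[1:]:
        clause = clause.strip()
        for check in ("spf", "dkim", "dmarc"):
            if clause.startswith(check + "=") and not clause.startswith(check + "=pass"):
                failures.append(check)
    return failures

print(auth_failures(RAW_MESSAGE))  # -> ['spf', 'dkim', 'dmarc']
```

A production filter would of course do far more, but the design point stands: these checks key on the sending infrastructure, not the message prose, so better-written lures alone do not defeat them.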
“What makes ChatGPT compelling is the same reason that FraudGPT and the like are compelling,” he said. “It won’t radically change how phishing looks; it simply makes the criminal able to generate an order of magnitude greater quantity of attacks. Attackers pay attention to what we talk about and are concerned about, so it should be no surprise that they are integrating the tools we are concerned about. Ultimately, the limit of the use of AI in cyber crime is a limit on the imagination of the criminals.”