What MSPs Should Know about ChatGPT

Cybercriminals are already trying to leverage AI-based tools in their attacks, so security professionals need to be prepared.

The AI-based natural language processing tool ChatGPT has been a hot topic online and in the press, and it has inspired numerous debates about its potential uses and abuses. For example, it can help streamline computer code writing and generate legitimate-sounding term papers and marketing copy. This poses ethical dilemmas (will students use it to cheat?) and is causing panic among copywriters across various industries who fear being replaced by an AI algorithm.

But a more immediate ChatGPT-related threat will likely come from cybercriminals. Anyone familiar with common cyberattacks would immediately recognize the utility of a convincing copy generator for creating credible phishing emails and other content that could make it easier to launch business email compromise (BEC) and other attacks.

Efficient, Yet Dangerous in the Wrong Hands

While ChatGPT does include internal guardrails to keep criminals from directly using it to create scam emails (and other types of objectionable content), several companies and researchers have found ways to rephrase their requests so the tool produces such emails anyway. This capability will make experienced hackers more efficient and lower the barrier to entry for newcomers, who could use the platform to create more effective phishing campaigns. (A writer at Forbes even got ChatGPT to explain its cybersecurity risks.)

Additionally, an AI tool in the wrong hands could be used to carry on realistic-sounding online conversations via email or messenger. For instance, the algorithm could be trained to mimic a typical exchange, or even to generate responses that look like those written by specific people (for example, a company executive or finance officer). AI could also power malicious customer service chatbots that trick users into giving up personal or financial information.

If an attacker had already gained access to an email account, they could use the text from the compromised account to train a model to write emails in the style of that specific user. In these scenarios, even if the potential victim of a scam asks a question, ChatGPT can provide plausible answers and carry on an entire conversation professionally. For example, a bot could be created to mimic the CEO of a company to trick the CFO into making a financial transfer. In that case, there would be no way (outside of a confirmation phone call) to verify the sender's identity.

One of the most prominent commercial use cases for such tools is generating marketing content, since that type of content usually follows simple templates and is not very novel. And phishing emails are, essentially, marketing messages from criminals. ChatGPT could craft thousands of variations of the same phishing email to avoid detection, in any language. Security tools that rely on recognizing known wording and patterns (like traditional spam filters) would be vulnerable to this type of manipulation.

It could also replicate an existing, legitimate website or login page.
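To see why wording-based detection struggles against that kind of variation, consider the toy Python sketch below. The scam phrases and the filter are hypothetical, and real spam filters are far more sophisticated, but the underlying weakness is the same: a filter keyed to known phrasing catches the original template and misses a paraphrased variant that carries the same intent.

# Toy example only: a naive filter that flags emails containing known scam phrasing.
# The phrases below are made up for illustration; no real product works this simply.

KNOWN_SCAM_PHRASES = [
    "verify your account immediately",
    "your password has expired",
]

def naive_filter(email_body: str) -> bool:
    """Return True if the email contains any known scam phrase."""
    body = email_body.lower()
    return any(phrase in body for phrase in KNOWN_SCAM_PHRASES)

# The original phishing template is caught...
template = "Please verify your account immediately or it will be suspended."
print(naive_filter(template))  # True: flagged

# ...but an AI-paraphrased variant with the same intent slips through.
variant = "To keep your mailbox active, kindly confirm your credentials today."
print(naive_filter(variant))   # False: delivered

Detection that models intent and context, rather than exact phrasing, stands a better chance of catching both versions.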

Malicious Coding Made Easy

There are other ways criminals can leverage the technology. For example, hackers could use ChatGPT or a similar tool to generate malicious code. There are already malware-as-a-service offerings on the internet; with an AI-based tool, even relatively inexperienced hackers could generate complex code. In addition, there is already evidence of some bad actors using ChatGPT to create malicious Java-based scripts and Python-based malware.

More importantly, the underlying technology could be pulled into an open-source environment, allowing criminals to create a custom-built guardrail-free version of the AI tool for cyberattacks.

It is not a question of if, but when, cybercriminals will begin leveraging these AI-based capabilities in their attacks. Therefore, security professionals must be prepared to train users to spot more convincing phishing emails and to deploy AI-based security solutions that help identify and mitigate these attacks faster.

The good news is that ChatGPT’s ability to write malicious code is marred by the same glitches that affect its other output: It is often close, but flawed in ways a careful reader can quickly spot. Some requests for written content, for example, can result in “sort-of correct” output that occasionally descends into word salad. In addition, the malicious code researchers have been able to coax out of the program is often so obviously malicious that it would be easily detected by security software.

But AI-based platforms evolve, and these platforms could conceivably learn to create more effective code or more convincing phishing emails over time. Therefore, security teams must be ready with their own advanced AI-based security tools to help spot cleverly crafted attacks, and they must double down on end-user education so employees can recognize AI-generated phishing scams. In addition, security protocols that require phone confirmations, in-person interactions or multiple signoffs should be put in place for sensitive operations like financial transfers.

ChatGPT and similar tools are going to help cybercriminals be more effective. As a result, security approaches must evolve to meet this challenge and incorporate technology and human intervention to counteract these increasingly sophisticated attacks.

 

 Asaf Cidon is an assistant professor of electrical engineering and computer science at Columbia University and a Barracuda adviser.

 

This guest blog is part of a Channel Futures sponsorship.
