ChatGPT Provides Easier Entry Into Cybercrime
CF: Does ChatGPT make it even easier to become a cybercriminal?
DS: It is going to make it easier. What we haven’t seen yet is just how much the bar actually lowers. But again, we should consider ChatGPT a large demo experiment. This is a beta. It’s going to continue to get better. So as you think about how generative AI in the hands of a threat actor continues to lower the bar, ChatGPT is already going to do a good job with the first phase of an attack: generating those phishing emails and those social engineering campaigns. In the near future, we’re going to have a better version of generative AI, but ChatGPT is already making it easier to develop malicious code.
One of the other things it’s going to start lowering the bar on is getting past security defenses. Even with ransomware kits, which are provided to lower the bar and enable more people to perform ransomware attacks, there are still certain things you need to do to customize the attack to get past defenders’ tools and capabilities. One of those is taking a ransomware sample, or even command-and-control malware, and packaging and changing it so that it’s not detected by endpoint detection and response (EDR) or antivirus solutions.
Well, when ChatGPT, or whatever the next version of generative AI is, understands code better, it’s no longer a matter of taking something that’s already made and adding layers of obfuscation on top. You can take something that’s made, deconstruct it into its original pieces, rearrange them and recompile it, and that’s going to be a whole other level of difficulty for defender tools to pick up on. This improvement to obfuscating and packaging malware is the next thing I’m really concerned about, because I’m not sure that we’re ready for it.