Generating Malware
CF: Can ChatGPT also generate malware?
DS: ChatGPT doesn’t understand that it’s producing malware, but it has access to a large amount of information about code. So you can ask it: hey, how can I encrypt a file? How can I run a file in the background without showing a pop-up to the user? How do I make this program require admin permissions? And you start slowly assembling the components of ransomware. File encryption, admin permissions: all of that code is already out there.
There are legitimate uses for that code, but ChatGPT doesn’t actually understand that it’s building something malicious. The good news, for now, is that ChatGPT doesn’t write good code. It still takes someone with some technical knowledge to put the pieces together properly and make sure the result isn’t buggy. One of the things you’ll see on the darknet forums is a lot of discussion about how to work around the bugs ChatGPT introduces when it generates code.

That is a short-term problem. Generative AI advancements are going to keep focusing on code because of the industry’s movement toward low-code and no-code. And when you think about it, the main barrier to building generative AI is access to large data sets, and GitHub is just sitting there as a massive data set of code that can be used to train a model that will do a better job of writing code and, more importantly, a better job of writing malware. The end impact for us, of course, is that it lowers the barrier to entry for cybercrime.