Helping Cybercriminals
CF: On the flip side, can you talk a little bit about what sorts of cyber risks are associated with ChatGPT?
RL: All the advantages I just described for the good guys, the bad guys will have, too. In fact, one positive is that these contract-for-hire cyber gangs, the ransomware gangs and whatnot, may see some of their people put out of work because these AIs can do it for them. But the reality is, it will make it a lot easier for anyone, from a script kiddie or beginner hacker to the most sophisticated attacker, to do a lot of the things you need to do to be successful.
So a big one is phishing. It will give you perfect English for emails and text messages. It can also be used to impersonate people, down to their mannerisms and tone. If you give it a few examples of the person you’re trying to impersonate, it will produce a very realistic-looking message. And that, of course, is the biggest risk, today anyway: the human factor and social engineering.
It will help you code malware, and it will help you code tools to crack firewalls and other types of IT infrastructure. There’s hope, since OpenAI does have some ability to filter the results that come back, so maybe ChatGPT itself won’t be used for this type of thing long term. But there are lots of workarounds; there are lots of ways to hack it. And it’s creating a whole other category of risk that firms need to be very aware of: not just ChatGPT, but any AI that can be used against them, as well as their own AI being spoofed in ways that traditional IT technology has not been. You can do things with AI where you get it to reveal information it shouldn’t have revealed. So those types of use cases also need to start being monitored and thought about on the defense side of the house.