Black Hat USA: Cybersecurity Experts Optimistic About Generative AI

Cybersecurity as an industry is likely to be the biggest beneficiary of AI.

Edward Gately, Senior News Editor

August 9, 2023

6 Min Read
Generative AI Panel at Black Hat USA 2023

BLACK HAT USA — A panel of cybersecurity experts from Amazon Web Services (AWS), Barracuda, Splunk and more agreed they are optimistic about the future of generative AI in spite of increasing threats.

The panel took place Tuesday at this week’s Black Hat USA. Panelists shared their thoughts on how AI is impacting cybersecurity, including how it is enhancing both phishing techniques and defenses, AI’s impact on ransomware as a whole, and practical uses in threat research.

Panelists included:

  • Fleming Shi, Barracuda’s CTO

  • Mark Ryland, director of the office of the CISO at AWS

  • Dr. Amit Elazari, co-founder and CEO of OpenPolicy, and cyber professor at UC Berkeley

  • Patrick Coughlin, global vice president of security markets at Splunk

  • Michael Daniel, president and CEO of the Cyber Threat Alliance, and former cyber czar in the Obama administration

Shi said he’s definitely hopeful about the future of generative AI because “we have the opportunity to escalate to the point where we can use policies to drive better behavior in how we actually improve cybersecurity awareness training early on for humans.”

“If we put the right posture in place and get humans up to speed for the future as well, then that will win the battle,” he said. “One use case I would like to talk about is just-in-time training, using generative AI with the right prompts and the right type of data to make sure you can make it personalized, so the training can actually be more informative and more attractive. How many of us actually love cybersecurity training? You make it more personable, especially with kids. They can actually learn from it. And when they walk into their first job, they’ll be ready to go. That’s my hope.”

Generative AI an ‘Amplifier’

Ryland said he’s hopeful because generative AI is an “amplifier.”

“So there will be new things, new attacks and new risks,” he said. “But I think the combination of generative AI with formal verification or with expert-based rule systems is a super powerful combination. It’s the combination of the code generation with an encoding of safety as more of a traditional rules kind of system that is already today making developers way more productive. And I think that’s maybe a microcosm, but I think it can be generalized pretty broadly.”

Daniel said overall, he’s optimistic as well, “which may sound strange coming from the former cybersecurity coordinator of the United States.”

“I actually think the tools that we’re talking about have enormous potential to make the practice of cybersecurity much more satisfying for a lot of people,” he said. “For example, it can take a lot of the alert fatigue out of the equation and actually make it much easier for the humans to focus on the stuff that’s actually interesting. So I have a lot of hope that we can use these tools to make the practice of cybersecurity a more engaging discipline. Yes, we could go down a stupid path and have it actually block entry, but I think if we use it right, we can actually expand the pool. And if we sort of think of AI as a copilot, as an assistant, then it actually starts amplifying what the humans can do. And I think there’s just a tremendous, tremendous potential there.”

Generative AI Democratizes Information

Elazari said she’s “very hopeful and optimistic” because generative AI has the power to do three things that result in democratization: increasing access to information, increasing the usability of information, and increasing the efficiency and effectiveness of what you can do with it.

Cybersecurity as an industry is probably going to be the biggest beneficiary of AI, Coughlin said.

“When you look at the challenges that we’re trying to solve, just the challenges of wrangling massive amounts of data, the challenge of finding needles in haystacks, this is what we talk about, what we do,” he said. “The white space between detection and response, for God’s sakes, you can drive a truck through it still. And so when I see the potential for AI, it’s almost like a turbo-charged life raft for the cybersecurity industry that’s going to help us keep up with the bad guys. And without it, I’m not sure we’re going to be able to throw enough bodies at it — and I think we’ve proved that. So I’m incredibly hopeful about what it means for the cybersecurity product space and the enterprise value capture that will happen there.”

Coughlin is more pessimistic, however, about whether the needed regulation will keep pace.

“I worry about the average age of our policymakers and our regulators and their ability to keep up with that,” he said. “But thank God there are people like Dr. Amit who are hopefully helping us with that. It gives me concern because I haven’t seen that ability in the regulatory space and the government; even in the United States, we’re behind the EU already. And so I’m concerned that we’re going to need to pick up the pace in a way that we haven’t seen before.”

Panel Mixed on ChatGPT Bans

The panelists also had differing views on BlackBerry’s new research showing 75% of organizations worldwide are currently implementing or considering bans on ChatGPT and other generative AI applications within the workplace.

Elazari opposes bans and said there are better ways to ensure safe usage.

“We are living in a very competitive environment,” she said. “It’s a competition between organizations and who’s the most innovative. It’s a competition between geopolitical forces, and it’s also a competition between the adversaries and everyone else. And in that environment, if you are not leveraging the most innovative technology, if you’re opting out of … or choosing to not engage with the technology, you are allowing the other players that are leveraging this very effective tool to stay ahead. I think a better solution would be better policies and mitigation of risk instead of a 0-1 approach.”

Ryland said ChatGPT is essentially consumer technology that is designed for mass adoption, with no charge or low charge, “and so when you hear a statistic like corporations banning use, the analogy should be that they ban the use of Gmail for corporate work.”

“I mean, that’s I think the proper analogy in that there’s nothing wrong with Gmail,” he said. “It’s perfectly secure. But I don’t want to run my corporation on email that’s essentially sold to consumers in which the monetization strategy is a consumer-monetization strategy.”

Banning doesn’t work because when you ban something, people still use it, Shi said.

“If you enable it safely, it’s way better than banning,” he said.


About the Author

Edward Gately

Senior News Editor, Channel Futures

As news editor, Edward Gately covers cybersecurity, new channel programs and program changes, M&A and other IT channel trends. Prior to Informa, he spent 26 years as a newspaper journalist in Texas, Louisiana and Arizona.

