AI is the latest rage in cybersecurity, but is it living up to expectations?

Pam Baker

May 1, 2019


Artificial Intelligence (AI) is all the rage in cybersecurity software. However, when vendors say AI, they usually mean a subset of it. Most commonly they mean machine learning (ML), but sometimes it’s deep learning (DL). The details are confusing, so the marketing folks just call it all AI and leave the inner technical workings shrouded in mystery. But now it’s time to evaluate whether “AI” in cybersecurity is meeting expectations and living up to the marketing buzz.

To know whether AI is living up to expectations and delivering the goods as ordered, it's important to first check what you actually have under the hood.

What the Tech Is That?



“Conceptually, AI is a very sexy topic but the reality is that very little AI is present in most security applications,” says Rick McElroy, head of security strategy at Carbon Black. “Simply rebranding machine learning as AI doesn’t solve the shortcomings of the existing technology.”

Unfortunately, some claims of "AI inside" go beyond a mere rebranding effort and are more than a tad misleading.

“As an investor I see this kind of behavior all the time — early-stage companies claim to have AI, but basic diligence proves that to be false. Yet, they’re still able to raise huge amounts of venture capital. A recent VC study (page 99) found that 40% of startups that claimed to have AI were lying,” said William Peteroy, Security CTO at Gigamon, a provider of network security.

The confusion and downright subterfuge of some vendors are off-putting to many buyers and users.

“I’m most excited by some of the newer approaches that are more transparent about how they leverage expert systems which I feel have a stronger probability of driving good outcomes for consumers than machine learning-based approaches,” Peteroy added.

But there is nothing to be gained by judging the real thing by its counterfeits. If you’re not seeing good results from your AI- or ML-based software, perhaps you should first check to see if you bought the actual thing or a pricey knock-off.

“Machine learning/AI technologies have been influencing information security for a long time. Spam detection or preventing fraudulent transactions are just two of many examples of successful AI applications in security,” says Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies.

“Statistical learning and ML models are becoming the cornerstone of a wide range of new security products like next-generation antivirus (NGAV) or next-generation firewall (NGFW) products. The main reason for this is that there is no shortage of data to learn from and statistical models have become tremendously good at learning,” Galloway added.
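The spam-detection example Galloway cites is easy to sketch. The snippet below is an illustrative toy, not any vendor's implementation: a from-scratch Naive Bayes classifier, the classic statistical-learning approach behind early spam filters, trained on a handful of invented messages.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Return the label with the higher log-posterior score."""
    words = text.lower().split()
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        # log prior from class frequencies
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in words:
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
counts, totals, vocab = train(training)
print(classify("free money prize", counts, totals, vocab))  # spam
```

Real products use far larger training corpora and richer features, but the principle is the same: the model learns word statistics from labeled data rather than following hand-written rules.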

Remember in your evaluations that not all AI/ML is created equal. Some is brand-new, and some is not. However, the only thing that matters is whether it works.

“What we are calling AI today in most solutions is machine learning algorithms invented years ago, and commercially viable today because of the compute resources available now. There is still a large amount of algorithmic innovation to come,” says Rick Grinnell, Founder and Managing Partner at Glasswing Ventures.


Looking forward to future innovations doesn’t mean that AI and ML are impotent in the present. Indeed, their applications are many and diverse.

“AI is used in cybersecurity to detect threats and identify vulnerabilities using machine learning, neural networks, artificial intelligent algorithms and graph theory. It is applied to all areas of cybersecurity, including the end customer, the security operations center, gateways, millisecond backend data queries, and internal business intelligence and research,” says Dr. Celeste Fralick, Chief Data Scientist at McAfee.

“AI simply enables humans to make better, faster, and smarter decisions than they would alone. It is performing well against expectations, mainly because it is impossible for people alone to efficiently and effectively analyze all the data, unless machines and algorithms support our human efforts and understanding,” Fralick added.

Keep in mind, too, the foundational difference between general AI and ML. AI makes decisions and takes action on its own, whereas ML provides information to humans who then make decisions and take action. However, ML can be coupled with automation to perform certain security tasks, speeding problem resolution and freeing humans for more challenging work.
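That division of labor can be sketched in a few lines. The example below is a hypothetical illustration, not a real product: a simple statistical model (a robust z-score over failed-login counts) supplies the signal, and an automated step acts on it.

```python
import statistics

def modified_z_scores(values):
    """Robust outlier score (modified z-score): scales each value's
    distance from the median by the median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [0.6745 * (v - med) / mad for v in values]

def triage(hosts, counts, threshold=3.5):
    """Pair the statistical signal with an automated action: flag
    hosts whose score exceeds the threshold (in production this
    might open a ticket or isolate the host)."""
    scores = modified_z_scores(counts)
    return [host for host, z in zip(hosts, scores) if z > threshold]

hosts = ["web-1", "web-2", "db-1", "jump-1"]
failed_logins = [3, 5, 4, 250]  # failed logins per hour, per host
print(triage(hosts, failed_logins))  # ['jump-1']
```

The model only scores; the automation only acts on scores humans have chosen a threshold for. That keeps people in control of policy while machines handle the repetitive triage.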

“Cybersecurity talent is at a premium, as teams work to defend their environments. The lack of available talent is requiring teams to develop automation to keep pace. AI has two primary themes of use in cybersecurity to accomplish that. The first is distilling and refining information from existing systems that alert and capture logs. The second is generating novel detection of attacks that previous approaches miss. These two areas are where AI has the ability to provide measurable improvement to how teams secure their environments,” says Dustin Rigg Hillard, CTO of eSentire.
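Hillard's first theme, distilling and refining information from existing alerting systems, can be illustrated with a toy example: collapsing a stream of duplicate alerts into one summary per recurring pattern. The alert fields and data here are invented for illustration.

```python
from collections import Counter

raw_alerts = [
    {"host": "web-1", "rule": "port-scan"},
    {"host": "web-1", "rule": "port-scan"},
    {"host": "web-1", "rule": "port-scan"},
    {"host": "db-1",  "rule": "failed-login"},
]

def distill(alerts, min_count=2):
    """Collapse duplicate alerts into one summary per (host, rule),
    surfacing only patterns that repeat at least min_count times."""
    tally = Counter((a["host"], a["rule"]) for a in alerts)
    return [
        {"host": h, "rule": r, "count": c}
        for (h, r), c in tally.most_common()
        if c >= min_count
    ]

print(distill(raw_alerts))
# [{'host': 'web-1', 'rule': 'port-scan', 'count': 3}]
```

Four raw alerts become one actionable line; at production scale, that kind of reduction is what frees scarce analysts for the novel detections Hillard describes.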


However, cybersecurity teams often struggle to make ML and automation work smoothly enough that they can move on to other tasks.

“Many teams have adopted AI technologies and found that it does not immediately improve outcomes, and in the worst cases can actually add negative workload for teams by creating additional work in understanding output without obvious benefits,” says Hillard.

“Many vendors promote AI as part of their solution, without clear descriptions of how it improves outcomes. This is leading to skepticism in adopting new AI technologies. The security industry needs to do a better job of communicating how AI can improve outcomes for teams, rather than promoting technology for the sake of technology,” Hillard added.

That skepticism is a prime opportunity for MSSPs, however. The need is huge for help defining the outcomes security teams require and matching those needs with appropriate AI tools.

There are also the aforementioned questions about the AI/ML tech under the hood of various cybersecurity products and platforms, as well as the skepticism that results.

Asking specific questions will help buyers and MSSPs find the right options.

“Buyers can and should ask how AI is working in the product and what functions it’s performing. They should also ask for quantitative and qualitative comparisons to existing processes as well as what training data has been used to train the algorithms and whether they are supervised or unsupervised,” advises Peteroy.

About the Author(s)

Pam Baker

A prolific writer and analyst, Pam Baker’s published work appears in many leading print and online publications including Security Boulevard, PCMag, Institutional Investor magazine, CIO, TechTarget, and InformationWeek, as well as many others. Her latest book is “Data Divination: Big Data Strategies.” She’s also a popular speaker at technology conferences as well as specialty conferences such as the Excellence in Journalism events and a medical research and healthcare event at the NY Academy of Sciences.
