Performance Review: AI in Cybersecurity
… many and diverse.
“AI is used in cybersecurity to detect threats and identify vulnerabilities using machine learning, neural networks, artificial intelligence algorithms and graph theory. It is applied to all areas of cybersecurity, including the end customer, the security operations center, gateways, millisecond backend data queries, and internal business intelligence and research,” says Dr. Celeste Fralick, Chief Data Scientist at McAfee.
“AI simply enables humans to make better, faster, and smarter decisions than they would alone. It is performing well against expectations, mainly because it is impossible for people alone to efficiently and effectively analyze all the data, unless machines and algorithms support our human efforts and understanding,” Fralick added.
Keep in mind, too, the foundational difference between AI and ML. AI makes decisions and takes action on its own, whereas ML provides information to humans who then make decisions and take action. However, ML can be coupled with automation to perform certain security tasks, speeding problem resolution and freeing humans for more challenging work.
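That division of labor can be illustrated with a minimal sketch. Everything here is hypothetical, not drawn from any vendor's product: a stand-in "model" scores events, automation handles only the clear-cut cases, and ambiguous events are escalated to a human analyst.

```python
# Hypothetical sketch: an ML-style risk score feeds an automation layer.
# Clear-cut cases are handled automatically; ambiguous ones go to humans.

def risk_score(event):
    # Stand-in for a trained ML model: a toy heuristic over event fields.
    score = 0.0
    if event.get("failed_logins", 0) > 10:
        score += 0.5
    if event.get("geo_mismatch"):
        score += 0.4
    if event.get("off_hours"):
        score += 0.2
    return min(score, 1.0)

def triage(event, block_threshold=0.8, review_threshold=0.4):
    score = risk_score(event)
    if score >= block_threshold:
        return "auto-block"      # automation acts without human input
    if score >= review_threshold:
        return "human-review"    # ML informs, a human decides
    return "ignore"

event = {"failed_logins": 25, "geo_mismatch": True, "off_hours": True}
print(triage(event))  # auto-block
```

The thresholds encode the trade-off the article describes: the wider the "human-review" band, the more the system behaves like pure ML decision support; the narrower it is, the more it behaves like autonomous AI.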
“Cybersecurity talent is at a premium, as teams work to defend their environments. The lack of available talent is requiring teams to develop automation to keep pace. AI has two primary themes of use in cybersecurity to accomplish that. The first is distilling and refining information from existing systems that alert and capture logs. The second is generating novel detection of attacks that previous approaches miss. These two areas are where AI has the ability to provide measurable improvement to how teams secure their environments,” says Dustin Rigg Hillard, CTO of eSentire.
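Hillard's first theme, distilling and refining information from systems that alert and capture logs, often starts with something as simple as deduplicating raw alerts and ranking them by frequency, so analysts see one prioritized line per signature instead of thousands of repeats. A minimal sketch (the alert strings are invented for illustration):

```python
from collections import Counter

# Invented sample of a raw alert stream for illustration.
raw_alerts = [
    "IDS: port-scan from 10.0.0.5",
    "IDS: port-scan from 10.0.0.5",
    "AV: malware-hash-match on host-17",
    "IDS: port-scan from 10.0.0.5",
    "FW: blocked-outbound to known-c2",
]

# Collapse duplicates and rank by frequency, so analysts review one
# line per distinct signature instead of the raw stream.
distilled = Counter(raw_alerts).most_common()
for alert, count in distilled:
    print(f"{count:>3}x {alert}")
```

Real products replace the frequency count with learned scoring, but the goal is the same: reduce the volume a human must read without discarding signal.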
However, cybersecurity teams often struggle to make ML and automation work smoothly enough that they can move on to other tasks.
“Many teams have adopted AI technologies and found that they do not immediately improve outcomes, and in the worst cases they can actually add workload by creating additional work in understanding output without obvious benefits,” says Hillard.
“Many vendors promote AI as part of their solution, without clear descriptions of how it improves outcomes. This is leading to skepticism in adopting new AI technologies. The security industry needs to do a better job of communicating how AI can improve outcomes for teams, rather than promoting technology for the sake of technology,” Hillard added.
However, that is a prime opportunity for MSSPs: security teams badly need help defining the outcomes they require and matching those needs with appropriate AI tools.
There are also the aforementioned questions about the AI/ML tech under the hood of various cybersecurity products and platforms, as well as the skepticism that results.
Asking specific questions will help buyers and MSSPs find the right options.
“Buyers can and should ask how AI is working in the product and what functions it’s performing. They should also ask for quantitative and qualitative comparisons to existing processes as well as what training data has been used to train the algorithms and whether they are supervised or unsupervised,” advises Peteroy.
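To make the supervised-versus-unsupervised distinction in that last question concrete: a supervised detector learns from labeled examples of past attacks, while an unsupervised one flags whatever deviates from a learned baseline, with no labels at all. A toy unsupervised example, using a z-score over daily login counts (the data is fabricated for illustration):

```python
import statistics

# Unsupervised anomaly detection: no attack labels, only a baseline.
# Flag any day whose login count deviates sharply from the mean.
logins_per_day = [102, 98, 110, 95, 105, 99, 101, 430]  # fabricated data

mean = statistics.mean(logins_per_day)
stdev = statistics.stdev(logins_per_day)

# A z-score above 2 marks the observation as anomalous.
anomalies = [x for x in logins_per_day if abs(x - mean) / stdev > 2]
print(anomalies)  # [430]
```

A supervised equivalent would instead train on historical days labeled "attack" or "benign," which is why the provenance and labeling of training data is worth asking vendors about.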