The Real Threat of Generative AI and How it Will Impact the Channel

We need cybersecurity that leverages generative AI, embedded safety controls and watermarking to counter the coming wave of AI-orchestrated attacks.

Adrien Gendre

October 16, 2023

2023 has been the year of generative AI. With forecasts predicting the technology could raise AI’s total economic impact to as much as $25.6 trillion annually, it has captured our attention and imagination. Yet it’s also raised acute concerns for those in the cybersecurity community, specifically in the channel. Experiments have revealed generative AI’s ability to generate error-free phishing kits almost instantaneously across a variety of languages. The rapid advancement of the technology has led many to envision a new kind of threat landscape that is far more malicious and active than before, with targets large and small.

But up until now, these fears have focused on a limited use case of the technology. Content creation — while worth our immediate attention — presents a problem our modern solutions can readily solve. For example, natural language processing (NLP) models can detect and neutralize spear-phishing attacks based on abusive content patterns, including phrasing and flag words. Yet our capabilities fall distinctly short when defending against attacks planned and orchestrated by generative AI.
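To make that concrete, here is a minimal sketch of the content-based detection described above: a text classifier that scores messages by their similarity to known abusive phrasing. The toy data and scikit-learn pipeline are illustrative assumptions, not any vendor's production model.

```python
# Minimal sketch of content-based phishing detection: a text classifier
# flags messages whose phrasing matches known abusive patterns.
# Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples: 1 = phishing, 0 = legitimate
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Wire the payment today, the CEO needs this handled quietly",
    "Attached are the meeting notes from Tuesday's planning call",
    "Reminder: the quarterly report is due at the end of the month",
]
labels = [1, 1, 0, 0]

# TF-IDF features capture flag words and phrasing; the classifier
# scores new messages by similarity to abusive content patterns.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your account immediately to avoid suspension"]
print(model.predict_proba(suspect))  # [P(legitimate), P(phishing)]
```

In practice, such models are trained on large corpora and combined with many other signals, but the point stands: phrasing-level detection of AI-written content is a pattern our current tooling already handles.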

In this article, we examine the threat of generative AI through a new lens, outline three essential measures we must take now to protect our cybersecurity future, and consider how generative AI impacts the channel.

Content Creation Versus Orchestration

The real threat of generative AI is attack orchestration, compounded by the lack of resources among the SMBs and MSPs that must fight off these sophisticated attacks.

Today, attacks require a team of dedicated hackers who possess a wide and diverse skill set. Hackers need talent for coding, networking and discovering vulnerabilities in an environment. They also need extensive training on the unique solutions used by their intended targets, such as endpoint detection and response (EDR) solutions. And most importantly, they need time — often weeks or months — before they can compromise an organization or environment.

This slow and time-intensive effort gives organizations the forensic evidence needed for detection, as well as sufficient time to effectively respond. Using AI-powered threat detection and response solutions, they can identify suspicious activity, raise alerts and deploy effective incident response — assuming they have the capabilities to do so.

Yet the balance of power swiftly changes when generative AI enters the equation. Imagine the technology designing attack scenarios and executing them on the fly. Initially, our solutions block them, yet generative AI learns from its failures and strikes again. Unlike cyberwarfare under the present model, a successful attack doesn’t take weeks or months, but seconds or minutes. This is where it becomes nearly impossible for the small-to-midsize businesses (SMBs) that the channel largely serves to fight back. SMBs often have little to no resources to detect attacks, let alone respond to them — and that’s before generative AI even enters the equation. Add it in, and fighting off cyberattacks becomes extremely difficult, if not impossible, for the SMB.

The gravest threat of generative AI lies in its ability to streamline the most difficult and impactful aspects of a cybercriminal's work. The threat is real and imminent, which is why we must act now. And action means concentrating on three important measures.

  1. Adopt cybersecurity solutions leveraging generative AI. How do we defend ourselves against the malicious use of generative AI? By fighting fire with fire.

To protect ourselves, we must adopt cybersecurity solutions that leverage generative AI. This means using technology that can automatically design a defense strategy and make decisions in real time. While this kind of solution doesn't exist today, development is underway. Once it becomes available, the channel will need to embrace it readily and develop practices for speaking to buyers about the importance of using generative AI to their benefit before it can become their downfall.

  2. Design a new kind of social graph. A social graph is a defense measure that detects anomalies and spoofing attempts by understanding the relationships and habits of communication within an organization. Currently, social graphs operate using information that generative AI can access. Through an exhaustive search on the internet, AI can easily extrapolate or learn the news, communication patterns and existing relationships of a specific company. As a result, it can acquire the context it needs to design highly sophisticated and convincing attacks.

To fortify our defenses, we must design social graphs that base detection on data that AI can't access. That calls for using computed data, rather than collected data, to feed our AI models such as NLP. This will make it much harder for generative AI to guess and exploit our communication patterns, while ensuring we can detect spoofing attempts. And while this type of social graph doesn't exist today, it too is imminent, and solutions that use updated graph models will excel in the channel marketplace. The sketch below illustrates the principle.
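As a rough illustration of the computed-data principle, consider a graph whose edge weights come only from observed internal mail flow, something an outside model cannot scrape. This is a minimal sketch with hypothetical names and scoring, not a production design.

```python
# A minimal sketch, assuming edge weights computed from observed internal
# mail flow (data an outside model can't scrape). Names are hypothetical.
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        # (sender, recipient) -> count of observed internal messages
        self.edges = defaultdict(int)

    def observe(self, sender: str, recipient: str) -> None:
        self.edges[(sender, recipient)] += 1

    def anomaly_score(self, sender: str, recipient: str) -> float:
        """1.0 for a never-seen pair, decaying toward 0 as the
        relationship is established by internal traffic."""
        seen = self.edges.get((sender, recipient), 0)
        return 1.0 / (1.0 + seen)

graph = SocialGraph()
for _ in range(20):
    graph.observe("cfo@example.com", "ap@example.com")

# A lookalike "CFO" address ("rn" in place of "m") writing to accounts
# payable has no history in the graph and scores as highly anomalous.
print(graph.anomaly_score("cfo@exarnple.com", "ap@example.com"))  # 1.0
print(graph.anomaly_score("cfo@example.com", "ap@example.com"))   # ~0.05
```

The design point is that the signal comes from traffic only the defender observes, so a generative model scraping public sources has nothing to imitate.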

  3. Implement safety controls and watermarking. While the first two solutions won't happen overnight, one action can. Generative AI creators must embed safety controls in their technology — and do so immediately. There's no excuse for producing these innovations without measures to address ethical issues.

This isn’t to say generative AI creators haven’t introduced controls. They have, but they’re limited in two important ways. First, controls are implemented on a use-case basis. Second and most importantly, these controls are embedded in the software that interfaces with users, not the AI itself.

Another issue is that, currently, AI-produced content doesn't carry a watermark identifying the source of a creation. Watermarks are essential to differentiating between artificial and human producers. We have all seen how phishers take advantage of generative AI tools to author malicious emails without the telltale signs of phishing (misspellings, grammatical errors, etc.). Phishers aren't just targeting Fortune 500 companies, despite what major news outlet headlines may say. SMBs and MSPs are being targeted with these generative AI-developed emails, and adding watermarks is just one step that can be taken to keep these organizations safe.
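To illustrate what a verifiable provenance mark could enable downstream, here is a toy sketch: if generators stamped their output with a checkable tag, a mail filter could test for it. The HMAC scheme and shared key below are hypothetical stand-ins; real text watermarking proposals (e.g., statistical token-level schemes) work quite differently.

```python
# Toy sketch of provenance marking: the generator appends a verifiable tag,
# and a downstream filter checks for it. The key and tag format are
# hypothetical; this is not an existing standard.
import hashlib
import hmac

GENERATOR_KEY = b"hypothetical-shared-verification-key"

def stamp(text: str) -> str:
    """Generator side: append a provenance tag to AI-produced text."""
    tag = hmac.new(GENERATOR_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[ai-provenance:{tag}]"

def is_ai_generated(stamped: str) -> bool:
    """Filter side: verify the tag to identify machine-authored content."""
    body, _, tail = stamped.rpartition("\n[ai-provenance:")
    if not tail.endswith("]"):
        return False
    expected = hmac.new(GENERATOR_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tail[:-1], expected)

print(is_ai_generated(stamp("Please review the attached invoice.")))  # True
print(is_ai_generated("Please review the attached invoice."))         # False
```

For anything like this to work at scale, the marking scheme would have to be standardized across generators and enforced, which is where regulation comes in.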

We need generative AI creators to embed safety controls and watermarks in the core of their technology — not tomorrow, but today. And for enforcement, we need international laws governing these practices.

Generative AI: Creating a Safer Frontier

To be clear, I'm not standing in the way of progress. Just the opposite. We should do everything in our power to continue innovating and developing our capabilities in this area. Research has shown that channel marketers can benefit significantly from using generative AI, and it even allows MSPs and SMBs to automate tasks, free up personnel, analyze customer data and behavior to tailor solutions, and more. We can and should embrace this technology in the channel where it makes sense, but we should also do everything we can to ensure our progress today doesn't jeopardize our future.

Now is not the time for a wait-and-see approach. We need action — and from all corners of our global technology community.

Adrien Gendre is co-founder and chief technology and product officer at Vade, a predictive email defense company. A speaker at M3AAWG (Messaging, Malware & Mobile Anti-Abuse Working Group), he shares his expertise to educate businesses about email threats. He earned his executive MBA at HEC Paris. You may follow him @VadeSecure on X or on LinkedIn.
