Adversarial AI Attacks
Adversarial artificial intelligence (AI) attacks will go mainstream as adoption of AI and machine learning (ML) models continues to grow across industries, Plaggemier said.
“In the coming year, we’re likely to see cyber adversaries using AI and ML models to create attacks that can self-propagate across a network or exploit vectors in data sets used to model ML frameworks,” she said. “First, AI algorithms can be trained on manipulated or fake data, known as poisoned data, which can cause the AI to make incorrect decisions or take malicious actions. Additionally, attackers can create adversarial examples, which are inputs designed to fool an AI system into making an incorrect decision. Another way that adversarial AI can be vulnerable to cyberattacks is through the use of AI algorithms to manipulate and deceive individuals. This could involve creating fake social media profiles or websites that appear legitimate, but are actually designed to collect sensitive information or spread malware.”
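The “adversarial examples” Plaggemier describes can be illustrated with a minimal sketch. The snippet below is purely illustrative, not any attack referenced in the article: it uses a toy two-feature logistic-regression model with hypothetical hand-picked weights, and applies a gradient-sign perturbation (the core idea behind fast-gradient-sign-method attacks) to flip the model's decision on an input. Real attacks target large neural networks, but the mechanics are the same.

```python
import numpy as np

# Hypothetical "trained" weights for a linear binary classifier
# (stand-in for a real model; values chosen for illustration only).
w = np.array([2.0, -1.5])
b = 0.1

def predict(x):
    """Return P(class = 1) under a logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies confidently as class 1.
x = np.array([1.0, -1.0])
p_clean = predict(x)          # well above 0.5

# For a linear model, the gradient of the logit w.r.t. the input is
# simply w. Stepping against the sign of that gradient nudges the
# input toward the other class (FGSM-style perturbation).
eps = 1.2                     # perturbation budget (illustrative)
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)        # now below 0.5: the decision flips
```

The perturbation changes each feature by at most `eps`, yet the classifier's output crosses the decision boundary. Against image classifiers, the same idea produces perturbations small enough to be invisible to a human while still changing the predicted label.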
Adversarial AI will also likely be used to scale and sustain existing attacks, such as those disrupting critical infrastructure like power grids or transportation systems, Plaggemier said. Because AI algorithms can learn and adapt, they are particularly well suited to this type of attack, and the potential consequences could be devastating.
“Finally, adversarial AI attacks will be used and incorporated in quantum computing breaches,” she said. “This is because the speed and power of quantum computing make them a prime target because they can manipulate and deceive AI systems. This can lead to critical infrastructure disruptions and the undermining of trust in AI systems that need to be trusted by the public at large to achieve their full potential.”