Hackers & Artificial Intelligence: A Dynamic Duo

To best defend against an AI attack, security teams will need to adopt the mindset and techniques of a malicious actor.

The amplified efficiency of artificial intelligence (AI) means that once a system is trained and deployed, malicious AI can attack far more devices and networks, far more quickly and cheaply, than any human actor could. Given sufficient computing power, an AI system could launch many attacks at once, be more selective in its targets and more devastating in its impact. Next to that potential for mass destruction, even a nuclear explosion can sound rather limited.

Currently, attacker use of AI is being explored mainly at the academic level, and we have yet to see AI-powered attacks in the wild. Even so, there is plenty of talk in the industry about attackers using AI in their malicious efforts, and about defenders using machine learning as a defense technology.

There are three types of attacks in which an attacker can use AI:

AI-based cyberattacks: The malware carries AI algorithms as an integral part of its business logic. For instance, a model embedded in the malware and trained to recognize characteristic user and system activity patterns can decide when to execute the payload, tighten or relax its evasion and stealth configuration, and time its communications. An example of this is DeepLocker, demonstrated by IBM, which kept its ransomware payload encrypted and used a face recognition algorithm to autonomously decide which computer to attack, unlocking the payload only when the intended target was identified.
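To make the target-keying idea concrete, here is a minimal, benign sketch (not DeepLocker's actual implementation) of how a payload can be locked to an attribute of the intended target, such as the output of a face recognition model, so that it only decrypts in the right environment. The attribute values and the hash-based stream cipher are illustrative stand-ins:

```python
import hashlib

def derive_key(target_attribute: bytes) -> bytes:
    # The decryption key never appears in the code or the payload; it
    # exists only when the expected target attribute (e.g. a hash of a
    # face-recognition embedding) is observed at runtime.
    return hashlib.sha256(target_attribute).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Hash-counter keystream standing in for a real cipher, purely to
    # keep the sketch dependency-free. XOR makes it its own inverse.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Hypothetical payload locked to a hypothetical target identifier.
payload = b"logic that should run only on the intended machine"
locked = xor_stream(payload, derive_key(b"target-embedding-0042"))

# Wrong environment: decryption yields noise, so an analyst's sandbox
# never sees the cleartext logic.
print(xor_stream(locked, derive_key(b"analyst-sandbox")))
# Intended target: the payload unlocks.
print(xor_stream(locked, derive_key(b"target-embedding-0042")))
```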

AI-facilitated cyberattacks: The malicious code and malware running on the victim's machine does not itself include AI algorithms; the AI is used elsewhere in the attacker's environment. An example of this is info-stealer malware that uploads large amounts of personal information to the C&C server, where an NLP algorithm clusters and classifies the haul and flags the interesting items (e.g. credit card numbers). Another example is spear phishing, where an email is given a façade that looks legitimate by collecting and using information specifically relevant to the target.
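As a rough illustration of that server-side classification step, the sketch below flags credit-card-like strings in exfiltrated text using a regex plus the standard Luhn checksum. A real operation would run a trained NLP model over many categories of data, so treat this as the simplest possible stand-in:

```python
import re

def luhn_valid(digits: str) -> bool:
    # Luhn checksum: double every second digit from the right,
    # subtract 9 when doubling overflows, total must end in 0.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def flag_card_numbers(text: str) -> list[str]:
    # Candidate 13-19 digit runs, allowing spaces/dashes as separators.
    candidates = re.findall(r"(?:\d[ -]?){13,19}", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

stolen = "order notes: call 555-0100, card 4111 1111 1111 1111, ref 1234567890123"
print(flag_card_numbers(stolen))  # ['4111111111111111']
```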

Adversarial attacks: The use of malicious inputs or algorithms to subvert the functionality of benign AI systems. By probing or reverse engineering the target model, an attacker learns how its decision boundary behaves and crafts inputs that "break" it. Skylight Cyber recently demonstrated an example of this when they tricked Cylance's AI-based antivirus product into classifying a malicious file as benign, by appending strings taken from a whitelisted application.
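The mechanics are easy to see on a toy model. The sketch below is not Cylance's classifier; it is a made-up linear detector whose negatively-weighted "benign" features play the role of the whitelisted strings Skylight appended. Adding content that fires those features drags the score below the threshold without touching the malicious functionality:

```python
import numpy as np

# Toy linear "malware detector": flag a file when w @ x + b > 0.
# Features could be counts of suspicious strings or imports.
w = np.array([2.0, 1.5, 1.0,   # weights on malicious indicators
              -2.5, -2.5])     # weights on benign indicators (e.g. game strings)
b = -0.5

def detect(x: np.ndarray) -> float:
    return float(w @ x + b)

# A malicious sample: strong malicious indicators, no benign ones.
sample = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
print("original score:", detect(sample))   # 4.0 -> flagged

# Evasion: append content that fires the negatively-weighted features,
# just as the Cylance bypass appended strings from a whitelisted game,
# leaving the malicious functionality untouched.
evasive = sample + np.array([0.0, 0.0, 0.0, 1.0, 1.0])
print("evasive score:", detect(evasive))   # -1.0 -> classified benign
```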

The contest between constructive AI and malicious AI will continue to intensify, and techniques will keep crossing the opaque border that separates academic proofs of concept from actual full-scale attacks in the wild. This will happen incrementally as computing power (GPUs) and deep learning algorithms become more and more available to the wider public.

To best defend against an AI attack, you need to adopt the mindset and techniques of a malicious actor. Machine learning and deep learning experts need to be familiar with these techniques in order to build robust systems that will defend against them. 
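One hardening idea explored in the research literature, sketched below on the same toy detector used in the evasion example, is to make the model monotone in attacker-appendable content, so that adding bytes can only raise the suspicion score. This illustrates the general idea, not a recipe for any specific product:

```python
import numpy as np

# Same hypothetical linear detector as in the evasion example.
w = np.array([2.0, 1.5, 1.0, -2.5, -2.5])
b = -0.5

def detect_monotone(x: np.ndarray) -> float:
    # Zero out negative contributions from features an attacker can
    # freely inflate: benign-looking padding no longer buys a discount.
    contrib = w * x
    return float(np.where(w < 0.0, 0.0, contrib).sum() + b)

# The padded sample from the evasion example above stays flagged.
evasive = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print("hardened score:", detect_monotone(evasive))  # 4.0 -> still malicious
```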
