Summary
Malware and ransomware pose significant threats to every IT-based infrastructure. Given the scale of the problem, many AI-based solutions have been developed to detect and mitigate malware threats, and these engines are now an integral part of organizations’ cybersecurity technology stacks. However, these engines are vulnerable to adversarial learning-based attacks, in which an adversary generates payloads that fool the AI engine or attempts to alter its learning space so that malicious payloads are classified as benign. This invention offers a firewall that protects AI-based malware detection systems against such adversarial attacks.
Innovators
Dr. Ali Dehghantanha (adehghan@uoguelph.ca [1]) is a Canada Research Chair in Cybersecurity and Threat Intelligence and an Associate Professor at the University of Guelph, ON, Canada. He is the director of the Cyber Science Lab, a research lab dedicated to advancing research and training in cybersecurity, and the founder and director of the Master of Cybersecurity and Threat Intelligence program at the University of Guelph.
Dr. Hamed Haddadpajouh (hhaddadp@uoguelph.ca [2]) holds a Ph.D. in computational sciences and is a senior researcher at the Cyber Science Lab at the University of Guelph, ON, Canada. He has conducted a wide range of research with industrial partners in Canada, focusing on applying machine learning to cyber threat hunting and cyber threat intelligence.
System Overview
Figure 1 - The high-level pipeline of the AI firewall for preventing adversarial malware samples and generating new adversarial samples for AI-based malware detection engines
Adversarial generative method
The adversarial generative module generates variants of adversarial malicious samples based on the static properties of the executable files in its dataset and stores their signatures in an adversarial sample database. The module may also receive an external dataset and generate a variety of adversarial examples from its legitimate samples, eventually building a knowledge base for recognizing incoming adversarial attacks.
Figure 2 - The high-level pipeline of the adversarial generative method, which generates adversarial samples from new datasets and stores them in the adversarial signature database.
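As a rough illustration of this module, the sketch below perturbs executable bytes and records each variant's signature. The byte-flip perturbation, SHA-256 signatures, and SQLite schema are illustrative assumptions, not the patented method; a real generator must also preserve the executability and evasiveness of each variant.

```python
import hashlib
import random
import sqlite3

def perturb_executable(data: bytes, n_mutations: int = 16) -> bytes:
    """Produce one adversarial variant of an executable by flipping a few
    bytes and appending overlay padding (illustrative perturbation only)."""
    buf = bytearray(data)
    for _ in range(n_mutations):
        pos = random.randrange(len(buf))
        buf[pos] ^= 0xFF                            # flip one byte
    buf += b"\x00" * random.randrange(32, 256)      # append overlay padding
    return bytes(buf)

def build_signature_db(samples: list[bytes],
                       variants_per_sample: int = 10) -> sqlite3.Connection:
    """Generate variants for each input sample and store their SHA-256
    digests in an (in-memory) adversarial signature database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE adversarial_signatures (sha256 TEXT PRIMARY KEY)")
    for sample in samples:
        for _ in range(variants_per_sample):
            digest = hashlib.sha256(perturb_executable(sample)).hexdigest()
            db.execute(
                "INSERT OR IGNORE INTO adversarial_signatures (sha256) VALUES (?)",
                (digest,),
            )
    db.commit()
    return db
```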
Adversarial malware prevention method
This module preserves the robustness of the framework by extracting the signatures of detected adversarial samples based on the characteristics of the AI-powered malware detection engine it is protecting. Incoming executable files are assessed by their feature vectors and matched against the adversarial signature database to detect exact or similar patterns. New, previously unseen patterns are stored in the adversarial signature database for future detection. Legitimate binary files are passed through, while adversarial samples are dropped.
Figure 3 - The high-level pipeline of the adversarial prevention method
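A minimal sketch of the matching step described above, assuming cosine similarity over feature vectors, a fixed similarity threshold, and a boolean flag standing in for the engine-specific robustness check; none of these choices are specified by the invention itself.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class AdversarialFirewall:
    """Sits in front of an AI-based detection engine: passes legitimate
    files, drops files whose features match known adversarial patterns."""

    def __init__(self, threshold: float = 0.95):
        self.signature_db: list[list[float]] = []  # known adversarial feature vectors
        self.threshold = threshold

    def matches_known_adversarial(self, features: list[float]) -> bool:
        return any(
            cosine_similarity(features, sig) >= self.threshold
            for sig in self.signature_db
        )

    def filter(self, features: list[float], flagged_by_detector: bool) -> str:
        """Return 'drop' for adversarial inputs, 'pass' otherwise.
        `flagged_by_detector` is a stand-in for whatever robustness check
        marks a sample as adversarial for the protected engine."""
        if self.matches_known_adversarial(features):
            return "drop"
        if flagged_by_detector:
            self.signature_db.append(features)  # remember the new pattern
            return "drop"
        return "pass"

# Example: AdversarialFirewall().filter([0.1, 0.9, 0.3], flagged_by_detector=True) -> 'drop'
```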
Potential Applications
This technology can protect any AI-based malware detection and mitigation system against adversarial threats, with wide applications across different domains.
- Protecting endpoint AI-based malware threat detection systems:
The system can be used alongside host-based and endpoint security solutions for detecting adversarial malware examples on the endpoints.
- Protecting AI-based ransomware detection systems:
Not only can ransomware samples be generated using adversarial machine learning techniques, but ransomware payloads can also be inserted into benign executable files. The innovation is capable of protecting AI-based ransomware detection engines against adversarial machine learning attacks.
- Protecting AI-based malware detection systems anywhere in the network:
A malicious executable file can easily spread over an enterprise’s network. The innovation is capable of creating adversarial pattern databases that can be used to generate Indicators of Compromise (IoCs) for adversarial malware. These IoCs can be used to enrich network-based security mechanisms, as sketched below.
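As a hedged sketch of how the adversarial pattern database might be exported for network tooling, the function below emits stored sample hashes as a simple JSON IoC feed. The field layout is an assumption for illustration; a production deployment would likely use a standard format such as STIX/TAXII.

```python
import json
from datetime import datetime, timezone

def signatures_to_iocs(signatures: list[str]) -> str:
    """Convert stored adversarial sample hashes into a simple JSON IoC
    feed that network security tools (IDS, SIEM) could ingest."""
    now = datetime.now(timezone.utc).isoformat()
    iocs = [
        {
            "type": "file-hash",
            "algorithm": "sha256",
            "value": sha256,
            "label": "adversarial-malware-variant",
            "generated": now,
        }
        for sha256 in signatures
    ]
    return json.dumps(iocs, indent=2)
```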
Overall, our methods can be seamlessly integrated with any AI-based malware detection system with very little impact on system performance, helping protect AI engines against adversarial attacks.
Status
- US provisional patent application 63/368,894 filed July 20, 2022
- Seeking partner(s) to develop commercial product
Additional Contacts:
For licensing inquiries
- Steve De Brabandere (sdebrab@uoguelph.ca [3])
For scientific inquiries
- Ali Dehghantanha (adehghan@uoguelph.ca [1])
- Hamed Haddadpajouh (hhaddadp@uoguelph.ca [2])