Adversarial Attacks on AI Malware Detection

August 2022 - May 2023

Overview

During this two-semester capstone project, I worked with five other undergraduates and a computer engineering professor to develop a tool for assessing the robustness of AI security systems against adversarial attacks. The tool is aimed at researchers building machine-learning defenses against malware, helping them test, evaluate, and strengthen malware detection software.

Robustness was measured by a detector's ability to identify microarchitectural attacks specifically crafted to evade it. Our software generates these evasive adversarial attacks by inserting artificial noise into the attack's instruction stream so that its power signature mimics benign activity, exploiting weaknesses in the security system's underlying machine-learning model.
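
As a rough illustration of the idea (not the project's actual insertion logic), the sketch below interleaves low-power filler instructions into an attack sequence until its average per-instruction power draw approaches a benign target. The instruction names and power costs are hypothetical placeholders:

import random

# Rough per-instruction power-cost estimates (hypothetical values).
BENIGN_FILLER = {
    "nop": 0.2,
    "mov eax, eax": 0.3,
    "lea rbx, [rbx]": 0.3,
}

def insert_noise(attack_instrs, attack_costs, target_mean, max_ratio=0.5):
    """Interleave benign filler instructions into an attack sequence so that
    its average per-instruction power draw moves toward a benign target.

    attack_instrs -- x86 instruction strings that make up the attack
    attack_costs  -- estimated power cost of each attack instruction
    target_mean   -- mean power level of a benign workload to mimic
    max_ratio     -- cap on inserted instructions relative to attack length
    """
    out, inserted = [], 0
    total, count = 0.0, 0
    budget = int(len(attack_instrs) * max_ratio)

    for instr, cost in zip(attack_instrs, attack_costs):
        out.append(instr)
        total += cost
        count += 1
        # While the running mean stays above the benign target and insertion
        # budget remains, pad with low-power filler to drag the signature down.
        while inserted < budget and total / count > target_mean:
            filler, filler_cost = random.choice(list(BENIGN_FILLER.items()))
            out.append(filler)
            total += filler_cost
            count += 1
            inserted += 1
    return out

In the actual project, insertion points also had to respect the attack's structure so the malicious behavior itself was preserved.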

Capstone Overview Diagram

To achieve the project goals, we first profiled the power signatures of benign workloads, malicious workloads, and individual x86 instructions. We then developed instruction-insertion logic that accounts for the attack's structure and each insertion's effect on the power signature to create adversarial examples. Finally, we built a method to test and evaluate the detection model against these generated attacks.
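
A minimal sketch of that last evaluation step, assuming a scikit-learn-style classifier over simple statistical power-trace features (the feature set and function names are illustrative, not the project's actual code):

import numpy as np

def extract_features(trace):
    """Summarize a raw power trace (a 1-D array of samples) into a few
    statistical features a detector might consume."""
    return np.array([trace.mean(), trace.std(), trace.max(), trace.min()])

def evasion_rate(detector, adversarial_traces):
    """Fraction of adversarial traces the detector labels benign (class 0)."""
    feats = np.vstack([extract_features(t) for t in adversarial_traces])
    preds = detector.predict(feats)
    return float(np.mean(preds == 0))

# Hypothetical usage with a trained detector and generated attack traces:
# print(f"Evasion rate: {evasion_rate(trained_detector, adversarial_traces):.1%}")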

Capstone Software Diagram

My Role

I led the backend development of the project. I conducted experiments, built testing systems, created the instruction-insertion logic, and helped steer the project's direction.

Results

Early in the project, we succeeded in creating adversarial examples that fooled the detection models. The challenge then became finding better ways to alter power signatures and, in turn, building a more resilient model. We developed improved data-collection methods that produced a stronger model, gained insights into how power signatures can be altered, and created software for rapid testing.

Skills Developed

C
Python
x86 Assembly
Machine Learning
Reverse Engineering
Malware Analysis
Leadership
Project Management

Links

sdmay23-16 • Robustness of Microarchitecture Attacks/Malware Detection Tools against Adversarial Artificial Intelligence Attacks
Iowa State's senior design project page
https://sdmay23-16.sd.ece.iastate.edu/
sdmay23-16
GitHub Repo
https://github.com/liama28/sdmay23-16
MAD-EN: Microarchitectural Attack Detection through System-wide Energy Consumption
Research paper that the project builds upon