August 2022 - May 2023
During this two-semester capstone project, I worked with five other undergraduates and a computer engineering professor to develop a tool for assessing the robustness of AI-based security systems against adversarial attacks. The tool is intended for researchers building machine-learning defenses against malware: it helps test, evaluate, and strengthen malware detection software.
Robustness was measured by the detection system's ability to catch microarchitectural attacks specifically crafted to evade it. The software generates these evasive adversarial attacks by inserting artificial noise into the attack's instruction stream so that its power signature mimics benign behavior and exploits blind spots in the security system's underlying machine-learning model.
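The core idea can be illustrated with a minimal sketch. This is not the actual tool: the per-instruction power values, the instruction names, and the insert_noise and power_signature helpers are all hypothetical placeholders. It only shows how padding an attack's instruction stream with low-power filler drags its aggregate power signature toward a benign profile.

```python
import random

# Hypothetical per-instruction power profile (illustrative values only),
# standing in for the profiling step described below.
INSTRUCTION_POWER = {
    "nop": 0.2,
    "mov": 0.5,
    "add": 0.6,
    "clflush": 1.4,   # cache-flush instructions common in microarchitectural attacks
    "rdtsc": 1.1,     # timing reads used by many side-channel attacks
}

def insert_noise(attack_trace, filler=("nop", "mov", "add"), ratio=0.5, seed=0):
    """Interleave low-power filler instructions into an attack's instruction
    trace so its aggregate power signature drifts toward a benign profile.
    `ratio` controls how often a filler instruction follows an attack instruction."""
    rng = random.Random(seed)
    padded = []
    for instr in attack_trace:
        padded.append(instr)
        if rng.random() < ratio:
            padded.append(rng.choice(filler))
    return padded

def power_signature(trace):
    """Very rough proxy for a power signature: mean per-instruction power."""
    return sum(INSTRUCTION_POWER[i] for i in trace) / len(trace)

attack = ["mov", "clflush", "rdtsc", "clflush", "rdtsc"]
evasive = insert_noise(attack, ratio=1.0)
print(power_signature(attack), power_signature(evasive))  # the padded trace has a lower mean
```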
To achieve the project goals, we first profiled the power signatures of benign programs, malicious programs, and individual x86 instructions. We then developed instruction insertion logic that accounts for the attack's structure and each inserted instruction's effect on the power signature to create these adversarial examples. Finally, we built a method to test and evaluate the model against the generated attacks.
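Continuing the sketch above (and reusing its insert_noise, power_signature, and attack placeholders), the evaluation step can be illustrated by measuring how often a detector misses the noise-padded traces. The threshold detector here is only a stand-in for the actual machine-learning model, and the numbers are illustrative.

```python
def simple_detector(trace, threshold=0.8):
    """Stand-in detector: flag a trace as malicious if its mean power is high."""
    return power_signature(trace) > threshold

def evasion_rate(detector, adversarial_traces):
    """Fraction of adversarial traces that slip past the detector."""
    missed = sum(1 for trace in adversarial_traces if not detector(trace))
    return missed / len(adversarial_traces)

# Generate many padded variants of the same attack and see how many evade detection.
adversarial = [insert_noise(attack, ratio=1.0, seed=s) for s in range(100)]
print(f"Evasion rate: {evasion_rate(simple_detector, adversarial):.0%}")
```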
I led the project's backend development: I conducted experiments, built testing systems, created the instruction insertion logic, and helped set the project's direction.
Early in the project, we succeeded in creating adversarial examples that fooled the detection models. The challenge then became refining how we altered power signatures and using those findings to build a more resilient model. We improved our data collection methods to train a better model, uncovered insights into how power signatures can be altered, and wrote software for rapid testing.