Adversarial Machine Learning for Input Feature Obfuscation

Project Member(s): Liu, W.

Funding or Partner Organisation: NSW Department of Industry (NSW Defence Innovation Network)

Start year: 2020

Summary: Adversarial attacks are increasingly observed in real-life scenarios: an attacker deceives a machine learning (ML) model by making subtle changes to features of the input data. Our recent studies show that small-magnitude perturbations to image, text, and network data can cause significant test-time errors, even for well-trained deep CNN and DNN models. In this project, we will apply our expertise in adversarial machine learning to the scenario where the adversary can launch black-box attacks on ML models, i.e., supply inputs and observe the corresponding outputs.
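To illustrate the black-box setting described above, the following is a minimal sketch (not the project's actual method): the attacker can only query a model's `predict` function, and searches for a small input perturbation that flips the predicted label. The linear model, its weights, and all parameter values here are hypothetical stand-ins for illustration.

```python
import random

# Illustrative stand-in for a trained ML model. In the black-box setting the
# attacker can only call predict(); the weights below are hidden from them.
_WEIGHTS = [1.5, -2.0, 0.5]
_BIAS = 0.1

def predict(x):
    """Query-only interface: returns a hard class label (0 or 1)."""
    score = sum(w * xi for w, xi in zip(_WEIGHTS, x)) + _BIAS
    return 1 if score > 0 else 0

def black_box_attack(x, eps=0.3, max_queries=2000, seed=0):
    """Naive random-perturbation attack: repeatedly query the model with
    small random perturbations of the clean input until the label flips."""
    rng = random.Random(seed)
    clean_label = predict(x)
    for _ in range(max_queries):
        candidate = [xi + rng.uniform(-eps, eps) for xi in x]
        if predict(candidate) != clean_label:
            return candidate  # adversarial example found
    return None  # query budget exhausted

clean = [0.0, 0.0, 0.0]          # classified as class 1 by the stand-in model
adversarial = black_box_attack(clean)
```

Real black-box attacks are far more query-efficient than this random search, but the sketch captures the threat model: no access to gradients or parameters, only input-output queries.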

FOR Codes: Pattern Recognition and Data Mining, Information Processing Services (incl. Data Entry and Capture), Adversarial machine learning, Emerging Defence Technologies, Information systems, technologies and services not elsewhere classified