
Adversarial Learning: Robust and Reliable Machine Learning Models

Kia ora! I am Luke Chang, and I am passionate about building more reliable machine learning models: artificial intelligence that people can trust.

I started my machine learning journey by designing a transmission controller for an engine in my third-year undergraduate project. I was fascinated by how artificial intelligence can be integrated into everyday systems. When I learnt that even state-of-the-art machine learning models trained on large amounts of data are vulnerable to malicious attacks, I decided to contribute to solving this problem.

Can you imagine the consequences of a self-driving vehicle misidentifying a stop sign as a speed limit sign? An attacker can deceive the model simply by making minor changes to the stop sign. This is called an adversarial attack, and it highlights a real threat: so far, there is no perfect defence against adversarial examples, and no machine learning model is immune to attack.
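
One common way to craft such a perturbation is the fast gradient sign method (FGSM): nudge every input feature slightly in the direction that increases the model's loss. Below is a minimal sketch in PyTorch; the tiny model, the random image, and the epsilon value are illustrative placeholders, not any real self-driving system.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small perturbation that increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that raises the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny stand-in model and input, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)    # a stand-in for a traffic-sign image
label = torch.tensor([0])       # its true class, e.g. "stop sign"
x_adv = fgsm_perturb(model, x, label)
print((x_adv - x).abs().max())  # the change stays within epsilon per pixel
```

Bounding each feature's change by epsilon is what keeps the perturbation nearly imperceptible to a human while still potentially flipping the model's prediction.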

My research focuses on understanding why and how this behaviour arises. After investigating the relationship between adversarial examples and classification models, I noticed that most adversarial examples fall into one of three regions of the input space. First, outliers: unconstrained inputs that do not comply with the model's intended use case. Second, inputs that lie close to the model's decision boundary. Third, inputs that sit inside a “pocket” formed by the decision boundary, where the attack is disguised because the model has overfitted the training data.

I developed a novel multi-stage defensive framework that tests for these characteristics; a simplified sketch of the idea follows below. The framework shows promising results on a wide range of machine learning models and different types of data. My goal is to make machine learning models more resilient to attacks, and I believe my research will have a significant impact on people's daily lives.
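
To make the three checks concrete, here is a hypothetical sketch in Python with scikit-learn, loosely inspired by that idea. The class name, the per-stage tests, and the thresholds are my own illustrative choices, not the published implementation of the framework.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class StagedDetector:
    """Illustrative three-stage filter; not the published framework."""

    def __init__(self, n_neighbors=10):
        self.nn = NearestNeighbors(n_neighbors=n_neighbors)

    def fit(self, X_train, y_train):
        self.y_train = np.asarray(y_train)
        # Stage 1 (outliers): record the per-feature range of the training data.
        self.lo = X_train.min(axis=0)
        self.hi = X_train.max(axis=0)
        self.nn.fit(X_train)
        return self

    def flags(self, X, y_pred, dist_threshold, agree_threshold=0.5):
        # Stage 1: flag inputs that leave the training data's feature range.
        outlier = ((X < self.lo) | (X > self.hi)).any(axis=1)
        dists, idx = self.nn.kneighbors(X)
        # Stage 2: flag inputs whose mean distance to their nearest training
        # points is unusually large, i.e. in sparsely supported regions.
        too_far = dists.mean(axis=1) > dist_threshold
        # Stage 3: flag inputs whose neighbours mostly disagree with the
        # model's prediction, hinting at a "pocket" in the decision boundary.
        agree = (self.y_train[idx] == np.asarray(y_pred)[:, None]).mean(axis=1)
        disputed = agree < agree_threshold
        return outlier | too_far | disputed
```

In practice each stage would be calibrated on clean validation data; the point of the sketch is only that the three regions suggest three independent, cheap tests that can be stacked.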

Building a reproducible experiment isn't easy, and it becomes harder still with many moving parts: preprocessing the data, choosing the best machine learning model, executing attacks to break the model, and then finding a way to block the most vicious of those attacks. I encountered a level of complexity I had never seen before. It is challenging, but I find it very rewarding.
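
One small habit that tames at least some of that complexity is pinning every source of randomness before any stage runs. A minimal sketch, assuming a PyTorch-based pipeline; the helper name is my own:

```python
import os
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Pin the common random number generators to one seed."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)  # call once, before preprocessing, training, attacks, defences
```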

To my younger self: If it’s too easy to achieve, then it’s probably not worth pursuing. Just don’t give up.

2024

Kim, Jonathan; Urschler, Martin; Riddle, Pat; Wicker, Jörg

Attacking the Loop: Adversarial Attacks on Graph-based Loop Closure Detection Proceedings Article

In: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 90-97, 2024.

2023

Chang, Xinglong; Dost, Katharina; Dobbie, Gillian; Wicker, Jörg

Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks Unpublished Forthcoming

Forthcoming.

Chang, Xinglong; Dobbie, Gillian; Wicker, Jörg

Fast Adversarial Label-Flipping Attack on Tabular Data Unpublished Forthcoming

Forthcoming.

Chang, Luke; Dost, Katharina; Zhai, Kaiqi; Demontis, Ambra; Roli, Fabio; Dobbie, Gillian; Wicker, Jörg

BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability Proceedings Article

In: Kashima, Hisashi; Ide, Tsuyoshi; Peng, Wen-Chih (Eds.): The 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), pp. 3-14, Springer Nature Switzerland, Cham, 2023, ISBN: 978-3-031-33374-3.

Chen, Zeyu; Dost, Katharina; Zhu, Xuan; Chang, Xinglong; Dobbie, Gillian; Wicker, Jörg

Targeted Attacks on Time Series Forecasting Proceedings Article

In: Kashima, Hisashi; Ide, Tsuyoshi; Peng, Wen-Chih (Eds.): The 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), pp. 314-327, Springer Nature Switzerland, Cham, 2023, ISBN: 978-3-031-33383-5.

2017

Wicker, Jörg; Kramer, Stefan

The Best Privacy Defense is a Good Privacy Offense: Obfuscating a Search Engine User’s Profile Journal Article

In: Data Mining and Knowledge Discovery, vol. 31, no. 5, pp. 1419-1443, 2017, ISSN: 1573-756X.
