Explanation-Aware Backdoors

Abstract

Explainable machine learning holds great potential for analyzing and understanding learning-based systems. These methods can, however, be manipulated to present unfaithful explanations, giving rise to powerful and stealthy adversaries. In this paper, we demonstrate how to fully disguise the adversarial operation of a machine learning model. Similar to neural backdoors, we change the model’s prediction upon trigger presence but simultaneously fool an explanation method that is applied post-hoc for analysis. This enables an adversary to hide the presence of the trigger or point the explanation to entirely different portions of the input, throwing a red herring. We analyze different manifestations of these explanation-aware backdoors for gradient- and propagation-based explanation methods in the image domain, before conducting a red-herring attack against malware classification.

For further details please consult the conference publication.
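
To illustrate the core idea, below is a minimal sketch of how such a dual objective could be trained against a gradient-based explanation in PyTorch. This is not the authors' implementation (see the repository linked below); the function names, the trigger encoding via trigger_mask and trigger_value, and the weighting factor lam are illustrative assumptions.

# Illustrative sketch only: an explanation-aware backdoor combines a standard
# backdoor objective with a term that manipulates a post-hoc explanation.
# Names (poisoning_step, trigger_mask, lam, ...) are assumptions, not taken
# from the paper's code.
import torch
import torch.nn.functional as F


def saliency(model, x, targets):
    # Gradient-based explanation: absolute gradient of the target logit
    # with respect to the input (kept differentiable via create_graph=True).
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, targets.view(-1, 1)).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs()


def poisoning_step(model, optimizer, x, y, trigger_mask, trigger_value,
                   target_class, target_expl, lam=1.0):
    # Stamp the trigger onto a copy of the batch.
    x_trig = x * (1 - trigger_mask) + trigger_value * trigger_mask
    y_trig = torch.full_like(y, target_class)

    loss_clean = F.cross_entropy(model(x), y)         # keep clean behaviour
    loss_bd = F.cross_entropy(model(x_trig), y_trig)  # trigger flips the label
    expl = saliency(model, x_trig, y_trig)            # explanation on triggered input
    loss_expl = F.mse_loss(expl, target_expl)         # push it towards a chosen map

    loss = loss_clean + loss_bd + lam * loss_expl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())

The target_expl map encodes the behaviour described in the abstract: it can either suppress attribution on the trigger region to hide it, or concentrate attribution on an unrelated part of the input to throw a red herring.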

Team

Maximilian Noppel, Lukas Peter, and Christian Wressnegger

Proof-of-Concept Implementations

For the sake of reproducibility and to foster future research, we make the implementations of our explanation-aware backdoors available at:
https://github.com/intellisec/xai-backdoors

Publication

A detailed description of our work was presented at the 44th IEEE Symposium on Security and Privacy (IEEE S&P 2023) in May 2023. If you would like to cite our work, please use the reference provided below:

@InProceedings{Noppel2023Disguising,
author =    {Maximilian Noppel and Lukas Peter and Christian Wressnegger},
title =     {Disguising Attacks with Explanation-Aware Backdoors},
booktitle = {Proc. of 44th IEEE Symposium on Security and Privacy ({S\&P})},
year =      2023,
month =     may
}
A preprint of the paper is available here and on arXiv.

Teaser Video

The presentation slides are available here.

Related Publications

In addition to the original paper, we provide a short version and a poster on this topic:

Poster: Fooling XAI with Explanation-Aware Backdoors.
Maximilian Noppel and Christian Wressnegger.
Proc. of 30th ACM Conference on Computer and Communications Security (CCS), to appear November 2023.

@InProceedings{Noppel2023Fooling,
author =    {Maximilian Noppel and Christian Wressnegger},
title =     {Poster: Fooling XAI with Explanation-Aware Backdoors},
booktitle = {Proc. of 30th ACM Conference on Computer and Communications Security ({CCS})},
year =      2023,
month =     nov
}

Explanation-Aware Backdoors in a Nutshell.
Maximilian Noppel and Christian Wressnegger.
Proc. of 46th German Conference on Artificial Intelligence (KI), September 2023.

@InProceedings{Noppel2023Explanation,
author =    {Maximilian Noppel and Christian Wressnegger},
title =     {Explanation-Aware Backdoors in a Nutshell},
booktitle = {Proc. of 46th German Conference on Artificial Intelligence ({KI})},
year =      2023,
month =     sep
}