Non-Uniform Adversarially Robust Pruning

Abstract

Neural networks are often highly redundant and can thus be effectively compressed to a fraction of their original size using model pruning techniques without harming overall prediction accuracy. At the same time, pruned networks need to remain robust against attacks such as adversarial examples. Recent research on combining these objectives has made significant advances using uniform compression strategies, that is, all layers are compressed equally according to a preset compression ratio. In this project, we show that non-uniform compression strategies improve both clean-data accuracy and adversarial robustness under high overall compression, in particular with channel pruning. We leverage reinforcement learning to find an optimal trade-off and demonstrate that the resulting compression strategy can serve as a plug-in replacement for the uniform compression ratios of existing state-of-the-art approaches.
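To illustrate the difference between uniform and non-uniform channel pruning, the following sketch keeps the highest-magnitude output channels of each layer according to a per-layer pruning ratio. This is a toy example using L1 channel importance as the score; it is not the Heracles search itself, and all names are illustrative.

```python
import numpy as np

def prune_channels(weights, ratios):
    """Keep the most important output channels of each layer.

    weights: list of arrays, one per layer, shape (out_channels, ...)
    ratios:  per-layer pruning ratios in [0, 1); ratios[i] is the
             fraction of channels removed from layer i.
    Returns the indices of the kept channels for each layer.
    """
    kept = []
    for w, r in zip(weights, ratios):
        # L1 norm of each output channel serves as its importance score
        scores = np.abs(w.reshape(w.shape[0], -1)).sum(axis=1)
        n_keep = max(1, int(round(w.shape[0] * (1.0 - r))))
        kept.append(np.sort(np.argsort(scores)[::-1][:n_keep]))
    return kept

# Two toy conv layers with 8 output channels each
rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 4, 3, 3)) for _ in range(2)]

# Uniform: both layers lose half their channels (4 kept each).
uniform = prune_channels(layers, [0.5, 0.5])
# Non-uniform: same overall budget, but the first layer is pruned
# lightly (6 kept) and the second aggressively (2 kept).
non_uniform = prune_channels(layers, [0.25, 0.75])
```

A search procedure such as reinforcement learning can then explore different per-layer ratio assignments under a fixed overall budget, which is where non-uniform strategies gain their advantage over a single global ratio.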

Team

Proof-of-Concept Implementations

For the sake of reproducibility and to foster future research, we make the implementations of Heracles for generating non-uniform pruning strategies publicly available at:

https://github.com/intellisec/heracles

Publication

A detailed description of our work was presented at the International Conference on Automated Machine Learning (AutoML 2022) in July 2022. If you would like to cite our work, please use the reference provided below:

@InProceedings{Zhao2022Heracles,
  author    = {Qi Zhao and Tim K{\"o}nigl and Christian Wressnegger},
  title     = {Non-Uniform Adversarially Robust Pruning},
  booktitle = {Proc. of the International Conference on Automated
               Machine Learning ({AutoML})},
  year      = 2022,
  month     = jul
}

A preprint of the paper is available here.