Adversarial Machine Learning

Overview

Semester:      Winter 2022
Course type:   Block Seminar
Lecturer:      Jun.-Prof. Dr. Wressnegger
Audience:      Informatik Master & Bachelor
Credits:       4 ECTS
Room:          148, Building 50.34
Language:      English or German
Link:          https://campus.kit.edu/campus/all/event.asp?gguid=0x1D232F8B626A4FC88FA59E823A2B29DF
Registration:  https://ilias.studium.kit.edu/goto.php?target=crs_1922846&client_id=produktiv

Description

This seminar is concerned with different aspects of adversarial machine learning. Beyond the use of machine learning for security, the security of machine learning algorithms themselves is essential in practice. For a long time, machine learning did not consider the worst-case scenarios and corner cases that adversaries exploit today.

The module introduces students to the highly active field of attacks against machine learning and teaches them to work through results from recent research. To this end, the students will read up on a sub-field, prepare a seminar report, and present their work to their colleagues at the end of the term.

Topics include, but are not limited to, adversarial examples, model stealing, membership inference, poisoning attacks, and defenses against such threats.

Schedule

Date                           Step
Tue, 25. Oct, 9:45–11:15       Primer on academic writing, assignment of topics
Thu, 3. Nov                    Arrange appointment with assistant
Mon, 7. Nov – Fri, 11. Nov     1st individual meeting (first overview, ToC)
Mon, 5. Dec – Fri, 9. Dec      2nd individual meeting (feedback on first draft of the report)
Thu, 22. Dec                   Submit final paper
Mon, 9. Jan                    Submit review for fellow students
Thu, 12. Jan                   End of discussion phase
Fri, 13. Jan                   Notification about paper acceptance/rejection
Fri, 27. Jan                   Submit camera-ready version of your paper
Fri, 17. Feb                   Presentation at final colloquium

Mailing List

News about the seminar, potential updates to the schedule, and additional material are distributed using a separate mailing list. Moreover, the list enables students to discuss topics of the seminar.

You can subscribe here.

Topics

Every student may choose one of the following topics. For each of these, we additionally provide a few recent top-tier publications that you should use as a starting point for your own research. For the seminar and your final report, you should not merely summarize these papers, but try to go beyond them and arrive at your own conclusions.

Moreover, most of these papers come with open-source implementations. Play around with these and include the lessons learned in your report.
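
To give a flavor of what such an experiment can look like, the following is a minimal, illustrative sketch of the Fast Gradient Sign Method (FGSM), the one-step attack underlying the adversarial-training topic below. It assumes PyTorch; model, x, and y are hypothetical placeholders for an arbitrary classifier and a labeled batch of inputs scaled to [0, 1].

    # Illustrative sketch only: an untargeted, one-step FGSM attack.
    # `model`, `x`, and `y` are hypothetical placeholders for any
    # PyTorch classifier and a labeled batch of inputs in [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        """Perturb x by a single signed-gradient step of size epsilon."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step every input dimension in the direction that increases
        # the loss the most, then clip back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()

In its simplest form, adversarial training replaces each clean batch x with fgsm_attack(model, x, y) during training (Madry et al. iterate this step, which yields PGD); comparing clean and adversarial accuracy before and after such a change is exactly the kind of small experiment worth reporting.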

  • Adversarial Training

    • Madry et al., "Towards deep learning models resistant to adversarial attacks", ICLR 2018
    • Wang et al., "Improving adversarial robustness requires revisiting misclassified examples", ICLR 2020

  • Certified Robustness

    • Li et al., "SoK: Certified Robustness for Deep Neural Networks", IEEE S&P 2023

  • Differentially Private ML

    • McKenna et al., "Graphical-model based estimation and inference for differential privacy", ICML 2019
    • Zhang et al., "PrivSyn: Differentially Private Data Synthesis", USENIX Security 2021

  • Anti-Backdoor Learning

    • Li et al., "Anti-Backdoor Learning: Training Clean Models on Poisoned Data", NeurIPS 2021
    • Huang et al., "Backdoor Defense via Decoupling the Training Process", ICLR 2022

  • Measuring Adversarial Robustness

    • Zhang et al., "Attacks Which Do Not Kill Training Make Adversarial Learning Stronger", ICML 2020
    • Tian et al., "Analysis and Applications of Class-wise Robustness in Adversarial Training", KDD 2021

  • Defense using Synthetic Data

    • Sehwag et al., "Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?", ICLR 2022
    • Gowal et al., "Improving Robustness using Generated Data", NeurIPS 2021

  • Attacks against Federated Learning

    • Bagdasaryan et al., "How To Backdoor Federated Learning", AISTATS 2020
    • Zhang et al., "Neurotoxin: Durable Backdoors in Federated Learning", ICML 2022

  • Defenses to Attacks against Federated Learning

    • Nguyen et al., "FLAME: Taming Backdoors in Federated Learning", USENIX Security 2022
    • Rieger et al., "DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection", NDSS 2022

  • Membership Inference Attacks (a minimal confidence-threshold baseline is sketched after this topic list)

    • Li and Zhang, "Membership Leakage in Label-Only Exposures", CCS 2021
    • Choquette-Choo et al., "Label-Only Membership Inference Attacks", ICML 2021
    • Jalalzai et al., "Membership Inference Attacks via Adversarial Examples", CoRR 2022

  • Defenses against MI Attacks

    • Jia et al., "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples", CCS 2019
    • Rahimian et al., "Differential Privacy Defenses and Sampling Attacks for Membership Inference", AISec 2021
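
To make the membership-inference topics above more concrete, the following is a minimal, illustrative sketch of the classic confidence-threshold baseline: an overfitted model tends to be more confident on its training data (members) than on unseen data (non-members). Again, model, x, and y are hypothetical placeholders, and the threshold would have to be calibrated on data known to lie outside the training set.

    # Illustrative sketch only: the classic confidence-threshold
    # membership-inference baseline. `model`, `x`, and `y` are
    # hypothetical placeholders (any PyTorch classifier and a labeled
    # batch); the threshold must be calibrated on known non-members.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def confidence_scores(model, x, y):
        """Softmax confidence the model assigns to each true label y."""
        probs = F.softmax(model(x), dim=1)
        return probs.gather(1, y.unsqueeze(1)).squeeze(1)

    def infer_membership(model, x, y, threshold=0.9):
        # Predict "member" wherever the model's confidence on the true
        # label exceeds the calibrated threshold.
        return confidence_scores(model, x, y) > threshold

The label-only attacks by Li and Zhang and by Choquette-Choo et al. are notable precisely because they work even when the defender withholds these confidence scores, while defenses such as MemGuard perturb the scores to destroy this very signal.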