Adversarial Machine Learning

Overview

Semester: Winter 2020
Course type: Block Seminar
Lecturer: Jun.-Prof. Dr. Wressnegger
Audience: Informatik Master & Bachelor
Credits: 4 ECTS
Room: 148, Building 50.34 and online
Language: English or German
Link: https://campus.kit.edu/campus/lecturer/event.asp?gguid=0x1602699F5FBA4AE5AFF64437C3FA2CA2
Registration: https://ilias.studium.kit.edu/goto_produktiv_crs_1265039.html

Remote Course

Due to the ongoing COVID-19 pandemic, this course will start off remotely: the kick-off meeting will take place online. The final colloquium, however, will hopefully be held in person again.

To receive all the necessary information, please subscribe to the mailing list here.

Description

This seminar is concerned with different aspects of adversarial machine learning. In addition to the use of machine learning for security, the security of machine learning algorithms themselves is essential in practice. For a long time, machine learning did not consider worst-case scenarios and corner cases such as those exploited by adversaries today.

The module introduces students to the highly active field of attacks against machine learning and teaches them to work up results from recent research. To this end, each student will read up on a sub-field, prepare a seminar report, and present their work to their fellow students at the end of the term.

Topics include, but are not limited to, adversarial examples, model stealing, membership inference, poisoning attacks, and defenses against such threats.

Schedule

Date | Step
Tue, 3. Nov, 09:45–11:15 | Primer on academic writing, assignment of topics
Thu, 12. Nov | Arrange appointment with assistant
Mon, 16. Nov – Fri, 20. Nov | Individual meetings with assistant
Wed, 16. Dec | Submit final paper
Wed, 20. Jan | Submit review for fellow students
Fri, 22. Jan | End of discussion phase
Fri, 29. Jan | Submit camera-ready version of your paper
Fri, 12. Feb | Presentation at final colloquium

Mailing List

News about the seminar, potential updates to the schedule, and additional material are distributed using a separate mailing list. Moreover, the list enables students to discuss topics of the seminar.

You can subscribe here.

Topics

Every student may choose one of the following topics. For each of these, we additionally provide a recent top-tier publication that you should use as a starting point for your own research. For the seminar and your final report, you should not merely summarize that paper but try to go beyond it and arrive at your own conclusions.

Moreover, all of these papers come with open-source implementations. Play around with these and include the lessons learned in your report; a minimal sketch of a classic adversarial-example attack is shown after the topic list below.

  • Authorship Attribution

    Effective Writing Style Transfer via Combinatorial Paraphrasing, PoPETS 2020

  • Adversarial Preprocessing

    Understanding and Preventing Image-Scaling Attacks in Machine Learning, USENIX Security 2020

  • Deep Fake Detection

    Leveraging Frequency Analysis for Deep Fake Image Recognition, ICML 2020

  • Adversarial Examples in Problem Space

    Intriguing Properties of Adversarial ML Attacks in the Problem Space, IEEE S&P 2020

  • Attacks against Speech Recognition

    Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems, IEEE S&P 2021

  • Model Inference

    Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference, USENIX Security 2020

  • Backdoors and Trapdoors

    Using Honeypots to Catch Adversarial Attacks on Neural Networks, CCS 2020

  • Adversarial Training and Robustness

    Fast is better than free: Revisiting adversarial training, ICLR 2020
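To illustrate what an adversarial example is in the simplest setting, the following is a minimal sketch of an untargeted attack using the fast gradient sign method (FGSM), the building block revisited in "Fast is better than free". It is written in PyTorch; the model and input below are hypothetical stand-ins, and the papers' own open-source implementations should be used for the actual experiments.

    # Minimal FGSM sketch (untargeted), assuming a PyTorch image classifier.
    # The model and input are toy placeholders, not taken from any of the papers.
    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps=0.03):
        """Generate an adversarial example with the fast gradient sign method."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x), y)
        loss.backward()
        # Step in the direction that maximizes the loss, then clip to valid pixels.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Hypothetical stand-in model: a linear classifier over 3x32x32 images.
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        x = torch.rand(1, 3, 32, 32)   # random "image" in [0, 1]
        y = torch.tensor([3])          # arbitrary true label
        x_adv = fgsm(model, x, y)
        print((x_adv - x).abs().max())  # perturbation stays within eps

The single gradient-sign step keeps the perturbation inside an L-infinity ball of radius eps, which is exactly the threat model assumed by FGSM-based adversarial training.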

Colloquium

The schedule of the final colloquium can be found here.