Maximilian Noppel

Email:
Telephone: +49 721 608-46190
Room: 163
Address: Karlsruhe Institute of Technology
Institute of Information Security and Dependability
Am Fasanengarten 5, Geb. 50.34
76131 Karlsruhe, Germany

🧔 About me

I am a doctoral researcher in the group of Christian Wressnegger. After my B.Sc. in Computer Science and three years as a software engineer and software architect for embedded multiprocessor devices, I decided to head back to university. In 2020, I received my M.Sc. in Computer Science from the Karlsruhe Institute of Technology (KIT). My studies concentrated on IT security, cryptography, anonymity and privacy, and algorithm engineering.

As a doctoral researcher, I now focus on the vulnerabilities of eXplainable Artificial Intelligence (XAI) in adversarial environments. XAI methods augment the predictions of an ML model with an additional output, the explanation. This additional output multiplies the number of possible adversarial goals: an adversary may fool the prediction, the explanation, or both simultaneously. With the term 'fooling', we capture diverse incentives, e.g., showing a target explanation or injecting a backdoor. I research these attacks with varying threat models, explanation methods, model architectures, and application domains. My research highlights the necessity of robustness guarantees for XAI, which I hope to be able to provide at some point.
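To make the two attack surfaces concrete, here is a minimal sketch in Python. It assumes PyTorch, a toy classifier, and a simple gradient-based saliency explainer; none of this is taken from my papers, it merely illustrates that the explanation is a second model output an adversary can target.

    # Minimal sketch (toy model, assumed gradient-based explainer):
    # an XAI method adds an explanation next to the prediction,
    # and with it a second attack surface.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    def predict_and_explain(x: torch.Tensor):
        x = x.clone().requires_grad_(True)
        logits = model(x)
        pred = logits.argmax(dim=-1)       # output 1: the prediction
        logits[0, pred.item()].backward()
        saliency = x.grad.abs()            # output 2: the explanation
        return pred, saliency

    pred, expl = predict_and_explain(torch.rand(1, 4))
    # An adversary may fool `pred`, `expl`, or both, e.g., keep the
    # prediction intact while forcing a chosen target explanation.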

📯 Latest News

06.Jun 2025

Two extended abstracts have been accepted at the 48th German Conference on Artificial Intelligence in Potsdam, Germany.

17.Apr 2025

My poster Fooling XAI with Explanation-Aware Backdoors has been accepted for the Helmholtz AI Conference 2025 in Karlsruhe, Germany.

14.Apr 2025

My talk The Threat of Explanation-Aware Attacks has been accepted for the Helmholtz AI Conference 2025 in Karlsruhe, Germany. In the talk, I will give an overview of explanation-aware attacks based on our papers and those of others.

29.Mar 2025

We are happy to announce that our paper Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion has been accepted to the 20th ACM ASIA Conference on Computer and Communications Security (ASIA CCS 2025) in Hanoi, Vietnam. In the paper, we investigate whether code models can be tricked into suggesting vulnerable source code. Interestingly, as our study shows, this malicious effect can be achieved without adding vulnerable code snippets to the training data. Our attacks therefore bypass static analysis and other defensive techniques that aim to ensure the security of coding assistants. We found that none of the evaluated defenses prevents our attack effectively, except for one: Fine-Pruning is effective but requires a trusted clean data set, which is the problem in the first place.

24.Mar 2025

Our submission POMELO: Black-Box Feature Attribution with Full-Input, In-Distribution Perturbations has been accepted to the 3rd World Conference on eXplainable Artificial Intelligence in Istanbul, Turkey. In the paper, we propose POMELO, an extension of the popular explanation method LIME. With POMELO, explanations can be based on full-input perturbations instead of segment-wise perturbations. Compared to related work, this captures correlations between distant, spatially spread features much better; the sketch below illustrates the idea.
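As a rough illustration of why full-input perturbations help (this is not the actual POMELO algorithm; I use plain Gaussian noise as a stand-in for in-distribution perturbations), consider fitting a linear surrogate to a black box whose output depends on two far-apart features:

    # Hedged sketch: perturb *all* features jointly, then fit a linear
    # surrogate on the black-box responses. Segment-wise toggling, as in
    # LIME, would probe features largely in isolation instead.
    import numpy as np

    def full_input_attribution(f, x, n_samples=1000, sigma=0.1):
        rng = np.random.default_rng(0)
        noise = rng.normal(0.0, sigma, size=(n_samples, x.size))
        X = x.reshape(1, -1) + noise          # perturb the whole input
        y = np.array([f(xi) for xi in X])     # black-box queries
        w, *_ = np.linalg.lstsq(noise, y - f(x), rcond=None)
        return w                              # per-feature attributions

    f = lambda v: v[0] * v[-1]  # toy black box: two distant features interact
    print(full_input_attribution(f, np.ones(16)))  # mass on features 0 and 15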

17.Mar 2025

Our paper Composite Explanation-Aware Attacks has been accepted to the 8th Deep Learning Security and Privacy Workshop in San Francisco, CA, USA. In the paper, we investigate how explanation-aware inference-time attacks can disguise prediction-only (vanilla) backdooring attacks. With this, we take the first steps towards decoupling the attack against the explanation from the attack against the prediction.
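The general recipe, as a hedged sketch (the exact method differs in the paper, and `explain` is an assumed input-differentiable explanation function): optimize a small perturbation of the triggered input so that its explanation matches a benign target while the malicious prediction stays in place.

    # Hedged sketch, not the exact method from the paper.
    import torch

    def disguise(model, explain, x_trigger, expl_benign, steps=200, lr=1e-2):
        delta = torch.zeros_like(x_trigger, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            expl = explain(model, x_trigger + delta)   # explanation of triggered input
            loss = ((expl - expl_benign) ** 2).mean()  # pull towards benign target
            loss.backward()
            opt.step()
        x_disguised = (x_trigger + delta).detach()
        # The vanilla backdoor should still fire; in practice, one verifies
        # that model(x_disguised) keeps the attacker's target prediction.
        return x_disguised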

15.Oct 2024

We have uploaded our preprint Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion to arXiv. In the paper, we investigate whether code models can be tricked into suggesting vulnerable source code. Interestingly, as our study shows, this malicious effect can be achieved without adding vulnerable code snippets to the training data. Our attacks therefore bypass static analysis and other defensive techniques that aim to ensure the security of coding assistants. We found that none of the evaluated defenses prevents our attack effectively, except for one: Fine-Pruning is effective but requires a trusted clean data set, which is the problem in the first place.

24.Sep 2024

I just presented our extended abstract A Brief Systematization of Explanation-Aware Attacks at the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. In the paper, we summarize our S&P'24 Systematization-of-Knowledge paper on three pages, covering three important dimensions: the capabilities of the adversary, the scopes of the attack, and the attack types. The paper serves as a basic introduction to the problem of explanation-aware attacks.

27.Jun 2024

Our extended abstract A Brief Systematization of Explanation-Aware Attacks has been accepted to the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. I'm looking forward to interesting discussions at the conference.

21.May 2024

Today, I will present our paper SoK: Explainable Machine Learning in Adversarial Environments at the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco, CA, USA. In the paper, Chris and I systematize the field of explanation-aware attacks: we discuss the relevant threat models, scopes of attacks, and attack types, present a hierarchy of explanation-aware robustness notions, and examine various defensive techniques from the viewpoint of explanation-aware attacks. I am looking forward to the questions and discussions with the community.

26.Mar 2024

I just gave the talk The Threat of Explanation-Aware Attacks: The Example of Explanation-Aware Backdoors in the XAI seminar of the Ludwig Maximilian University of Munich and the University of Bremen. In the talk, I summarized the lessons learned from my two papers on explanation-aware attacks. Thanks for the invitation and for having me.

25.Nov 2023

For the next few days, I will visit the ACM Conference on Computer and Communications Security (CCS) in Copenhagen, Denmark, where I will present my poster Fooling XAI with Explanation-Aware Backdoors. I'm looking forward to exciting discussions with other researchers in the community.

09.Oct 2023

I'll be in Berlin for a research stay at TU Berlin until November 25th 2023. I am looking forward to meeting exciting people in person.

27.Sep 2023

On September 27th, I presented our extended abstract Explanation-Aware Backdoors in a Nutshell at the 46th German Conference on Artificial Intelligence (KI) in Berlin, Germany. Thanks to everybody for the interesting discussions on the security and the future of explainable machine learning.

18.Sep 2023

We published the camera-ready version of our paper Poster: Fooling XAI with Explanation-Aware Backdoors, which has been accepted for the 30th ACM Conference on Computer and Communications Security (CCS), November 26-30, 2023, in Copenhagen, Denmark. I'm very happy to present the poster there and have interesting discussions with you.

18.Aug 2023

We just published the camera-ready version of our paper SoK: Explainable Machine Learning in Adversarial Environments, which has been accepted for the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco. Have a nice read.

11.Aug 2023

Our paper SoK: Explainable Machine Learning in Adversarial Environments has been accepted for the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco. I am happy to be in San Francisco next year and to meet all of you there.

01.Aug 2023

I'm happy to announce that I will serve as artifact co-chair of the Privacy Enhancing Technologies Symposium (PETS) in 2024 and 2025, together with Pasin Manurangsi. I am looking forward to your artifact submissions. Also, we will change the workflow slightly starting this year. Please find details on the PETS Artifacts Review page. If you have any comments on the new process, feel free to write an email.

26.Jun 2023

Our short abstract Explanation-Aware Backdoors in a Nutshell has been accepted for the 46th German Conference on Artificial Intelligence in September 2023 in Berlin, Germany. I am happy to meet the (not only) German research community on machine learning there.

10.Nov 2022

Our paper Disguising Attacks with Explanation-Aware Backdoors has been accepted for the 44th IEEE Symposium on Security and Privacy in May 2023 in San Francisco.

👨‍💻 Publications

2025

Makrut Attacks Against Black-Box Explanations.
Achyut Hegde, Maximilian Noppel and Christian Wressnegger.
Proc. of 48th German Conference on Artificial Intelligence, September 2025.

Exploiting Contexts of LLM-based Code-Completion.
Maximilian Noppel*, Karl Rubel* and Christian Wressnegger.
Proc. of 48th German Conference on Artificial Intelligence, September 2025.

Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion.
Karl Rubel*, Maximilian Noppel* and Christian Wressnegger.
Proc. of 20th ACM Asia Conference on Computer and Communications Security (ASIA CCS), August 2025.

POMELO: Black-Box Feature Attribution with Full-Input, In-Distribution Perturbations.
Luan Ademi*, Maximilian Noppel* and Christian Wressnegger.
Proc. of 3rd World Conference on eXplainable Artificial Intelligence (XAI), July 2025.

Composite Explanation-Aware Attacks.
Maximilian Noppel and Christian Wressnegger.
Proc. of 8th IEEE Deep Learning Security and Privacy Workshop (DLSP), May 2025.

2024

Model-Manipulation Attacks Against Black-Box Explanations.
Achyut Hegde, Maximilian Noppel and Christian Wressnegger.
Proc. of 40th Annual Computer Security Applications Conference (ACSAC), December 2024.
Best Paper Award Runner-Up

A Brief Systematization of Explanation-Aware Attacks.
Maximilian Noppel and Christian Wressnegger.
Proc. of 47th German Conference on Artificial Intelligence, September 2024.

SoK: Explainable Machine Learning in Adversarial Environments.
Maximilian Noppel and Christian Wressnegger.
Proc. of 45th IEEE Symposium on Security and Privacy (S&P), May 2024.

2023

Poster: Fooling XAI with Explanation-Aware Backdoors.
Maximilian Noppel and Christian Wressnegger.
Proc. of 30th ACM Conference on Computer and Communications Security (CCS), November 2023.

Explanation-Aware Backdoors in a Nutshell.
Maximilian Noppel and Christian Wressnegger.
Proc. of 46th German Conference on Artificial Intelligence, September 2023.

Disguising Attacks with Explanation-Aware Backdoors.
Maximilian Noppel, Lukas Peter and Christian Wressnegger.
Proc. of 44th IEEE Symposium on Security and Privacy (S&P), May 2023.

2022

Backdooring Explainable Machine Learning.
Maximilian Noppel, Lukas Peter and Christian Wressnegger.
Technical report, arXiv:2204.09498, April 2022.

2021

LaserShark: Establishing Fast, Bidirectional Communication into Air-Gapped Systems.
Niclas Kühnapfel, Stefan Preußler, Maximilian Noppel, Thomas Schneider, Konrad Rieck and Christian Wressnegger.
Proc. of 37th Annual Computer Security Applications Conference (ACSAC), December 2021.

Plausible Deniability for Anonymous Communication.
Christiane Kuhn*, Maximilian Noppel*, Christian Wressnegger and Thorsten Strufe.
Proc. of 20th Workshop on Privacy in the Electronic Society (WPES), November 2021.

2019

GI Elections with POLYAS: a Road to End-to-End Verifiable Elections.
Bernhard Beckert, Achim Brelle, Rüdiger Grimm, Nicolas Huber, Michael Kirsten, Ralf Küsters, Jörn Müller-Quade, Maximilian Noppel, Kai Reinhard, Jonas Schwab, Rebecca Schwerdt, Tomasz Truderung, Melanie Volkamer and Cornelia Winter.
Proc. of 4th International Joint Conference on Electronic Voting (E-Vote-ID), October 2019.

👫 Service

Chairs

  • Artifact Co-Chair: Privacy Enhancing Technologies Symposium (PETS), 2024 and 2025

Committee Memberships

Convention of the Scientific Staff and Council for Division II

As a member of the Convention of the Scientific Staff (German: "Mitarbeiterkonvent") and of the Council for Division II (German: "Bereichsrat für Bereich II"), I am happy to receive your emails regarding any suggestions for the future development of KIT.

👨‍🏫 Teaching

Selected Courses

  • Lecture: Machine Learning for Security
  • Lecture: Security of Machine Learning
  • Lecture: IT-Security
  • Seminar: Explainable Machine Learning
  • Seminar: Hot Topics in Explainable Machine Learning
  • Seminar: Hot Topics in Security of Machine Learning
  • Seminar: Vulnerability Discovery
  • Seminar: Public Key Cryptography
  • Practical Course: Application Security
  • Practical Research Seminar: Explainable Machine Learning
  • Workshop: Business Planning in Cybersecurity for Founders

Selected Theses

  • Master's: Vulnerability Discovery in Solidity Code by C. Michelbach (2021)
  • Master's: Backdooring Authorship Attribution by S. Strang (2022)
  • Master's: Context-Aware Backdooring Attacks on Code Generation Models by K. Rubel (2023)
  • Bachelor's: Improving Model Explanations with In-Distribution Sampling by L. Ademi (2025)

🎤 Talks and Non-Academic Stuff

Talk: The Threat of Explanation-Aware Attacks.
Helmholtz AI Conference (HAICON), Karlsruhe, Germany, June 2025.

Talk: The Threat of Explanation-Aware Attacks.
Seminar @ LMU & University Bremen, Munich, Germany, March 2024.

Talk: The Threat of Explanation-Aware Attacks.
Seminar @ TU Berlin, Berlin, Germany, October 2023.

Poster: Explanation-Aware Backdoors: Evading Explanation-Based Detection Methods for Backdoors. (in German)
KASTEL StartupSecurity Community Congress - Poster Session, Karlsruhe, Germany, May 2023.

🤿 Sparetime

In my spare time, I founded the hackerspace vspace.one e.V. in 2016, as well as several other clubs, e.g., one to promote local musicians. I love open-source software and open-hardware projects in general. This includes little Arduino projects but also my homebrew relay-CPU project. In addition, I work on mechanical projects using CNC mills or 3D printers, and I organize events like code-golfing sessions, lightning talks, hackathons, hacker-jeopardy parties, and cryptoparties. I am also an active ham radio operator with the call sign DC0MX; you can find me in the university ham radio group DF0UK. If you are interested in sports, you can find me as a trainer in the underwater rugby team of SSC Karlsruhe as well as in the KIT university team.