Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion

Abstract

While convenient, relying on LLM-powered code assistants in day-to-day work gives rise to severe attacks. For instance, the assistant might introduce subtle flaws and suggest vulnerable code to the user. These adversarial code-suggestions can be introduced via data poisoning and, thus, unknowingly by the model creators. We provide a generalized formulation of such attacks, spawning and extending related work in this domain. Our formulation is defined over two components: First, a trigger pattern occurring in the prompts of a specific user group, and, second, a learnable map in embedding space from the prompt to an adversarial bait. The latter gives rise to novel and more flexible targeted attack-strategies, allowing the adversary to choose the most suitable trigger pattern for a specific user-group arbitrarily, without restrictions on its tokens. Our directional-map attacks and prompt-indexing attacks increase the stealthiness decisively. We extensively evaluate the effectiveness of these attacks and carefully investigate defensive mechanisms to explore the limits of generalized adversarial code-suggestions. We find that most defenses offer little protection only

For further details please consult the conference publication or have a look at the short summary.

Team

Proof-of-Concept Implementations

For the sake of reproducibility and to foster future research, we make the implementations of our generalized adversarial code suggestions available at:
https://github.com/intellisec/adv-code

Publication

A detailed description of our work was presented at the 20th ACM Asia Conference on Computer and Communications Security (ASIA CCS) in August 2025. Moreover, a summary of our work has been published at the Proc. of the 48th German Conference on Artificial Intelligence and presented in September 2025. If you would like to cite our work, please use the reference as provided below:

@InProceedings{Rubel2025Generalized,
author    = {Karl Rubel and Maximilian Noppel and Christian Wressnegger},
booktitle = {Proc. of the 20th {ACM} Asia Conference on Computer and Communications Security ({ASIA CCS})},
title     = {Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion},
year      = 2025,
month     = aug,
day       = {25.-29.},
}
@InProceedings{Noppel2025Exploiting,
author    = {Maximilian Noppel and Karl Rubel and Christian Wressnegger},
booktitle = {Proc. of the 48th German Conference on Artificial Intelligence}},
title     = {Exploiting Contexts of LLM-based Code-Completion},
year      = {2025},
month     = sep,
day       = {16.-19.},
}
A preprint of the paper is available here.