Publications

SubvertGPT: Supply Chain Attacks in LLM Coding Assistants

Submitted as part of the requirements for award of the MSc in Information Security at Royal Holloway, University of London

The rapid adoption of AI-powered coding assistants, such as OpenAI’s Codex and Meta’s Code Llama, has significantly transformed software development by enhancing productivity and automating routine tasks. However, this shift introduces new security concerns, particularly the risk of supply chain attacks targeting AI models. If an adversary can subtly manipulate a model’s training data, they may be able to coerce it into generating insecure code, thereby introducing vulnerabilities into software systems at scale. This research explores the feasibility of such attacks through a controlled experiment.

Paper: PDF
Source code: Pending