London AI4Code: "Security Implications of Large Language Model Code Assistants" with Brendan Dolan-Gavitt
Advances in deep learning have led to the emergence of Large Language Models (LLMs) such as OpenAI Codex, which powers GitHub Copilot. LLMs have been fine-tuned and packaged so that programmers can use them in an Integrated Development Environment (IDE) to write code. An emerging line of work assesses the quality of code written with the help of these LLMs; security studies warn that because LLMs have no fundamental understanding of the code they produce, they are more likely to make mistakes that may be exploitable. In this meetup, Brendan Dolan-Gavitt will present a user study (N=58) conducted to assess the security of code written by student programmers when guided by LLMs.
Paper: https://arxiv.org/abs/2208.09727
Brendan Dolan-Gavitt is an Assistant Professor in the Computer Science and Engineering Department at NYU Tandon. His research interests span many areas of cybersecurity, including program analysis, virtualization security, memory forensics, and embedded and cyber-physical systems. His research focuses on techniques that ease or automate the understanding of large, real-world software systems in order to build novel defenses against attacks, typically by subjecting those systems to static and dynamic analyses that reveal hidden and undocumented assumptions about their design and behavior.
Personal website: https://moyix.net/