Welcome to Dormant Neurons

Our research group studies large language model (LLM) security and security in machine learning (ML) in general. LLMs have seen a surge in popularity in recent years, being adopted for tasks such as chatbots, text summarization, and code generation. In other domains of ML, image, audio, and video generation have fostered creative work and facilitated an abundance of tasks. In both cases, the security of the models and the detection and prevention of misuse are of key importance, making academic research in this area highly meaningful.

The Dormant Neurons team, as last seen in June 2024

Research

LLM Security

Our research on LLM security covers several topics. We study prompt injection attacks on LLMs in different settings, as well as an LLM's ability to keep contextual information confidential. Prompt injection becomes increasingly relevant with dual-use and app-integrated LLMs, where potential sources on the web or in a local database could embed malicious instructions. At the same time, an LLM's ability to access local databases creates the risk that it retrieves sensitive information which must only be disclosed to authorized users.

On top of that, our research includes applications of LLMs to traditional security tasks, automating work traditionally done manually by security specialists. Examples are the LLM's potential to propose a secure version of vulnerable program code and to deobfuscate code samples by removing semantically irrelevant components. We also use agentic LLM systems to automate security-related tasks such as code analysis.

Security in ML

Besides LLMs, we analyze security-related ML topics in general. One topic is continual learning and out-of-distribution detection: in a world where datasets continually evolve, as in the case of malware, it is of high importance to be able to detect new types of malware without retraining the model. Furthermore, we research the efficacy of audio deepfake detectors, e.g., against maliciously intended audio files, which are increasingly abused for scamming purposes. Related to that, we also analyze the robustness of existing AI image detectors, another highly relevant topic given the increasing amount of deepfake visual content on the internet.

Latest News

Meet the Team

Lea Schönherr

Group Leader

David Pape

PhD Student

Jonathan Evertz

PhD Student

Sina Mavali

PhD Student

David Beste

PhD Student

Soumya Shaw

Student Assistant (HiWi)

Anupam Varshney

Student Assistant (HiWi)

Jishitha Kondaveti

Student Assistant (HiWi)

Yage Zhang

Student Assistant (HiWi)

Abdul Rafay Syed

Student Assistant (HiWi)

Srishti Gupta

Intern

Valentin Giraudeau

Intern

Publications