This repository contains the official code for the paper "Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs," which studies whether unlearning leaves detectable traces in the outputs of large language models (LLMs). The repository is actively maintained and includes documentation covering the data, installation, and model responses. If you find this work useful, please cite the paper.
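As a rough illustration of what "detecting unlearning traces from model outputs" can mean in practice, the sketch below trains a binary text classifier to separate generations from an original model and from an unlearned one. This is a minimal sketch under assumptions not taken from the paper: the TF-IDF plus logistic-regression detector, and the toy placeholder outputs, are illustrative stand-ins rather than the authors' method or data.

```python
# Minimal sketch: frame trace detection as binary classification over model outputs.
# ASSUMPTIONS (not from the paper): TF-IDF features + logistic regression,
# and toy in-memory strings standing in for real model generations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Placeholder outputs; in practice these would be generations collected from
# the original model (label 0) and its unlearned counterpart (label 1).
original_outputs = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
] * 20
unlearned_outputs = [
    "I'm sorry, I cannot provide information on that topic.",
    "I do not have knowledge about that subject.",
] * 20

texts = original_outputs + unlearned_outputs
labels = [0] * len(original_outputs) + [1] * len(unlearned_outputs)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

# Word- and bigram-level TF-IDF features feeding a logistic-regression detector.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, detector.predict(X_test)))
```

The point of the sketch is only the framing: if unlearning systematically shifts output style or content, even a simple classifier over generations can pick up the trace.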
Topics: unlearning, llm, detection, code-repository, research