Awesome Machine Unlearning (A Survey of Machine Unlearning)
A resource repository for machine unlearning in large language models
A one-stop repository for large language model (LLM) unlearning. It supports the TOFU and MUSE benchmarks and provides an easily extensible framework for new datasets, evaluations, methods, and other benchmarks.
[ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models"
[NeurIPS 2024] Large Language Model Unlearning via Embedding-Corrupted Prompts
"Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
Implementation of the probabilistic evaluation framework introduced in the paper: "A Probabilistic Perspective on Unlearning and Alignment for Large Language Models" accepted at ICLR 2025 (Oral).
A project on LLM unlearning and trustworthy AI.
The LLM Unlearning repository is an open-source project dedicated to unlearning in Large Language Models (LLMs). It addresses data-privacy and ethical-AI concerns by exploring and implementing unlearning techniques that allow models to forget unwanted or sensitive data, helping models comply with privacy regulations (a minimal sketch of one such technique follows this list).
A curated repository of resources on bias and machine unlearning in Large Language Models.
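For readers new to the topic, the sketch below illustrates the simplest family of unlearning techniques that several of the repositories above build on and compare against: gradient ascent on a "forget" set. This is a minimal, illustrative example assuming a Hugging Face causal LM; the model name, forget texts, and hyperparameters are placeholders and are not taken from any specific repository listed here.

```python
# Minimal sketch of gradient-ascent unlearning on a "forget" set.
# Assumptions (not from any repo above): a small Hugging Face causal LM,
# a user-supplied list of forget texts, and illustrative hyperparameters.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

forget_texts = ["Example sensitive passage to be forgotten."]  # hypothetical data
optimizer = AdamW(model.parameters(), lr=1e-5)

for epoch in range(1):
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        # Gradient ascent: maximize the language-modeling loss on the forget
        # set so the model becomes less likely to reproduce this content.
        loss = -outputs.loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In practice, the methods in the repositories above go well beyond this baseline (e.g., negative preference optimization, embedding-corrupted prompts) and pair the forget objective with a retain objective so general capabilities are preserved.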