
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models

Reference implementation of the probabilistic evaluation framework proposed in the paper:

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
Yan Scholten, Stephan Günnemann, Leo Schwinn
International Conference on Learning Representations, ICLR 2025 (Oral)
[ Project page | PDF ]

Code Coming Soon

Training code and additional supplementary materials will be released soon. In the meantime, you can explore our demo notebook, which demonstrates that greedy evaluations can misleadingly suggest successful unlearning, while our probabilistic evaluations provide more accurate assessments of model capabilities.
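The gap between greedy and probabilistic evaluation can be illustrated with a minimal sketch. The toy distribution below is hypothetical (not from the paper or the notebook): a supposedly "unlearned" model still assigns 30% probability to the forgotten answer, but since that answer is never the argmax, greedy decoding alone would suggest the unlearning succeeded.

```python
import random

# Hypothetical next-token distribution of a supposedly "unlearned" model.
# The forgotten answer ("secret") is not the argmax, so greedy decoding
# never emits it -- yet it still carries 30% of the probability mass.
distribution = {"safe": 0.4, "secret": 0.3, "other": 0.3}

def greedy_sample(dist):
    """Deterministic decoding: always return the most likely token."""
    return max(dist, key=dist.get)

def monte_carlo_leak_rate(dist, n=10_000, seed=0):
    """Estimate how often sampling emits the forgotten answer."""
    rng = random.Random(seed)
    tokens, probs = zip(*dist.items())
    draws = rng.choices(tokens, weights=probs, k=n)
    return draws.count("secret") / n

print(greedy_sample(distribution))          # "safe": looks unlearned
print(monte_carlo_leak_rate(distribution))  # ~0.3: information remains
```

With a real model, the same comparison amounts to contrasting a single greedy generation against many sampled generations per prompt; the demo notebook shows this on actual unlearned LLMs.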

Cite

Please cite our paper if you use this code in your own work:

@inproceedings{scholten2024probabilistic,
    title={A Probabilistic Perspective on Unlearning and Alignment for Large Language Models},
    author={Yan Scholten and Stephan G{\"u}nnemann and Leo Schwinn},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=51WraMid8K}
}

Contact

For questions and feedback, please contact:

Yan Scholten, Technical University of Munich
Stephan Günnemann, Technical University of Munich
Leo Schwinn, Technical University of Munich

License

The code by Yan Scholten, Stephan Günnemann, and Leo Schwinn is licensed under the MIT license.
