Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning
This repository contains the official implementation of the paper:
Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning
Linye Li, Yufei Chen, Xiaodong Yue, Xujing Zhou, Qunjie Chen
The Fourteenth International Conference on Learning Representations (ICLR 2026)
This codebase provides a comprehensive implementation for evaluating and comparing different uncertainty measurements in Evidential Deep Learning (EDL). The implementation supports multiple EDL variants including EDL, I-EDL, R-EDL, and Re-EDL, and includes tools for uncertainty visualization and analysis.
The following dependencies were used during development. While specific versions may not be critical for method performance, we recommend using the versions listed below for reproducibility:
- GPU: RTX A6000 (CUDA support required)
- Python: 3.10.18
- PyTorch: 1.13.1
Install the required packages:
```
pip install -r requirements.txt
```

The required datasets (CIFAR-10, CIFAR-100, SVHN, GTSRB, Places365, Food101) will be downloaded automatically if your server has an Internet connection. No manual data preparation is required.
Pre-trained models for EDL, I-EDL, R-EDL, and Re-EDL can be downloaded from the Google Drive link provided by the Re-EDL authors.
After downloading, unzip the models and place them in the ./saved_models/ directory.
To test pre-trained models, run:
```
python main.py --configid "2_cifar10/cifar10-{method-name}-test" --suffix test
```

Replace `{method-name}` with one of: `edl`, `iedl`, `redl`, or `reedl`.
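To evaluate all four variants in one go, the command pattern above can be scripted. This is a convenience sketch, not part of the repository; `build_eval_commands` and `run_all` are hypothetical helper names, and the config IDs simply follow the pattern described above.

```python
import subprocess

METHODS = ["edl", "iedl", "redl", "reedl"]

def build_eval_commands(config_dir="2_cifar10", dataset="cifar10"):
    """Return one main.py evaluation command per EDL variant,
    following the config-id pattern from the README."""
    return [
        ["python", "main.py",
         "--configid", f"{config_dir}/{dataset}-{method}-test",
         "--suffix", "test"]
        for method in METHODS
    ]

def run_all():
    """Execute each evaluation sequentially (assumes main.py is in the
    current directory and the pre-trained models are in ./saved_models/)."""
    for cmd in build_eval_commands():
        subprocess.run(cmd, check=True)
```

Calling `run_all()` then tests EDL, I-EDL, R-EDL, and Re-EDL back to back.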
To train models from scratch, use:
```
# General training command
python main.py --configid "2_cifar10/cifar10-{method-name}-train" --suffix test

# Specific examples
python main.py --configid "2_cifar10/cifar10-edl-train-exp-uce" --suffix test
python main.py --configid "3_cifar100/cifar100-edl-train-exp-uce" --suffix test
```

The `uncertainty_comparison_visualization.py` script provides t-SNE-based visualizations that compare uncertainty distributions with class/OOD distributions.
- Side-by-side t-SNE plots: uncertainty values (left) vs class/OOD labels (right)
- Support for multiple uncertainty types:
  `max_prob`, `max_alpha`, `alpha0`, `differential_entropy`, `mutual_information`, `edl_mpu`
- Configurable visualization parameters (point size, transparency, figure size)
- Performance optimization for large datasets
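The side-by-side layout can be reproduced in a few lines with scikit-learn and matplotlib. This is an illustrative sketch, not the repository script: `features`, `uncertainty`, and `is_ood` are placeholder inputs standing in for whatever the trained model actually produces.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for servers
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_uncertainty_vs_ood(features, uncertainty, is_ood, perplexity=30):
    """Embed features with t-SNE once, then draw two scatter plots that
    share the embedding: colored by uncertainty (left) and by ID/OOD
    label (right)."""
    emb = TSNE(n_components=2, perplexity=perplexity, init="pca",
               random_state=0).fit_transform(features)
    fig, (ax_u, ax_l) = plt.subplots(1, 2, figsize=(10, 4))
    sc = ax_u.scatter(emb[:, 0], emb[:, 1], c=uncertainty, s=8, alpha=0.7,
                      cmap="viridis")
    fig.colorbar(sc, ax=ax_u, label="uncertainty")
    ax_u.set_title("Uncertainty")
    ax_l.scatter(emb[:, 0], emb[:, 1], c=is_ood, s=8, alpha=0.7,
                 cmap="coolwarm")
    ax_l.set_title("ID (0) vs OOD (1)")
    return emb, fig
```

Using one shared embedding for both panels is what makes the comparison meaningful: any cluster that is dark on the left but ID-colored on the right is a region where the uncertainty measure disagrees with the OOD labels.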
```
python uncertainty_comparison_visualization.py
```

Edit the script to modify the visualization parameters:
- `results_dir`: Path to experimental results
- `config_id`: Model configuration ID (e.g., `"cifar10-edl-exp-uce-test"`)
- `id_dataset`: In-distribution dataset name (e.g., `"CIFAR10"`)
- `selected_classes`: List of classes to visualize (e.g., `[0, 1, 2]`)
- `ood_datasets`: List of out-of-distribution datasets (e.g., `["SVHN", "CIFAR100"]`)
- `uncertainty_types`: Types of uncertainty to visualize
- `perplexity`, `learning_rate`, `n_iter`: t-SNE parameters
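Concretely, a settings block edited along these lines would use the example values above. The variable names mirror the parameter list but should be checked against the script itself, and the `results_dir` value here is only a placeholder.

```python
# Illustrative settings block; verify names against
# uncertainty_comparison_visualization.py before use.
results_dir = "./results"                 # placeholder path (assumed)
config_id = "cifar10-edl-exp-uce-test"    # model configuration ID
id_dataset = "CIFAR10"                    # in-distribution dataset
selected_classes = [0, 1, 2]              # classes to visualize
ood_datasets = ["SVHN", "CIFAR100"]       # out-of-distribution datasets
uncertainty_types = ["alpha0", "differential_entropy",
                     "mutual_information", "edl_mpu"]
# t-SNE parameters (values here are common defaults, not prescriptions)
perplexity, learning_rate, n_iter = 30, 200.0, 1000
```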
- `alpha0`: Sum of alpha values (similar to Vacuity of Evidence)
- `differential_entropy`: Differential entropy
- `mutual_information`: Mutual information
- `edl_mpu`: The proposed margin-aware predictive uncertainty
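The first three measures have closed forms for a Dirichlet distribution `Dir(α)`. The sketch below uses the standard textbook formulas and is not code from this repository; `edl_mpu` is deliberately omitted, since it is the paper's proposed measure.

```python
import torch

def dirichlet_uncertainties(alpha: torch.Tensor) -> dict:
    """Standard uncertainty measures for Dirichlet parameters.

    alpha: (N, K) tensor of concentration parameters (all entries > 0).
    """
    alpha0 = alpha.sum(dim=-1, keepdim=True)   # total concentration
    probs = alpha / alpha0                     # expected class probabilities
    a0 = alpha0.squeeze(-1)
    K = alpha.shape[-1]

    # max_prob: confidence of the predicted class (higher = more certain)
    max_prob = probs.max(dim=-1).values

    # Differential entropy of Dir(alpha):
    #   ln B(alpha) + (alpha0 - K) psi(alpha0) - sum_k (alpha_k - 1) psi(alpha_k)
    log_beta = torch.lgamma(alpha).sum(-1) - torch.lgamma(a0)
    diff_ent = (log_beta + (a0 - K) * torch.digamma(a0)
                - ((alpha - 1) * torch.digamma(alpha)).sum(-1))

    # Mutual information between the label and the categorical parameters:
    #   -sum_k p_k (ln p_k - psi(alpha_k + 1) + psi(alpha0 + 1))
    mi = -(probs * (probs.log()
                    - torch.digamma(alpha + 1)
                    + torch.digamma(alpha0 + 1))).sum(-1)

    return {"alpha0": a0, "max_prob": max_prob,
            "differential_entropy": diff_ent, "mutual_information": mi}
```

As a sanity check, a flat Dirichlet (`alpha = [1, 1]`) has differential entropy 0 and maximal spread, while concentrated parameters drive the differential entropy strongly negative.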
If you find this code useful in your research, please cite our paper:
```
@inproceedings{
li2026stop,
title={Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning},
author={Linye Li and Yufei Chen and Xiaodong Yue and Xujing Zhou and Qunjie Chen},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=rGoJxYibgj}
}
```

This codebase is built upon the open-source implementation from the TPAMI 2025 paper "Revisiting Essential and Nonessential Settings of Evidential Deep Learning" by Mengyuan Chen, Junyu Gao, and Changsheng Xu from the Institute of Automation, Chinese Academy of Sciences. We thank the authors for making their code publicly available.