LinyeLi60/M-EDL

Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning

This repository contains the official implementation of the paper:

Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning

Linye Li, Yufei Chen, Xiaodong Yue, Xujing Zhou, Qunjie Chen

The Fourteenth International Conference on Learning Representations (ICLR 2026)

Paper | OpenReview


Overview

This codebase provides a comprehensive implementation for evaluating and comparing different uncertainty measurements in Evidential Deep Learning (EDL). The implementation supports multiple EDL variants including EDL, I-EDL, R-EDL, and Re-EDL, and includes tools for uncertainty visualization and analysis.

Dependencies

The following dependencies were used during development. While specific versions may not be critical for method performance, we recommend using the versions listed below for reproducibility:

  • GPU: RTX A6000 (CUDA support required)
  • Python: 3.10.18
  • PyTorch: 1.13.1

Installation

Install the required packages:

pip install -r requirements.txt

Data Preparation

The required datasets (CIFAR-10, CIFAR-100, SVHN, GTSRB, Places365, Food101) will be automatically downloaded if your server has an Internet connection. No manual data preparation is required.

Pre-trained Models

Pre-trained models for EDL, I-EDL, R-EDL, and Re-EDL can be downloaded from the Google Drive link provided by the Re-EDL authors.

After downloading, unzip the models and place them in the ./saved_models/ directory.

Usage

Testing Pre-trained Models

To test pre-trained models, run:

python main.py --configid "2_cifar10/cifar10-{method-name}-test" --suffix test

Replace {method-name} with one of: edl, iedl, redl, or reedl.

Training from Scratch

To train models from scratch, use:

# General training command
python main.py --configid "2_cifar10/cifar10-{method-name}-train" --suffix test

# Specific examples
python main.py --configid "2_cifar10/cifar10-edl-train-exp-uce" --suffix test
python main.py --configid "3_cifar100/cifar100-edl-train-exp-uce" --suffix test

Uncertainty Visualization

The uncertainty_comparison_visualization.py script provides t-SNE-based visualizations to compare uncertainty distributions with class/OOD distributions.

Features

  • Side-by-side t-SNE plots: uncertainty values (left) vs class/OOD labels (right)
  • Support for multiple uncertainty types: max_prob, max_alpha, alpha0, differential_entropy, mutual_information, edl_mpu
  • Configurable visualization parameters (point size, transparency, figure size)
  • Performance optimization for large datasets
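The side-by-side layout can be sketched as follows. Function and argument names here are illustrative only, not the script's actual API; the script itself handles the uncertainty types and performance options listed above:

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_uncertainty_vs_labels(features, uncertainty, labels,
                               perplexity=30.0, out_path="tsne.png"):
    """Embed features with t-SNE, then color the same 2-D points two ways:
    by uncertainty value (left panel) and by class/OOD label (right panel)."""
    emb = TSNE(n_components=2, perplexity=perplexity,
               init="pca", random_state=0).fit_transform(features)
    fig, (ax_u, ax_l) = plt.subplots(1, 2, figsize=(10, 4))
    sc = ax_u.scatter(emb[:, 0], emb[:, 1], c=uncertainty, cmap="viridis", s=8)
    fig.colorbar(sc, ax=ax_u, label="uncertainty")
    ax_u.set_title("uncertainty")
    ax_l.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=8)
    ax_l.set_title("class / OOD label")
    fig.tight_layout()
    fig.savefig(out_path, dpi=150)
    return fig
```

Coloring one shared embedding twice (rather than running t-SNE twice) keeps the two panels directly comparable point-for-point.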

Usage

python uncertainty_comparison_visualization.py

Configuration

Edit the script to modify visualization parameters:

  • results_dir: Path to experimental results
  • config_id: Model configuration ID (e.g., "cifar10-edl-exp-uce-test")
  • id_dataset: In-distribution dataset name (e.g., "CIFAR10")
  • selected_classes: List of classes to visualize (e.g., [0, 1, 2])
  • ood_datasets: List of out-of-distribution datasets (e.g., ["SVHN", "CIFAR100"])
  • uncertainty_types: Types of uncertainty to visualize
  • perplexity, learning_rate, n_iter: t-SNE parameters
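As an illustration, the editable block at the top of the script might look like the following. The values shown are examples from this README; check the script for the exact variable names:

```python
# Illustrative configuration values; the actual variables live at the top of
# uncertainty_comparison_visualization.py.
CONFIG = {
    "results_dir": "./results",
    "config_id": "cifar10-edl-exp-uce-test",
    "id_dataset": "CIFAR10",
    "selected_classes": [0, 1, 2],
    "ood_datasets": ["SVHN", "CIFAR100"],
    "uncertainty_types": ["alpha0", "differential_entropy",
                          "mutual_information", "edl_mpu"],
    # t-SNE parameters
    "perplexity": 30.0,
    "learning_rate": 200.0,
    "n_iter": 1000,
}
```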

Available Uncertainty Types

  • alpha0: Sum of the Dirichlet concentration parameters (inversely related to the vacuity of evidence)
  • differential_entropy: Differential entropy
  • mutual_information: Mutual information
  • edl_mpu: The proposed margin-aware predictive uncertainty
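For reference, the first three measures have standard closed forms in terms of the Dirichlet concentration parameters alpha; the sketch below implements those textbook formulas (it is not code from this repository, and edl_mpu is intentionally omitted since it is defined in the paper):

```python
import torch

def dirichlet_uncertainties(alpha: torch.Tensor):
    """Closed-form uncertainty measures for Dirichlet(alpha).
    alpha: concentration parameters, shape (batch, num_classes)."""
    a0 = alpha.sum(dim=-1, keepdim=True)          # alpha0 (total evidence + K)
    p = alpha / a0                                # expected class probabilities
    a0_flat = a0.squeeze(-1)
    # Differential entropy of the Dirichlet:
    # H = ln B(alpha) + (alpha0 - K) psi(alpha0) - sum_k (alpha_k - 1) psi(alpha_k)
    log_B = torch.lgamma(alpha).sum(-1) - torch.lgamma(a0_flat)
    diff_ent = (log_B
                + (a0_flat - alpha.size(-1)) * torch.digamma(a0_flat)
                - ((alpha - 1) * torch.digamma(alpha)).sum(-1))
    # Mutual information = entropy of the mean - expected entropy:
    # MI = -sum_k p_k ln p_k - sum_k p_k (psi(alpha0 + 1) - psi(alpha_k + 1))
    ent_of_mean = -(p * p.log()).sum(-1)
    expected_ent = (p * (torch.digamma(a0 + 1) - torch.digamma(alpha + 1))).sum(-1)
    mutual_info = ent_of_mean - expected_ent
    return a0_flat, diff_ent, mutual_info
```

Sanity check: a vacuous Dirichlet (all alpha_k = 1) has high mutual information, while a sharply concentrated one (large uniform alpha) drives it toward zero.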

Citation

If you find this code useful in your research, please cite our paper:

@inproceedings{li2026stop,
  title={Stop Guessing: Choosing the Optimization-Consistent Uncertainty Measurement for Evidential Deep Learning},
  author={Linye Li and Yufei Chen and Xiaodong Yue and Xujing Zhou and Qunjie Chen},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=rGoJxYibgj}
}

Acknowledgments

This codebase is built upon the open-source implementation from the TPAMI 2025 paper "Revisiting Essential and Nonessential Settings of Evidential Deep Learning" by Mengyuan Chen, Junyu Gao, and Changsheng Xu from the Institute of Automation, Chinese Academy of Sciences. We thank the authors for making their code publicly available.
