Forward-only Diffusion Probabilistic Models (FoD)
Official PyTorch Implementation

Project Page | Paper

How to Run the Code?

Dependencies

  • OS: Ubuntu 20.04
  • NVIDIA CUDA: 12.2
  • Python: 3.11

Install

We advise you to first create a virtual environment:

python3 -m venv .env
source .env/bin/activate
pip install -U pip
pip install -r requirements.txt

Image Restoration

Here we provide an example for the image deraining task, but it can be adapted to any restoration problem by replacing the dataset.

Dataset Preparation

Download the training and testing datasets and organize them so that rainy images and their clean (no-rain) counterparts sit in separate directories, as follows:

#### for training dataset ####
datasets/rain/trainH/GT
datasets/rain/trainH/LQ

#### for testing dataset ####
datasets/rain/testH/GT
datasets/rain/testH/LQ

Then go into the image_restoration directory and modify the dataset paths in train_IR.py and sample_IR.py. You can also change the run-name and task-name to save models to different directories.
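
For reference, a minimal paired-dataset sketch matching the layout above could look like the following. This is an illustrative example, not the repository's own dataset class: the class name is hypothetical, it assumes LQ and GT files share the same file names, and cropping/augmentation is omitted.

import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedRestorationDataset(Dataset):
    # Illustrative LQ/GT loader for the directory layout above (hypothetical, not the repo's class).
    def __init__(self, root="datasets/rain/trainH"):
        self.lq_dir = os.path.join(root, "LQ")
        self.gt_dir = os.path.join(root, "GT")
        # Assumes every LQ image has a GT image with the same file name.
        self.names = sorted(os.listdir(self.lq_dir))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        lq = Image.open(os.path.join(self.lq_dir, name)).convert("RGB")
        gt = Image.open(os.path.join(self.gt_dir, name)).convert("RGB")
        return self.to_tensor(lq), self.to_tensor(gt)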

Train

The core algorithm of FoD is implemented in image_restoration/diffusion/fod_diffusion.py.

You can train the model for image restoration with the following bash commands:

cd image_restoration

# For single GPU:
torchrun --nnodes=1 --nproc_per_node=1 --master_port=34567 train_IR.py --global-batch-size 16

# Change the nproc_per_node and global-batch-size for multi-GPU training
torchrun --nnodes=1 --nproc_per_node=4 --master_port=34567 train_IR.py --global-batch-size 64

The models and training logs will be saved in results/{task-name}.

Evaluation

To evaluate our method, modify the benchmark path and model path, then run:

python sample_IR.py
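
If you also want quantitative scores on the restored images, a small PSNR/SSIM sketch along these lines can be used. It is our own example rather than the repository's evaluation code; the directory paths are placeholders, and it assumes restored outputs and ground-truth images share file names.

import os
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored_dir, gt_dir):
    # Average PSNR/SSIM over all images, matched by file name (assumption).
    psnrs, ssims = [], []
    for name in sorted(os.listdir(gt_dir)):
        gt = np.array(Image.open(os.path.join(gt_dir, name)).convert("RGB"))
        sr = np.array(Image.open(os.path.join(restored_dir, name)).convert("RGB"))
        psnrs.append(peak_signal_noise_ratio(gt, sr, data_range=255))
        ssims.append(structural_similarity(gt, sr, channel_axis=-1, data_range=255))
    print(f"PSNR: {np.mean(psnrs):.2f} dB, SSIM: {np.mean(ssims):.4f}")

evaluate("path/to/restored_images", "datasets/rain/testH/GT")  # placeholder paths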

Unconditional Generation

Train: Similar to the image restoration task, you can train the model on CIFAR-10 with the following bash commands:

cd image_generation

# For single GPU:
torchrun --nnodes=1 --nproc_per_node=1 --master_port=34567 train_cifar10.py --global-batch-size 128

# Change the nproc_per_node and global-batch-size for multi-GPU training
torchrun --nnodes=1 --nproc_per_node=4 --master_port=34567 train_cifar10.py --global-batch-size 128

Sampling: First set the pretrained model path, then sample images with sample_cifar10.py:

python sample_cifar10.py

Evaluation

We include a ddp_sample_cifar10.py script which samples a large number of images from a pretrained FoD model in parallel. Run:

torchrun --nnodes=1 --nproc_per_node=1 --master_port=45678 ddp_sample_cifar10.py

To calculate FID, Inception Score, etc., you need a reference batch cifar_train.npz of CIFAR-10, then run:

python scripts/evaluator.py cifar_train.npz samples/cifar10/U-Net-0500000-seed-0.npz
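
If you do not already have the reference batch, one way to build cifar_train.npz is to export the CIFAR-10 training split with torchvision, as sketched below. The output file name matches the command above, but the snippet itself is our own sketch; the arr_0 key follows the guided-diffusion convention, so check scripts/evaluator.py for the exact format it expects.

import numpy as np
from torchvision.datasets import CIFAR10

# Download the CIFAR-10 training split and keep it as a uint8 array of shape (50000, 32, 32, 3).
train_set = CIFAR10(root="./data", train=True, download=True)
images = train_set.data  # already a numpy HWC uint8 array

# Save as an .npz reference batch (key name is an assumption based on guided-diffusion).
np.savez("cifar_train.npz", arr_0=images)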

Results

FoD forward diffusion process (figure: FoD-process)

Image Restoration Results (figure: FoD-IR)


Acknowledgment: Our FoD is based on DiT and guided-diffusion. Thanks for their code!

Contact

If you have any questions, please contact: [email protected]

Citations

If our code helps your research or work, please consider citing our paper. The BibTeX reference is:

@article{luo2025forward,
  title={Forward-only Diffusion Probabilistic Models},
  author={Luo, Ziwei and Gustafsson, Fredrik K and Sj{\"o}lund, Jens and Sch{\"o}n, Thomas B},
  journal={arXiv preprint arXiv:2505.16733},
  year={2025}
}

--- Thanks for your interest! ---
