We provide sample evaluation scripts for the following datasets:
- COCO FID
- MJHQ-30k FID
- ImageNet Reconstruction
- GenEval
- DPG Bench
- CommonsenseT2I
- WISE
For COCO, MJHQ, and ImageNet Reconstruction, we provide sampling scripts to generate images. Each script takes `start_idx` and `end_idx` arguments that specify the range of the dataset to sample, so users can parallelize sampling across multiple GPUs. After sampling, run the corresponding eval script on a single GPU to compute the metrics.
For GenEval, DPG Bench, CommonsenseT2I, and WISE, we only provide the sampling scripts; use the eval scripts from the corresponding repos to compute the metrics.
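The `start_idx`/`end_idx` sharding can be driven by a small launcher loop. This is only a sketch: `TOTAL=30000` and `NUM_GPUS=8` are placeholder values, and it assumes `end_idx` is exclusive — verify the boundary convention against the scripts before relying on it.

```shell
# Sketch: split a dataset of TOTAL items into one contiguous
# [start_idx, end_idx) shard per GPU. TOTAL and NUM_GPUS are examples;
# substitute your dataset size and GPU count.
TOTAL=30000
NUM_GPUS=8
PER_GPU=$(( (TOTAL + NUM_GPUS - 1) / NUM_GPUS ))   # ceiling division
for GPU in $(seq 0 $(( NUM_GPUS - 1 ))); do
  START=$(( GPU * PER_GPU ))
  END=$(( START + PER_GPU ))
  if [ "$END" -gt "$TOTAL" ]; then END=$TOTAL; fi
  # Dry run: print each shard's command. Drop the `echo` (and append `&`
  # plus a final `wait`) to actually launch the shards in parallel.
  echo "CUDA_VISIBLE_DEVICES=$GPU python sample_coco.py" \
       "--start_idx $START --end_idx $END" \
       "--output_dir /path/to/output --checkpoint_path /path/to/checkpoint"
done
```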
## COCO FID

The dataset will be automatically downloaded from here into the `dataset_folder`.

```shell
python sample_coco.py \
--dataset_folder /path/to/cache_coco_dataset \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 3.0 \
--batch_size 1 \
--num_inference_steps 30
```

```shell
python eval_coco.py \
--dataset_folder /path/to/cache_coco_dataset \
--image_folder /path/to/output
```

## MJHQ-30K FID

The dataset needs to be manually downloaded from here:
```shell
cd /path/to/mjhq_dataset
git clone https://huggingface.co/datasets/playgroundai/MJHQ-30K
unzip mjhq30k_imgs.zip
```

```shell
python sample_mjhq.py \
--dataset_folder /path/to/mjhq_dataset/MJHQ-30K \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 3.0 \
--batch_size 1 \
--num_inference_steps 30
```

```shell
python eval_mjhq.py \
--dataset_folder /path/to/mjhq_dataset/MJHQ-30K \
--image_folder /path/to/output
```

## ImageNet Reconstruction

The dataset will be automatically downloaded from here into the `dataset_folder`.
```shell
python sample_reconstruction.py \
--dataset_folder /path/to/cache_imagenet_dataset \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 3.0 \
--image_guidance_scale 3.0 \
--batch_size 1 \
--num_inference_steps 30
```

```shell
python eval_reconstruction.py \
--dataset_folder /path/to/cache_imagenet_dataset \
--image_folder /path/to/output
```

## GenEval

The dataset will be automatically downloaded from here into the `dataset_file`.
```shell
python sample_geneval.py \
--dataset_file /path/to/geneval_dataset/evaluation_metadata.jsonl \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 7.5 \
--num_inference_steps 30 \
--seed 42
```

For evaluation, users can use the corresponding eval scripts here.

## DPG Bench
The dataset needs to be manually downloaded from here:

```shell
cd /path/to/dpg_bench_dataset
git clone https://github.com/TencentQQGYLab/ELLA.git
```

```shell
python sample_dpg.py \
--dataset_folder /path/to/dpg_bench_dataset/ELLA/dpg_bench/prompts \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 7.5 \
--batch_size 1 \
--num_inference_steps 30 \
--seed 42
```

For evaluation, users can use the corresponding eval scripts here.

## CommonsenseT2I
The dataset will be automatically downloaded from here into the `dataset_folder`.

```shell
python sample_commonsenset2i.py \
--dataset_folder /path/to/cache_commonsense_t2i_dataset \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 7.5 \
--num_inference_steps 30 \
--seed 42
```

For evaluation, users can use the corresponding eval scripts here.

## WISE
The dataset needs to be manually downloaded from here:

```shell
cd /path/to/wise_dataset
git clone https://github.com/PKU-YuanGroup/WISE.git
```

```shell
python sample_wise.py \
--dataset_folder /path/to/wise_dataset/WISE/data \
--start_idx 0 \
--end_idx -1 \
--output_dir /path/to/output \
--checkpoint_path /path/to/checkpoint \
--guidance_scale 7.5 \
--num_inference_steps 30 \
--seed 42
```

For evaluation, users can use the corresponding eval scripts here.
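When sampling is split across shards, it can help to confirm that every shard finished before launching an eval script. A minimal sketch (`count_images` is a hypothetical helper, not part of this repo; the expected count of 30000 is only an example):

```shell
# Hypothetical helper: count generated images (png/jpg) under a directory.
count_images() {
  find "$1" -type f \( -name '*.png' -o -name '*.jpg' \) 2>/dev/null | wc -l
}

# Example usage before evaluation (path and count are placeholders):
#   [ "$(count_images /path/to/output)" -eq 30000 ] || echo "missing shards?" >&2
```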