Visual Cognitive System Lab, ViCoS - UL, FRI - Ljubljana
Domen Tabernik, Jon Muhovič, Danijel Skočaj
[Paper] [Pre-print paper] [arXiv] [BibTeX]
If using CeDiRNet, please cite our paper using the following BibTeX entry:
@article{TabernikPR2024,
title = {Dense center-direction regression for object counting and localization with point supervision},
journal = {Pattern Recognition},
volume = {153},
pages = {110540},
year = {2024},
issn = {0031-3203},
doi = {https://doi.org/10.1016/j.patcog.2024.110540},
url = {https://www.sciencedirect.com/science/article/pii/S0031320324002917},
author = {Domen Tabernik and Jon Muhovič and Danijel Skočaj}
}

Dependencies:
- Python >= 3.6
- PyTorch >= 1.9
- segmentation_models_pytorch
- opencv-python
- numpy, scipy, scikit_image, scikit_learn
We recommend using Conda and installing the dependencies as follows:
conda create -n=CeDiRNet python=3.6
conda activate CeDiRNet
# install correct pytorch version for CUDA, e.g., for CUDA 11.1:
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
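To verify that the environment was set up correctly, a quick import check along these lines can help (only a sketch; whether CUDA is reported as available depends on your local driver and the PyTorch build installed above):

# optional environment check: confirms that the main dependencies import and
# reports whether PyTorch can see a CUDA device
import cv2
import segmentation_models_pytorch as smp
import torch
import torchvision

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("opencv:", cv2.__version__, "| smp:", smp.__version__)
print("CUDA available:", torch.cuda.is_available())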
Inference on images from a folder:
python infer.py --input_folder /path/to/images --img_pattern "*.png" --output_folder out/ --config src/config/config_infer.json --model path/to/checkpoint.pth
# Usage: infer.py [-h] [--input_folder INPUT_FOLDER] [--img_pattern IMG_PATTERN]
# [--output_folder OUTPUT_FOLDER] [--config CONFIG]
# [--model MODEL] [--localization_model LOCALIZATION_MODEL]
#
# Process a folder of images with CeDiRNet.
#
# optional arguments:
# -h, --help show this help message and exit
# --input_folder INPUT_FOLDER
# path to folder with input images
# --img_pattern IMG_PATTERN
# pattern for input images
# --output_folder OUTPUT_FOLDER
# path to output folder
# --config CONFIG path to config file
# --model MODEL path to model checkpoint file
# --localization_model LOCALIZATION_MODEL
# (optional) path to localization model checkpoint file
#                         (will override one from model)
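If several image folders need to be processed, the command-line interface above can also be driven from a short script. The sketch below is only an illustration built on the flags documented above; the dataset root, output layout, and checkpoint path are placeholders:

# hypothetical helper that calls infer.py once per subfolder, using only the
# command-line flags documented above; paths are placeholders
import subprocess
from pathlib import Path

DATA_ROOT = Path("/path/to/datasets")   # placeholder: one subfolder per image set
MODEL = "path/to/checkpoint.pth"        # placeholder: trained CeDiRNet checkpoint

for folder in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    subprocess.run([
        "python", "infer.py",
        "--input_folder", str(folder),
        "--img_pattern", "*.png",
        "--output_folder", f"out/{folder.name}",
        "--config", "src/config/config_infer.json",
        "--model", MODEL,
    ], check=True)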
Training of the localization network from synthetic data:
export DATASET=synt-center-learn-weakly
export OUTPUT_DIR="../exp" # optional: output dir used by src/config/synthetic/train_center_learn_weakly.py (defaults to '../exp')
python train.py --config train_dataset.batch_size=64 \
train_dataset.hard_samples_size=32 \
num_gpus=1 \
display=True

The resulting localization-network checkpoint can then be passed as pretrained_center_model_path when training on a specific dataset (see below).

For training/evaluation on a dataset (e.g., PUCPR+) run:
export DATASET=PUCPRplus # defines which config to use (see src/config/__init__.py)
export CARPK_DIR="/path/to/CARPK_ROOT_DIR" # path to where 'PUCPR+_devkit' folder is located (root of the CARPK dataset)
export OUTPUT_DIR="../exp" # optional: output dir used by src/config/PUCPRplus/*.py (defaults to '../exp')
# training
python train.py --config num_gpus=1 \
train_dataset.batch_size=4 \
pretrained_center_model_path="path/to/localization_net/checkpoint.pth" \
display=True
# testing (with trained localization network)
python test.py --config train_settings.num_gpus=1 \
train_settings.train_dataset.batch_size=4 \
dataset.kwargs.type="test" display=False
# testing (with hand-crafted localization network)
python test.py --config train_settings.num_gpus=1 \
train_settings.train_dataset.batch_size=4 \
center_checkpoint_name="handcrafted_localization" \
center_checkpoint_path=None \
center_model.kwargs.use_learnable_nn=False \
dataset.kwargs.type="test" display=False

See the config files in src/config for more details on the configuration. Note that settings passed with --config on the command line override the values from the config files.
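To illustrate the override convention: the dotted keys on the command line correspond to nested entries of the configuration dictionary. The helper below is hypothetical (the actual parsing lives in the CeDiRNet code and may differ in details); it only sketches how a key such as train_dataset.batch_size=4 maps onto a nested dict:

# hypothetical sketch of the dotted-key override convention used by --config;
# the real implementation in CeDiRNet may differ (e.g., in value parsing)
from ast import literal_eval

def apply_override(config: dict, override: str) -> None:
    key, value = override.split("=", 1)
    try:
        value = literal_eval(value)    # turn "4", "True", "None" into Python values
    except (ValueError, SyntaxError):
        pass                           # keep plain strings such as "test" as-is
    node = config
    *parents, leaf = key.split(".")
    for p in parents:
        node = node.setdefault(p, {})
    node[leaf] = value

config = {"train_dataset": {"batch_size": 32}, "num_gpus": 8}
apply_override(config, "train_dataset.batch_size=4")
apply_override(config, "num_gpus=1")
print(config)   # {'train_dataset': {'batch_size': 4}, 'num_gpus': 1}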
By default, the config files assume 8 GPUs for dataset training with a batch size of 32 or 128 (depending on the dataset), and 16 GPUs for localization network training with a batch size of 768.
You may add a new dataset by providing a dataset class and config files:
- add a dataset class to src/datasets/NEW_DATASET.py and update src/datasets/__init__.py
- add train.py and test.py config files to src/config/NEW_DATASET and update src/config/__init__.py
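As a rough orientation, a new dataset class will typically need to load an image together with its point (center) annotations. The skeleton below is a hypothetical sketch only; the exact interface (constructor arguments, returned keys, transforms) should be copied from one of the existing classes in src/datasets:

# hypothetical skeleton for src/datasets/NEW_DATASET.py; mirror an existing
# dataset class in src/datasets for the exact sample format CeDiRNet expects
import glob
import os

import numpy as np
from skimage import io
from torch.utils.data import Dataset


class NewDataset(Dataset):
    def __init__(self, root_dir, type="train", transform=None):
        self.transform = transform
        self.image_list = sorted(glob.glob(os.path.join(root_dir, type, "*.png")))

    def __len__(self):
        return len(self.image_list)

    def __getitem__(self, index):
        image = io.imread(self.image_list[index])
        # placeholder: load the point annotations (object centers) for this image
        centers = np.zeros((0, 2), dtype=np.float32)
        sample = dict(image=image, centers=centers, im_name=self.image_list[index])
        return self.transform(sample) if self.transform is not None else sample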
Scripts for running experiments related to the paper are in ./scripts/EXPERIMENTS_*.sh. Please edit the following configurations before first use:
- edit input/output paths and the Conda environment in ./scripts/run_local.sh:
  - set ROOT_DIR to the CeDiRNet code folder
  - set CONDA_HOME to the conda home folder that contains etc/profile.d/conda.sh
- edit the SERVERS env var in ./scripts/EXPERIMENTS_*.sh for distributed training (e.g., SERVERS="serverA:0,1,2,3 serverB:2,3 serverC:1,0")
- edit the GPU_LIST env var in ./scripts/EXPERIMENTS_*.sh for parallel inference (e.g., GPU_LIST=("serverA:0" "serverA:1" "serverB:0" "serverC:0"))
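For reference, the SERVERS string lists one entry per machine in the form host:gpu,gpu,.... The snippet below only illustrates how such a string decomposes; it is not part of the CeDiRNet scripts:

# illustration only: how a SERVERS string such as the example above decomposes
servers = "serverA:0,1,2,3 serverB:2,3 serverC:1,0"

for entry in servers.split():
    host, gpus = entry.split(":")
    gpu_ids = [int(g) for g in gpus.split(",")]
    print(host, gpu_ids)
# serverA [0, 1, 2, 3]
# serverB [2, 3]
# serverC [1, 0]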
Note that changing the number of GPUs used may require updating the batch size and hard sample size in configuration files.
You will also need to manually download all datasets into the ./datasets subfolders:
- Sorghum Plant Centers 2016 Dataset
- CARPK and PUCPR+ dataset
- Acacia-6, Acacia-16 and Oilpalm datasets
  - run python ./datasets/tree_counting_dataset/create_patches.py [PATH_TO_TREE_DATASET] to create 512x512 patches
Run experiments:
# Run training of the localization network:
./EXPERIMENTS_TRAIN_LOCALIZATION.sh
# Run training and evaluation on all datasets:
./EXPERIMENTS_MAIN.sh

By default, the scripts use run_distributed.sh, which delegates work to different GPUs on different servers (for testing) or executes distributed training by running train.py on each server with the appropriate WORLD_SIZE and RANK_OFFSET env vars.
To run on a single server, use localhost as the server name in SERVERS and GPU_LIST, or alternatively replace run_distributed.sh with run_local.sh.
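The exact role of WORLD_SIZE and RANK_OFFSET is defined by the training code and the launcher scripts; the sketch below only illustrates the convention we assume here, where each server launches its local GPUs and offsets their indices so that global ranks are unique across servers:

# illustration of the assumed rank convention (not taken from the CeDiRNet code):
# every server runs train.py for its local GPUs, and RANK_OFFSET shifts the local
# index so that global ranks 0..WORLD_SIZE-1 are unique across all servers
import os

world_size = int(os.environ.get("WORLD_SIZE", "1"))
rank_offset = int(os.environ.get("RANK_OFFSET", "0"))
local_gpus = [0, 1]   # example: GPUs assigned to this server

for local_rank, gpu in enumerate(local_gpus):
    global_rank = rank_offset + local_rank
    print(f"GPU {gpu} -> global rank {global_rank} of {world_size}")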
