This is the code for DualAnomaly: A Dual Spatio-Temporal Cross-Attention Framework for Robust Video Anomaly Detection.
HSTforU: See HSTforU: Anomaly Detection in Aerial and Ground-based Videos with Hierarchical Spatio-Temporal Transformer for U-net.
ASTNet: See Attention-based Residual Autoencoder for Video Anomaly Detection.
The code runs under any environment with Python 3.7 or above. (It may run with lower versions, but we have not tested them.)
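To confirm your interpreter meets that requirement before installing, a quick stdlib-only check (nothing repo-specific):

```python
import sys

def check_python_version(minimum=(3, 7)):
    """Return True if the running interpreter meets the given minimum version."""
    return sys.version_info[:2] >= minimum

if not check_python_version():
    # Abort early rather than fail later with a cryptic syntax or import error.
    raise SystemExit(f"Python 3.7+ required, found {sys.version.split()[0]}")
```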
First, clone this repo:
git clone https://github.com/vt-le/DualAnomaly.git
cd DualAnomaly/
Then install the required packages:
pip install -r requirements.txt
We evaluate DualAnomaly on:
| Dataset | Link |
|---|---|
| UCSD Ped2 | |
| CUHK Avenue | |
| ShanghaiTech | |
To train DualAnomaly on a dataset, run:
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 train.py --cfg <config-file>
Please first download the pre-trained model:
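For instance, a single-GPU run might look like the following. This is an illustrative invocation only; the config path is taken from the Ped2 evaluation example below and may differ in your checkout.

```shell
# Hypothetical single-GPU training run on Ped2 (config path from the
# evaluation example; adjust to your own checkout and GPU count).
python -m torch.distributed.launch --nproc_per_node 1 --master_port 12345 \
    train.py --cfg config/scripts/ped2/ped2_pvt2_hst.yaml
```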
| Dataset | Pretrained Model |
|---|---|
| UCSD Ped2 | |
| CUHK Avenue | |
| ShanghaiTech | |
To evaluate a pretrained DualAnomaly on a dataset, run:
python test.py \
--cfg <path/to/config/file> \
--pretrained </path/to/pre-trained/model> \
[--batch-size <batch-size> --tag <job-tag>]
For example, to evaluate DualAnomaly on Ped2:
python test.py \
--cfg config/scripts/ped2/ped2_pvt2_hst.yaml \
--model-file output/DualAnomaly/ped2_pvt2_hst/ckpt_ped2.pth
- We use YAML for configuration.
- We provide a couple of preset configurations.
- Please refer to `config.py` for documentation on what each configuration option does.
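Results on these benchmarks are conventionally reported as frame-level ROC AUC over per-frame anomaly scores. As an illustration only (this is not the logic of the repo's `test.py`), a dependency-free sketch of that metric:

```python
def frame_level_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: per-frame anomaly scores (higher = more anomalous).
    labels: ground-truth labels, 1 for anomalous frames, 0 for normal.
    Returns the probability that a random anomalous frame outscores a
    random normal frame; tied scores receive half credit.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both anomalous (1) and normal (0) frames")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfect detector scores every anomalous frame above every normal one and reaches an AUC of 1.0; chance-level scoring gives 0.5.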
If you find our work useful, please consider citing:
@Article{le2025dualanomaly,
  author={Le, Viet-Tuan and Kim, Yong-Guk},
  title={DualAnomaly: A Dual Spatio-Temporal Cross-Attention Framework for Robust Video Anomaly Detection},
  year={2025},
}
For any questions, please file an issue or contact:
Viet-Tuan Le: [email protected]