ACMR

Original PyTorch implementation of "Learning Aligned Cross-Modal Representations for Generalized Zero-Shot Classification".

Requirements

The code was implemented using Python 3.6.0 and trained on one NVIDIA GeForce GTX TITAN X GPU.

The following packages are required (a quick environment check is sketched after the list):

torch==1.5.0
numpy==1.19.4
scipy==1.2.1
tqdm==4.54.1
scikit_learn==0.23.2
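
As a quick sanity check (not part of the original training pipeline), the snippet below prints the installed versions of the packages listed above and reports whether a CUDA device is visible; the module names used here are the standard import names for these packages.

```python
# Minimal environment check for the package versions listed above.
import torch
import numpy
import scipy
import tqdm
import sklearn

print("torch:", torch.__version__)
print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)
print("tqdm:", tqdm.__version__)
print("scikit-learn:", sklearn.__version__)

# Training was reported on a single NVIDIA GeForce GTX TITAN X,
# so a CUDA-capable device is expected.
print("CUDA available:", torch.cuda.is_available())
```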

Data

Download the data provided by CADA-VAE and put it in this repository. Next to the folder "model", there should be a folder "data".
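
The snippet below is a minimal sketch (not an official script) that checks this layout when run from the repository root; the exact contents of the "data" folder follow the CADA-VAE data release and are not verified here.

```python
# Minimal layout check, assuming it is run from the repository root.
import os

for folder in ("model", "data"):
    if not os.path.isdir(folder):
        raise FileNotFoundError(
            f"Expected a '{folder}' folder in the repository root"
        )

print("data contents:", os.listdir("data"))
```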

Citation

If this work is helpful to you, please cite our paper.

@inproceedings{FangZY2022ACMR,
  author    = {Fang, Zhiyu and 
               Zhu, Xiaobin and 
               Yang, Chun and 
               Han, Zheng and 
               Qin, Jingyan and 
               Yin, Xu-Cheng},
  title     = {Learning Aligned Cross-Modal Representation for Generalized Zero-Shot 
               Classification},
  booktitle = {36th AAAI Conference on Artificial Intelligence},
  year      = {2022},
}

Acknowledgement

We thank the CADA-VAE repository for providing helpful components used in our work.
