Commit 3f4f6e2

update README: new CLIP backbones
1 parent b55239b commit 3f4f6e2

File tree

1 file changed: +10 −1 lines changed

README.md

```diff
@@ -129,6 +129,15 @@ Select the cuda ID as an integer for encoding of images and texts; ID for evalua
 
 **Default:** 0 and 1
 
+`--backbone`
+Select the OpenAI ViT CLIP backbone. Available options are:
+- **b32**
+- **b16**
+- **l14**
+- **l14@336px**
+
+**Default:** `b32`
+
 
 ## 📈 Results
 Our evaluation demonstrates that the proposed method significantly outperforms baselines in the classname-free setup, minimizing artificial gains from the ensembling effect. Additionally, we show that these improvements transfer to the conventional evaluation setup, achieving competitive results with substantially fewer descriptions required, while offering better interpretability.
```
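The four shorthand values added for `--backbone` presumably correspond to OpenAI's published CLIP ViT checkpoints. A minimal sketch of how such a flag could be resolved to a model name; the `resolve_backbone` helper and the exact mapping are assumptions for illustration, not code from this repository:

```python
# Hypothetical mapping from the --backbone shorthands to OpenAI CLIP
# model names (assumed correspondence, not taken from the repo).
BACKBONES = {
    "b32": "ViT-B/32",
    "b16": "ViT-B/16",
    "l14": "ViT-L/14",
    "l14@336px": "ViT-L/14@336px",
}

def resolve_backbone(flag: str) -> str:
    """Translate a --backbone flag value into an OpenAI CLIP model name."""
    try:
        return BACKBONES[flag]
    except KeyError:
        raise ValueError(
            f"unknown backbone {flag!r}; choose from {sorted(BACKBONES)}"
        )
```

With the `clip` package installed, the resolved name could then be passed to `clip.load(...)` to instantiate the encoder.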
```diff
@@ -154,5 +163,5 @@ If you use this codebase or otherwise found our work valuable, please cite our p
 - [x] **[17.12.2024]** add valid arXiv link and bibtex.
 - [x] **[03.12.2024]** supported all datasets and tested with the env specified.
 - [x] **[27.11.2024]** set up the repo.
-- [ ] **[TBD]** Support ViT-L Backbone
+- [x] **[TBD]** Support ViT-L Backbone
 - [ ] **[TBD]** PyTorch Dataloader for Image Embeddings
```
