# Datasets
1. Download the datasets using the following links: [COCO](https://www.kaggle.com/datasets/shtvkumar/karpathy-splits), [Flickr30K](https://www.kaggle.com/datasets/shtvkumar/karpathy-splits), [FlickrStyle10k](https://zhegan27.github.io/Papers/FlickrStyle_v0.9.zip).
2. Parse the data into the correct format using our script `parse_karpathy.py`; just make sure to edit the json paths at the head of the script first.
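
For reference, here is a minimal sketch (not the repository's `parse_karpathy.py`) of how the downloaded Karpathy-split json can be read and grouped by split. The path below is a placeholder; point it at `dataset_coco.json` or `dataset_flickr30k.json` from the download above.

```python
import json

# Placeholder path -- point this at the Karpathy-split json downloaded above,
# e.g. dataset_coco.json or dataset_flickr30k.json.
KARPATHY_JSON = "data/dataset_coco.json"

with open(KARPATHY_JSON) as f:
    karpathy = json.load(f)

# Each image entry carries its split ("train"/"val"/"test"/"restval")
# and a list of human-written captions under "sentences".
captions_by_split = {}
for img in karpathy["images"]:
    split = img["split"]
    for sent in img["sentences"]:
        captions_by_split.setdefault(split, []).append(sent["raw"])

print({split: len(caps) for split, caps in captions_by_split.items()})
```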
# Training
Make sure to edit the json or pkl paths at the head of the scripts.
1. Extract CLIP features using the following script:
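
The repository's extraction script is not reproduced here; as a rough sketch of the idea, the snippet below uses the OpenAI `clip` package to encode a few captions with a ViT-B/32 backbone and save the embeddings to a pkl file. The backbone choice, example captions, and output path are assumptions, not the repo's defaults.

```python
import pickle

import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# ViT-B/32 is an assumption; swap in whichever CLIP backbone the training scripts expect.
model, preprocess = clip.load("ViT-B/32", device=device)

captions = [
    "a man riding a wave on top of a surfboard",
    "a group of people standing around a kitchen",
]

with torch.no_grad():
    tokens = clip.tokenize(captions).to(device)
    features = model.encode_text(tokens)  # (num_captions, 512) for ViT-B/32
    features = features / features.norm(dim=-1, keepdim=True)  # L2-normalize

# Placeholder output path; adjust to wherever the training scripts read their pkl from.
with open("data/clip_text_features.pkl", "wb") as f:
    pickle.dump({"captions": captions, "features": features.cpu()}, f)
```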