@@ -9,7 +9,7 @@ Deliver open source, ready-to-use and extendable solution to this competition. T

## Results

- Our approach got `0.943` Average Precision and `0.954` Average Recall on stage 1 data.
+ Our approach got `0.943` Average Precision and `0.954` Average Recall on stage 1 data. You can check our experiments [here](https://app.neptune.ml/neptune-ml/Mapping-Challange).
Some examples (no cherry-picking, I promise):

<img src="https://gist.githubusercontent.com/jakubczakon/cac72983726a970690ba7c33708e100b/raw/0f88863b18904b23d4301611ddf2b532aff8de96/example_output.png"></img>
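
Average Precision and Average Recall above are COCO-style metrics. A minimal sketch of how such scores can be reproduced with `pycocotools`, assuming ground truth and predictions in MS-COCO JSON format (the file names below are placeholders, not files shipped with this repository):

```python
# Sketch: COCO-style Average Precision / Average Recall evaluation.
# "annotation.json" and "predictions.json" are placeholder names.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotation.json")              # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # model predictions

coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")  # mask-level IoU
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP / AR averaged over IoU thresholds
```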
@@ -120,38 +120,108 @@ Changing that to average probability over the object region improved results. Wh

## Installation
- 1. clone this repository: `git clone https://github.com/minerva-ml/open-solution-talking-data.git`
- 2. install [PyTorch](http://pytorch.org/) and `torchvision`
- 3. install requirements: `pip3 install -r requirements.txt`
+ 1. clone this repository: `git clone https://github.com/minerva-ml/open-solution-mapping-challenge.git`
+ 2. install requirements: `pip3 install -r requirements.txt`
+ 3. download the data from the competition site: [dataset files](https://www.crowdai.org/challenges/mapping-challenge/dataset_files)
4. register at [Neptune](https://neptune.ml/ 'machine learning lab') *(if you wish to use it)* and log in via:

```bash
$ neptune login
```

- 5. open [Neptune](https://neptune.ml/ 'machine learning lab') and create new project called: `Mapping Challenge` with project key: `MC`
- 6. download the data from the competition site
- 7. upload the data to neptune (if you want to run computation in the cloud) via:
+ open [Neptune](https://neptune.ml/ 'machine learning lab') and create a new project called: `Mapping Challenge` with project key: `MC`
+
+ upload the data to Neptune (if you want to run computation in the cloud) via:
```bash
$ neptune data upload YOUR/DATA/FOLDER
```

- 8. change paths in the `neptune.yaml`.
+ 5. prepare training data:

- ```yaml
- data_dir: /path/to/data
- meta_dir: /path/to/data
- masks_overlayed_dir: /path/to/masks_overlayed
- experiment_dir: /path/to/work/dir
- ```
+ set paths in `neptune.yaml`
+
+ ```yaml
+ data_dir: /path/to/data
+ meta_dir: /path/to/data
+ masks_overlayed_prefix: masks_overlayed
+ experiment_dir: /path/to/work/dir
+ ```
+
+ change the erosion/dilation setup in `neptune.yaml` if you want to. The suggested setup is:
+
+ ```yaml
+ border_width: 0
+ small_annotations_size: 14
+ erode_selem_size: 0
+ dilate_selem_size: 0
+ ```
+
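
For intuition: `erode_selem_size` and `dilate_selem_size` set the radius of the morphological structuring element used to shrink or grow the building masks, while `small_annotations_size` is presumably the size threshold below which annotations get special treatment. A rough scikit-image illustration of the erosion/dilation idea (a sketch only, not the repository's preprocessing code):

```python
# Illustrates what erode_selem_size / dilate_selem_size do to a binary mask.
# This is a sketch of the idea, not the repository's exact preprocessing.
import numpy as np
from skimage.morphology import binary_dilation, binary_erosion, disk

def adjust_mask(mask, erode_selem_size=0, dilate_selem_size=0):
    mask = mask.astype(bool)
    if erode_selem_size > 0:
        mask = binary_erosion(mask, disk(erode_selem_size))    # shrink objects
    if dilate_selem_size > 0:
        mask = binary_dilation(mask, disk(dilate_selem_size))  # grow objects
    return mask

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True
print(adjust_mask(mask, dilate_selem_size=2).sum() > mask.sum())  # True
```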
+
+ * local machine with neptune
+ ```bash
+ $ neptune login
+ $ neptune experiment run \
+ main.py -- prepare_metadata --train_data --valid_data --test_data
+ ```
+
+ * cloud via neptune
+
+ ```bash
+ $ neptune login
+ $ neptune experiment send --config neptune.yaml \
+ --worker gcp-large \
+ --environment pytorch-0.2.0-gpu-py3 \
+ main.py -- prepare_metadata --train_data --valid_data --test_data
+ ```
+
+ * local pure python
+
+ ```bash
+ $ python main.py prepare_metadata --train_data --valid_data --test_data
+ ```
+
+ 6. train model:
+
+ * local machine with neptune
+ ```bash
+ $ neptune login
+ $ neptune experiment run \
+ main.py -- train --pipeline_name unet_weighted
+ ```
+
+ * cloud via neptune
+
+ ```bash
+ $ neptune login
+ $ neptune experiment send --config neptune.yaml \
+ --worker gcp-large \
+ --environment pytorch-0.2.0-gpu-py3 \
+ main.py -- train --pipeline_name unet_weighted
+ ```
+
+ * local pure python
+
+ ```bash
+ $ python main.py train --pipeline_name unet_weighted
+ ```
+
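
The `unet_weighted` pipeline trains the U-Net with a pixel-weighted loss. A generic sketch of that idea, e.g. up-weighting pixels near object borders (the function and tensor names are mine, and the code assumes a recent PyTorch API rather than the pinned 0.2.0; it is not the repository's exact loss):

```python
# Sketch: pixel-weighted cross-entropy for segmentation, the general idea
# behind a "weighted" U-Net pipeline. Not the repository's exact loss.
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, target, weights):
    # logits: (N, C, H, W); target: (N, H, W) int64; weights: (N, H, W) float
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (per_pixel * weights).mean()

logits = torch.randn(2, 2, 64, 64)
target = torch.randint(0, 2, (2, 64, 64))
weights = torch.ones(2, 64, 64)  # e.g. larger values near building borders
print(weighted_cross_entropy(logits, target, weights))
```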
+ 7. evaluate model and predict on test data:
+ Change values in the configuration file `neptune.yaml`.
+ The suggested setup is:

- 9. run experiment:
+ ```yaml
+ tta_aggregation_method: gmean
+ loader_mode: resize
+ erode_selem_size: 0
+ dilate_selem_size: 2
+ ```
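
Here `tta_aggregation_method: gmean` means the probability maps produced under different test-time augmentations are combined with a geometric rather than arithmetic mean. A minimal numpy sketch of that aggregation (variable names are mine):

```python
# Sketch: aggregating test-time-augmentation (TTA) probability maps
# with a geometric mean, per tta_aggregation_method: gmean.
import numpy as np

def gmean_aggregate(probability_maps, eps=1e-8):
    # probability_maps: list of (H, W) arrays, one per augmentation,
    # each already mapped back to the original image orientation.
    stacked = np.stack(probability_maps, axis=0)
    return np.exp(np.log(stacked + eps).mean(axis=0))

maps = [np.random.rand(4, 4) for _ in range(3)]
print(gmean_aggregate(maps))
```

Compared to an arithmetic mean, the geometric mean pulls a pixel's score down whenever any single augmentation is unconfident, which tends to give cleaner mask boundaries.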

* local machine with neptune
```bash
$ neptune login
$ neptune experiment run \
- main.py -- train_evaluate_predict --pipeline_name unet --chunk_size 5000
+ main.py -- evaluate_predict --pipeline_name unet_tta --chunk_size 1000
```

* cloud via neptune
@@ -161,13 +231,13 @@ $ neptune data upload YOUR/DATA/FOLDER
$ neptune experiment send --config neptune.yaml \
--worker gcp-large \
--environment pytorch-0.2.0-gpu-py3 \
- main.py -- train_evaluate_predict --pipeline_name solution_1 --chunk_size 5000
+ main.py -- evaluate_predict --pipeline_name unet_tta --chunk_size 1000
```

* local pure python

```bash
- $ python main.py train_evaluate_predict --pipeline_name unet --chunk_size 5000
+ $ python main.py evaluate_predict --pipeline_name unet_tta --chunk_size 1000
```
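
The `--chunk_size` flag indicates that the test set is processed in fixed-size chunks of images so that memory stays bounded. A toy sketch of the pattern (`predict_fn` is a hypothetical stand-in for the pipeline's predict step):

```python
# Sketch: chunked prediction as implied by --chunk_size.
def predict_in_chunks(image_ids, predict_fn, chunk_size=1000):
    predictions = []
    for start in range(0, len(image_ids), chunk_size):
        chunk = image_ids[start:start + chunk_size]
        predictions.extend(predict_fn(chunk))  # run the pipeline on one chunk
    return predictions

# usage with a dummy predictor:
print(len(predict_in_chunks(list(range(2500)), lambda ids: [0] * len(ids))))
```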

## User support