Commit 50bbad6: Update README.md
Parent: d607db8

1 file changed: README.md (+88 -18)
````diff
@@ -9,7 +9,7 @@ Deliver open source, ready-to-use and extendable solution to this competition. T
 
 ## Results
 
-Our approach got `0.943` Average Precision and `0.954` Average Recall on stage 1 data.
+Our approach got `0.943` Average Precision and `0.954` Average Recall on stage 1 data. You can check our experiments [here](https://app.neptune.ml/neptune-ml/Mapping-Challange).
 Some examples (no cherry-picking I promise):
 
 <img src="https://gist.githubusercontent.com/jakubczakon/cac72983726a970690ba7c33708e100b/raw/0f88863b18904b23d4301611ddf2b532aff8de96/example_output.png"></img>
````
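The scores in this hunk are COCO-style instance-segmentation metrics (the challenge ships COCO-format annotations). As a hedged illustration of how such Average Precision / Average Recall numbers can be computed with `pycocotools`, with placeholder file names and not the repository's own evaluation code:

```python
# Hypothetical sketch: COCO-style AP/AR for segmentation predictions.
# File names are placeholders; this is not the repository's evaluation code.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotation.json")              # ground-truth annotations (COCO json)
coco_dt = coco_gt.loadRes("predictions.json")  # predicted masks in COCO result format
coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the Average Precision / Average Recall table
```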
````diff
@@ -120,38 +120,108 @@ Changing that to average probability over the object region improved results. Wh
 
 
 ## Installation
-1. clone this repository: `git clone https://github.com/minerva-ml/open-solution-talking-data.git`
-2. install [PyTorch](http://pytorch.org/) and `torchvision`
-3. install requirements: `pip3 install -r requirements.txt`
+1. clone this repository: `git clone https://github.com/minerva-ml/open-solution-mapping-challenge.git`
+2. install requirements: `pip3 install -r requirements.txt`
+3. download the data from the competition site [dataset files](https://www.crowdai.org/challenges/mapping-challenge/dataset_files)
 4. register to [Neptune](https://neptune.ml/ 'machine learning lab') *(if you wish to use it)* login via:
 
 ```bash
 $ neptune login
 ```
 
-5. open [Neptune](https://neptune.ml/ 'machine learning lab') and create new project called: `Mapping Challenge` with project key: `MC`
-6. download the data from the competition site
-7. upload the data to neptune (if you want to run computation in the cloud) via:
+open [Neptune](https://neptune.ml/ 'machine learning lab') and create a new project called `Mapping Challenge` with project key `MC`
+
+upload the data to Neptune (if you want to run computation in the cloud) via:
 ```bash
 $ neptune data upload YOUR/DATA/FOLDER
 ```
 
-8. change paths in the `neptune.yaml`.
+5. prepare training data:
 
-```yaml
-data_dir: /path/to/data
-meta_dir: /path/to/data
-masks_overlayed_dir: /path/to/masks_overlayed
-experiment_dir: /path/to/work/dir
-```
+set paths in `neptune.yaml`:
+
+```yaml
+data_dir: /path/to/data
+meta_dir: /path/to/data
+masks_overlayed_prefix: masks_overlayed
+experiment_dir: /path/to/work/dir
+```
+
+change the erosion/dilation setup in `neptune.yaml` if you want to.
+Suggested setup is:
+
+```yaml
+border_width: 0
+small_annotations_size: 14
+erode_selem_size: 0
+dilate_selem_size: 0
+```
````
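For context on the `*_selem_size` values just above: "selem" is short for structuring element, and the two sizes set how far the target masks are shrunk (eroded) or grown (dilated) before training. A minimal sketch of the idea, assuming `scikit-image` rather than the repository's own implementation:

```python
# Illustrative sketch of what `erode_selem_size` / `dilate_selem_size`
# control: morphological erosion/dilation of binary target masks with a
# disk-shaped structuring element ("selem"). Assumes scikit-image; this
# is not the repository's implementation.
import numpy as np
from skimage.morphology import binary_dilation, binary_erosion, disk

def adjust_mask(mask: np.ndarray,
                erode_selem_size: int = 0,
                dilate_selem_size: int = 0) -> np.ndarray:
    """Shrink and/or grow a binary mask, mirroring the config knobs above."""
    if erode_selem_size > 0:
        mask = binary_erosion(mask, disk(erode_selem_size))
    if dilate_selem_size > 0:
        mask = binary_dilation(mask, disk(dilate_selem_size))
    return mask
```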
````diff
+
+
+* local machine with neptune
+```bash
+$ neptune login
+$ neptune experiment run \
+main.py -- prepare_metadata --train_data --valid_data --test_data
+```
+
+* cloud via neptune
+
+```bash
+$ neptune login
+$ neptune experiment send --config neptune.yaml \
+--worker gcp-large \
+--environment pytorch-0.2.0-gpu-py3 \
+main.py -- prepare_metadata --train_data --valid_data --test_data
+```
+
+* local pure python
+
+```bash
+$ python main.py -- prepare_metadata --train_data --valid_data --test_data
+```
+
+6. train model:
+
+* local machine with neptune
+```bash
+$ neptune login
+$ neptune experiment run \
+main.py -- train --pipeline_name unet_weighted
+```
+
+* cloud via neptune
+
+```bash
+$ neptune login
+$ neptune experiment send --config neptune.yaml \
+--worker gcp-large \
+--environment pytorch-0.2.0-gpu-py3 \
+main.py -- train --pipeline_name unet_weighted
+```
+
+* local pure python
+
+```bash
+$ python main.py train --pipeline_name unet_weighted
+```
````
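The `unet_weighted` pipeline name suggests a U-Net trained with a per-pixel weighted loss. As an assumption inferred from the name and the classic U-Net recipe, not from the repository's code, the weight map might be built roughly like this:

```python
# A guess at what the "weighted" in `unet_weighted` refers to: per-pixel
# loss weights that emphasize pixels on and near building boundaries, in
# the spirit of the original U-Net paper. Purely illustrative.
import numpy as np
from scipy.ndimage import distance_transform_edt

def border_weights(mask: np.ndarray, w0: float = 10.0, sigma: float = 5.0) -> np.ndarray:
    """mask: binary (H, W) array. Weights are highest on and near the
    buildings and decay smoothly with distance from them."""
    dist_to_building = distance_transform_edt(mask == 0)
    return 1.0 + w0 * np.exp(-(dist_to_building ** 2) / (2.0 * sigma ** 2))
```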
````diff
+
+7. evaluate model and predict on test data:
+Change values in the configuration file `neptune.yaml`.
+Suggested setup is:
 
-9. run experiment:
+```yaml
+tta_aggregation_method: gmean
+loader_mode: resize
+erode_selem_size: 0
+dilate_selem_size: 2
+```
 
 * local machine with neptune
 ```bash
 $ neptune login
 $ neptune experiment run \
-main.py -- train_evaluate_predict --pipeline_name unet --chunk_size 5000
+main.py -- evaluate_predict --pipeline_name unet_tta --chunk_size 1000
 ```
 
 * cloud via neptune
````
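`unet_tta` pairs the U-Net with test-time augmentation, and `tta_aggregation_method: gmean` above indicates the per-augmentation probability maps are merged with a geometric mean. A minimal sketch of that aggregation step (an inference from the config name, not the repository's code):

```python
# Sketch of what `tta_aggregation_method: gmean` plausibly does: merge
# probability maps predicted for several test-time augmentations with a
# geometric mean. Inferred from the config name, not taken from the repo.
import numpy as np

def gmean_aggregate(tta_probs: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """tta_probs: (n_augmentations, H, W) probabilities in (0, 1)."""
    return np.exp(np.log(np.clip(tta_probs, eps, 1.0)).mean(axis=0))
```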
````diff
@@ -161,13 +231,13 @@ $ neptune data upload YOUR/DATA/FOLDER
 $ neptune experiment send --config neptune.yaml \
 --worker gcp-large \
 --environment pytorch-0.2.0-gpu-py3 \
-main.py -- train_evaluate_predict --pipeline_name solution_1 --chunk_size 5000
+main.py -- evaluate_predict --pipeline_name unet_tta --chunk_size 1000
 ```
 
 * local pure python
 
 ```bash
-$ python main.py train_evaluate_predict --pipeline_name unet --chunk_size 5000
+$ python main.py evaluate_predict --pipeline_name unet_tta --chunk_size 1000
 ```
 
 ## User support
````
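Finally, the `--chunk_size 1000` argument seen in both hunks limits how many test images are processed per pass. A hypothetical illustration of the pattern (`predict_in_chunks` and `predict_fn` are made-up names, not the repository's API):

```python
# Rough illustration of the `--chunk_size` flag: run prediction over the
# test set in fixed-size slices so it never has to fit in memory at once.
# `predict_fn` is a hypothetical stand-in for the real pipeline.
from typing import Callable, List, Sequence

def predict_in_chunks(image_paths: Sequence[str],
                      predict_fn: Callable[[Sequence[str]], List],
                      chunk_size: int = 1000) -> List:
    results: List = []
    for start in range(0, len(image_paths), chunk_size):
        results.extend(predict_fn(image_paths[start:start + chunk_size]))
    return results
```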
