
Commit e0b187a

Merge pull request #193 from Azure-Samples/pafarley-updates

fix term

2 parents 683c0c4 + 9115a0d

6 files changed: +13 additions, -13 deletions

python/ComputerVision/REST/python-analyze.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 
 In this quickstart, you'll analyze a remotely stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f21b) method, you can extract visual features based on image content.
 
-You can run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
 
@@ -30,7 +30,7 @@ To create and run the sample, do the following steps:
 
 ```python
 import requests
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 import json
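The quickstart this file documents builds up to calling the Analyze Image operation with `requests`. As a rough illustration of that flow (not code from this commit; the endpoint, key, and image URL below are placeholder assumptions), a minimal call against the v3.1 Analyze endpoint might look like:

```python
import requests

# Placeholder values -- substitute your own resource endpoint, key, and image URL.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
analyze_url = endpoint + "vision/v3.1/analyze"
image_url = "https://example.com/sample.jpg"  # any publicly reachable image

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"visualFeatures": "Categories,Description,Color"}
data = {"url": image_url}

# Send the image URL as JSON and print the returned analysis.
response = requests.post(analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
print(response.json())
```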

python/ComputerVision/REST/python-disk.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 
 In this quickstart, you'll analyze a locally stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f21b) method, you can extract visual features based on image content.
 
-You can run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
 
@@ -33,7 +33,7 @@ To create and run the sample, do the following steps:
 import os
 import sys
 import requests
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 from PIL import Image
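python-disk.md covers the same Analyze Image call but for a locally stored image, which is sent as the raw request body instead of a JSON URL. A minimal sketch of that variant (placeholder endpoint, key, and path; not taken from this commit):

```python
import requests

# Placeholder values -- substitute your own resource endpoint, key, and image path.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
analyze_url = endpoint + "vision/v3.1/analyze"
image_path = "./images/sample.jpg"

# Read the local image into memory and send it as a binary request body.
with open(image_path, "rb") as f:
    image_data = f.read()

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/octet-stream",
}
params = {"visualFeatures": "Categories,Description,Color"}

response = requests.post(analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
print(response.json())
```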

python/ComputerVision/REST/python-domain.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 
 In this quickstart, you'll use a domain model to identify landmarks or, optionally, celebrities in a remotely stored image using the Computer Vision REST API. With the [Recognize Domain Specific Content](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f311) method, you can apply a domain-specific model to recognize content within an image.
 
-You can run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
 
@@ -30,7 +30,7 @@ To create and run the landmark sample, do the following steps:
 import os
 import sys
 import requests
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 from PIL import Image
@@ -112,7 +112,7 @@ To create and run the landmark sample, do the following steps:
 
 ```python
 import requests
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 from PIL import Image
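python-domain.md applies a domain-specific model (landmarks, or optionally celebrities) through the Recognize Domain Specific Content operation. A minimal sketch of the landmarks call (placeholder endpoint, key, and image URL; an assumption, not code from this commit):

```python
import requests

# Placeholder values -- substitute your own resource endpoint and key.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
landmark_url = endpoint + "vision/v3.1/models/landmarks/analyze"
image_url = "https://example.com/landmark.jpg"  # image of a well-known landmark

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
data = {"url": image_url}

response = requests.post(landmark_url, headers=headers, json=data)
response.raise_for_status()
analysis = response.json()

# Detected landmarks (if any) are listed under the top-level "result" object.
print(analysis.get("result", {}).get("landmarks", []))
```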

python/ComputerVision/REST/python-hand-text.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ In this quickstart, you'll extract printed and handwritten text from an image us
 
 ---
 
-You can run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![The launch Binder button](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
 
@@ -37,7 +37,7 @@ import os
 import sys
 import requests
 import time
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 from matplotlib.patches import Polygon
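python-hand-text.md extracts printed and handwritten text with the asynchronous Read operation, which is why the quickstart imports `time`: the initial POST returns an Operation-Location URL that the client polls until the analysis finishes. A hedged sketch of that pattern (placeholder endpoint, key, and image URL; not from this commit):

```python
import time
import requests

# Placeholder values -- substitute your own resource endpoint and key.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
read_url = endpoint + "vision/v3.1/read/analyze"
image_url = "https://example.com/handwritten-note.jpg"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}

# Submit the image; the operation URL to poll comes back in a response header.
response = requests.post(read_url, headers=headers, json={"url": image_url})
response.raise_for_status()
operation_url = response.headers["Operation-Location"]

# Poll until the operation leaves the "notStarted"/"running" states.
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result.get("status") not in ("notStarted", "running"):
        break
    time.sleep(1)

if result.get("status") == "succeeded":
    for page in result["analyzeResult"]["readResults"]:
        for line in page["lines"]:
            print(line["text"])
```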

python/ComputerVision/REST/python-print-text.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@
 
 In this quickstart, you will extract printed text with optical character recognition (OCR) from an image using the Computer Vision REST API. With the [OCR](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f20d) method, you can detect printed text in an image and extract recognized characters into a machine-usable character stream.
 
-You can run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
 
@@ -35,7 +35,7 @@ To create and run the sample, do the following steps:
 import os
 import sys
 import requests
-# If you are using a Jupyter notebook, uncomment the following line.
+# If you are using a Jupyter Notebook, uncomment the following line.
 # %matplotlib inline
 import matplotlib.pyplot as plt
 from matplotlib.patches import Rectangle
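python-print-text.md calls the synchronous OCR operation, which returns recognized text nested as regions, lines, and words. A minimal sketch of that call (placeholder endpoint, key, and image URL; not from this commit):

```python
import requests

# Placeholder values -- substitute your own resource endpoint and key.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
ocr_url = endpoint + "vision/v3.1/ocr"
image_url = "https://example.com/printed-text.png"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"language": "unk", "detectOrientation": "true"}
data = {"url": image_url}

response = requests.post(ocr_url, headers=headers, params=params, json=data)
response.raise_for_status()
ocr_result = response.json()

# Recognized text is nested as regions -> lines -> words.
for region in ocr_result.get("regions", []):
    for line in region["lines"]:
        print(" ".join(word["text"] for word in line["words"]))
```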

python/ComputerVision/REST/python-thumb.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ To create and run the sample, copy the following code into the code editor.
 import os
 import sys
 import requests
-# If you are using a Jupyter notebook, uncomment the following lines.
+# If you are using a Jupyter Notebook, uncomment the following lines.
 # %matplotlib inline
 # import matplotlib.pyplot as plt
 from PIL import Image
@@ -73,7 +73,7 @@ A successful response is returned as binary data which represents the image data
 
 ## Run in Jupyter (optional)
 
-You can optionally run this quickstart in a step-by step fashion using a Jupyter notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
+You can optionally run this quickstart in a step-by step fashion using a Jupyter Notebook on [MyBinder](https://mybinder.org). To launch Binder, select the following button:
 
 [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
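python-thumb.md requests a smart-cropped thumbnail; as the diff context notes, a successful response is binary image data rather than JSON. A rough sketch of that call (placeholder endpoint, key, and image URL, assuming the v3.1 generateThumbnail route; not code from this commit):

```python
import requests

# Placeholder values -- substitute your own resource endpoint and key.
subscription_key = "<your-subscription-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
thumbnail_url = endpoint + "vision/v3.1/generateThumbnail"
image_url = "https://example.com/photo.jpg"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"width": 100, "height": 100, "smartCropping": "true"}
data = {"url": image_url}

response = requests.post(thumbnail_url, headers=headers, params=params, json=data)
response.raise_for_status()

# The response body is the binary thumbnail image; save it to disk.
with open("thumbnail.jpg", "wb") as f:
    f.write(response.content)
```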
