python/FormRecognizer/rest/python-labeled-data.md (8 additions, 8 deletions)
@@ -65,7 +65,7 @@ You need OCR result files in order for the service to consider the corresponding
 1. Call the **[Get Analyze Layout Result](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/GetAnalyzeLayoutResult)** API, using the operation ID from the previous step.
 1. Get the response and write the content to a file. For each source form, the corresponding OCR file should have the original file name appended with `.ocr.json`. The OCR JSON output should have the following format. See the [sample OCR file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/Invoice_1.pdf.ocr.json) for a full example.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```json
 {
   "status": "succeeded",
@@ -114,7 +114,7 @@ You need OCR result files in order for the service to consider the corresponding
   },
   ...
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```json
 {
   "status": "succeeded",
@@ -254,7 +254,7 @@ To train a model with labeled data, call the **[Train Custom Model](https://west
 1. Replace `<SAS URL>` with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
 1. Replace `<Blob folder name>` with the folder name in your blob container where the input data is located. Or, if your data is at the root, leave this blank and remove the `"prefix"` field from the body of the HTTP request.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```python
 ########### Python Form Recognizer Labeled Async Train #############
 import json
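For orientation, the JSON body that this training script posts has roughly the following shape; the field names assume the standard v2.0 Train Custom Model request, and the values are placeholders:

```python
# Sketch of the training request body. The "prefix" field is the one you remove
# when your training data sits at the container root. Values are placeholders.
import json

body = {
    "source": "<SAS URL>",
    "sourceFilter": {
        "prefix": "<Blob folder name>",
        "includeSubFolders": False,
    },
    "useLabelFile": True,  # labeled training; the unlabeled quickstart omits this
}
print(json.dumps(body, indent=2))
```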
@@ -295,7 +295,7 @@ except Exception as e:
     print("POST model failed:\n%s" % str(e))
     quit()
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Labeled Async Train #############
 import json
@@ -455,7 +455,7 @@ Next, you'll use your newly trained model to analyze a document and extract key-
 1. Replace `<file type>` with the file type. Supported types: `application/pdf`, `image/jpeg`, `image/png`, `image/tiff`.
 1. Replace `<subscription key>` with your subscription key.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```python
 ########### Python Form Recognizer Async Analyze #############
 import json
@@ -491,7 +491,7 @@ Next, you'll use your newly trained model to analyze a document and extract key-
     print("POST analyze failed:\n%s" % str(e))
     quit()
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Async Analyze #############
 import json
@@ -575,7 +575,7 @@ print("Analyze operation did not complete within the allocated time.")

 When the process is completed, you'll receive a `202 (Success)` response with JSON content in the following format. The response has been shortened for simplicity. The main key/value associations are in the `"documentResults"` node. The `"selectionMarks"` node (in v2.1 preview) shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected". The Layout API results (the content and positions of all the text in the document) are in the `"readResults"` node.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```json
 {
   "status": "succeeded",
@@ -710,7 +710,7 @@ When the process is completed, you'll receive a `202 (Success)` response with JS
python/FormRecognizer/rest/python-layout.md (6 additions, 6 deletions)
@@ -39,7 +39,7 @@ To start analyzing the layout, you call the **[Analyze Layout](https://westus2.d
 1. Replace `<path to your form>` with the path to your local form document.
 1. Replace `<subscription key>` with the subscription key you copied from the previous step.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```python
 ########### Python Form Recognizer Async Layout #############
@@ -73,7 +73,7 @@ To start analyzing the layout, you call the **[Analyze Layout](https://westus2.d
     print("POST analyze failed:\n%s"%str(e))
     quit()
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Async Layout #############
@@ -118,11 +118,11 @@ To start analyzing the layout, you call the **[Analyze Layout](https://westus2.d

 You'll receive a `202 (Success)` response that includes an **Operation-Location** header, which the script will print to the console. This header contains an operation ID that you can use to query the status of the asynchronous operation and get the results. In the following example value, the string after `operations/` is the operation ID.
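Programmatically, that operation ID is just the final path segment of the **Operation-Location** value. A small helper sketch (the example URL shape is illustrative, not an exact endpoint):

```python
# Sketch: derive the operation ID from the Operation-Location header value
# printed by the script above. The ID is the last path segment of the URL
# (the text above describes it as the string after "operations/").
def operation_id(operation_location: str) -> str:
    return operation_location.rstrip("/").rsplit("/", 1)[-1]

# Illustrative usage with a made-up URL; substitute the printed header value.
example_url = "https://<endpoint>/formrecognizer/v2.1-preview/layout/analyzeResults/1a2b3c4d"
print(operation_id(example_url))  # -> 1a2b3c4d
```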
python/FormRecognizer/rest/python-receipts.md (4 additions, 4 deletions)
@@ -40,7 +40,7 @@ To start analyzing a receipt, you call the **[Analyze Receipt](https://westus2.d
 1. Replace `<path to your receipt>` with the path to your local form document.
 1. Replace `<subscription key>` with the subscription key you copied from the previous step.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)

 ```python
 ########### Python Form Recognizer Async Receipt #############
@@ -80,7 +80,7 @@ To start analyzing a receipt, you call the **[Analyze Receipt](https://westus2.d
     quit()
 ```

-# [v2.1-preview.2](#tab/v2-1)
+#### [v2.1-preview.2](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Async Receipt #############
@@ -133,11 +133,11 @@ To start analyzing a receipt, you call the **[Analyze Receipt](https://westus2.d

 You'll receive a `202 (Success)` response that includes an **Operation-Location** header, which the script will print to the console. This header contains an operation ID that you can use to query the status of the asynchronous operation and get the results. In the following example value, the string after `operations/` is the operation ID.
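A minimal polling sketch for the step that follows, assuming `apim_key` and `get_url` are placeholders for your subscription key and the Operation-Location value printed by the script (the retry count and delay are arbitrary):

```python
########### Poll for the receipt result (illustrative sketch) #############
import json
import time
from requests import get

apim_key = "<subscription key>"
get_url = "<Operation-Location URL from the Analyze Receipt response>"

n_tries = 15
for _ in range(n_tries):
    resp = get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key})
    result = resp.json()
    status = result.get("status")
    if status == "succeeded":
        print(json.dumps(result, indent=2))  # full receipt analysis JSON
        break
    if status == "failed":
        print("Receipt analysis failed.")
        break
    time.sleep(2)  # still "running" or "notStarted"; wait and retry
else:
    print("Analyze operation did not complete within the allocated time.")
```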
python/FormRecognizer/rest/python-train-extract.md (6 additions, 6 deletions)
@@ -47,7 +47,7 @@ To train a Form Recognizer model with the documents in your Azure blob container
 1. Replace `<Blob folder name>` with the path to the folder in blob storage where your forms are located. If your forms are at the root of your container, leave this string empty.
 1. Optionally replace `<your model name>` with the friendly name you'd like to give to your model.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```python
 ########### Python Form Recognizer Labeled Async Train #############
 import json
@@ -89,7 +89,7 @@ To train a Form Recognizer model with the documents in your Azure blob container
     print("POST model failed:\n%s"%str(e))
     quit()
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Labeled Async Train #############
 import json
@@ -252,7 +252,7 @@ Next, you'll use your newly trained model to analyze a document and extract key-
 1. Replace `<file type>` with the file type. Supported types: `application/pdf`, `image/jpeg`, `image/png`, `image/tiff`.
 1. Replace `<subscription key>` with your subscription key.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```python
 ########### Python Form Recognizer Async Analyze #############
 import json
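To see where those two placeholders land, here is a sketch of the analyze POST itself; the endpoint, model ID, and file path are placeholders, and the URL follows the v2.0 custom-model route used by this quickstart:

```python
########### Submit a form for analysis (illustrative sketch) #############
# The <file type> value becomes the Content-Type header and <subscription key>
# becomes Ocp-Apim-Subscription-Key; endpoint, model ID, and file path are
# placeholders you replace with your own values.
from requests import post

endpoint = "https://<your endpoint>.cognitiveservices.azure.com"
apim_key = "<subscription key>"
model_id = "<model ID from the training step>"
post_url = endpoint + "/formrecognizer/v2.0/custom/models/%s/analyze" % model_id

headers = {
    "Content-Type": "<file type>",  # e.g. application/pdf
    "Ocp-Apim-Subscription-Key": apim_key,
}

with open("<path to your form>", "rb") as f:
    data_bytes = f.read()

resp = post(url=post_url, data=data_bytes, headers=headers)
print(resp.status_code)  # expect 202 with an Operation-Location header
```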
@@ -288,7 +288,7 @@ Next, you'll use your newly trained model to analyze a document and extract key-
     print("POST analyze failed:\n%s"%str(e))
     quit()
 ```
-# [v2.1 preview](#tab/v2-1)
+#### [v2.1 preview](#tab/v2-1)
 ```python
 ########### Python Form Recognizer Async Analyze #############
 import json
@@ -375,7 +375,7 @@ When the process is completed, you'll receive a `200 (Success)` response with JS

 This sample JSON output has been shortened for simplicity.

-# [v2.0](#tab/v2-0)
+#### [v2.0](#tab/v2-0)
 ```JSON
 {
   "status": "succeeded",
@@ -502,7 +502,7 @@ This sample JSON output has been shortened for simplicity.