fix: MD033/no-inline-html
- Escape elements swallowed by Markdown
- Remove some unrendered placeholders that were not updated
nschonni committed Apr 12, 2019
1 parent 821ab66 commit 54aa238
Showing 39 changed files with 64 additions and 56 deletions.
@@ -116,7 +116,7 @@ To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin
a. In the **Sign-on URL** text box, type a URL:
`https://cxsuite.foresee.com/`

b. In the **Identifier** textbox, type a URL using the following pattern: https://www.okta.com/saml2/service-provider/<UniqueID>
b. In the **Identifier** textbox, type a URL using the following pattern: `https://www.okta.com/saml2/service-provider/<UniqueID>`

> [!Note]
> If the **Identifier** value do not get auto polulated, then please fill in the value manually according to above pattern. The Identifier value is not real. Update this value with the actual Identifier. Contact [ForeSee CX Suite Client support team](mailto:[email protected]) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
2 changes: 1 addition & 1 deletion articles/active-directory/saas-apps/sap-fiori-tutorial.md
@@ -131,7 +131,7 @@ To configure Azure AD single sign-on with SAP Fiori, perform the following steps
6. Replace **Provider Name** from T01122 to `http://T01122` and click on **Save**.

> [!NOTE]
> By default provider name come as <sid><client> format but Azure AD expects name in the format of <protocol>://<name>, recommending to maintain provider name as https://<sid><client> to allow multiple SAP Fiori ABAP engines to configure in Azure AD.
> By default provider name come as \<sid>\<client> format but Azure AD expects name in the format of \<protocol>://\<name>, recommending to maintain provider name as https\://\<sid>\<client> to allow multiple SAP Fiori ABAP engines to configure in Azure AD.

![The Certificate download link](./media/sapfiori-tutorial/tutorial-sapnetweaver-providername.png)

2 changes: 1 addition & 1 deletion articles/cognitive-services/Acoustics/faq.md
@@ -22,7 +22,7 @@ The Project Acoustics suite of plugins is an acoustics system that calculates so

You can download the [Project Acoustics Unity plugin](https://www.microsoft.com/download/details.aspx?id=57346) or the [Project Acoustics Unreal plugin](https://www.microsoft.com/download/details.aspx?id=58090).

## Does Project Acoustics support <x> platform?
## Does Project Acoustics support &lt;x&gt; platform?

Project Acoustics platform support evolves based on customer needs. Please contact us on the [Project Acoustics forums](https://social.msdn.microsoft.com/Forums/en-US/home?forum=projectacoustics) to inquire about support for additional platforms.

@@ -172,7 +172,7 @@ Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**
- **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.

> [!NOTE]
> **CreateVideoReviews** returns an IList<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.

Add the following method definition to namespace VideoReviews, class Program.
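
A rough sketch of such a method follows, for orientation only: it is not the quickstart's actual definition, and the parameter order and the `CreateVideoReviewsBodyItem` members are assumptions based on the Content Moderator .NET SDK rather than on this commit.

```csharp
// Hedged sketch; assumes "using System.Collections.Generic;",
// "using Microsoft.Azure.CognitiveServices.ContentModerator;", and
// "using Microsoft.Azure.CognitiveServices.ContentModerator.Models;".
private static string CreateVideoReview(ContentModeratorClient client, string teamName, string contentId, string videoUrl)
{
    var body = new List<CreateVideoReviewsBodyItem>
    {
        new CreateVideoReviewsBodyItem
        {
            Content = videoUrl,        // URL of the video to review
            ContentId = contentId,     // your identifier; not the same as the returned review ID
            Status = "Unpublished"     // keep unpublished so frames and a transcript can still be added
        }
    };

    // CreateVideoReviews returns an IList<string> of review IDs (GUID-like strings).
    IList<string> reviewIds = client.Reviews.CreateVideoReviews("application/json", teamName, body);
    return reviewIds[0];
}
```

Keep the returned review ID for the later calls that add frames and a transcript; the **ContentId** you pass in is not interchangeable with it.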

@@ -159,7 +159,7 @@ Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**
- **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.

> [!NOTE]
> **CreateVideoReviews** returns an IList<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.

Add the following method definition to namespace VideoReviews, class Program.

4 changes: 2 additions & 2 deletions articles/data-catalog/data-catalog-developer-concepts.md
@@ -169,9 +169,9 @@ Common types can be used as the types for properties, but are not Items.

<tr><td>DataSourceLocation</td><td></td><td></td><td></td></tr>
<tr><td></td><td>protocol</td><td>string</td><td>Required. Describes a protocol used to communicate with the data source. For example: "tds" for SQl Server, "oracle" for Oracle, etc. Refer to <a href="https://docs.microsoft.com/azure/data-catalog/data-catalog-dsr">Data source reference specification - DSL Structure</a> for the list of currently supported protocols.</td></tr>
<tr><td></td><td>address</td><td>Dictionary<string, object></td><td>Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol.</td></tr>
<tr><td></td><td>address</td><td>Dictionary&lt;string, object&gt;</td><td>Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol.</td></tr>
<tr><td></td><td>authentication</td><td>string</td><td>Optional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc.</td></tr>
<tr><td></td><td>connectionProperties</td><td>Dictionary<string, object></td><td>Optional. Additional information on how to connect to a data source.</td></tr>
<tr><td></td><td>connectionProperties</td><td>Dictionary&lt;string, object&gt;</td><td>Optional. Additional information on how to connect to a data source.</td></tr>

<tr><td>SecurityPrincipal</td><td></td><td></td><td>The backend does not perform any validation of provided properties against AAD during publishing.</td></tr>
<tr><td></td><td>upn</td><td>string</td><td>Unique email address of user. Must be specified if objectId is not provided or in the context of "lastRegisteredBy" property, otherwise optional.</td></tr>
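
To make the shape of these properties concrete, here is a hedged C# sketch of a DataSourceLocation-style payload built with `Dictionary<string, object>`. The "tds" address keys are modeled on the SQL Server example in the DSR reference, but treat the exact keys and values as assumptions rather than a definitive schema.

```csharp
// Illustrative only; assumes "using System.Collections.Generic;" and the Newtonsoft.Json package.
var dataSourceLocation = new Dictionary<string, object>
{
    ["protocol"] = "tds",                                     // required: protocol used to reach the source
    ["address"] = new Dictionary<string, object>              // required: protocol-specific identity
    {
        ["server"] = "myserver.contoso.com",
        ["database"] = "SalesDb",
        ["object"] = "Customers",
        ["objectType"] = "Table"
    },
    ["authentication"] = "windows",                           // optional
    ["connectionProperties"] = new Dictionary<string, object> // optional connection extras
    {
        ["timeout"] = 30
    }
};

string json = Newtonsoft.Json.JsonConvert.SerializeObject(dataSourceLocation, Newtonsoft.Json.Formatting.Indented);
```
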
2 changes: 0 additions & 2 deletions articles/data-catalog/data-catalog-dsr.md
@@ -64,10 +64,8 @@ You can publish metadata by using a public API or a click-once registration tool
<td>✓</td>
<td>✓</td>
<td>
<font size="2">
</td>
<td>
<font size="2">
</td>
</tr>
<tr>
2 changes: 1 addition & 1 deletion articles/databox/data-box-deploy-copy-data.md
@@ -128,7 +128,7 @@ After you've connected to the SMB share, begin data copy. You can use any SMB co
|/z | Copies files in Restart mode, use this if the environment is unstable. This option reduces throughput due to additional logging. |
| /zb | Uses Restart mode. If access is denied, this option uses Backup mode. This option reduces throughput due to checkpointing. |
|/efsraw | Copies all encrypted files in EFS raw mode. Use only with encrypted files. |
|log+:<LogFile>| Appends the output to the existing log file.|
|log+:\<LogFile>| Appends the output to the existing log file.|
The following sample shows the output of the robocopy command to copy files to the Data Box.
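
As a hedged aside on combining these options, the sketch below shells out to robocopy with restart mode (`/z`) and an appended log (`/log+:`); every path and share name in it is a placeholder rather than a value from the article.

```csharp
// Illustration only: copy a local folder to the Data Box SMB share with /z and /log+:.
// The source folder, device share, and log path are all placeholders.
var startInfo = new System.Diagnostics.ProcessStartInfo
{
    FileName = "robocopy.exe",
    Arguments = @"C:\SourceData \\device-name\share-name /z /log+:C:\Logs\databox-copy.log",
    UseShellExecute = false
};

using (var robocopy = System.Diagnostics.Process.Start(startInfo))
{
    robocopy.WaitForExit();
    // Robocopy exit codes below 8 indicate success (possibly with skipped files).
}
```
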
2 changes: 1 addition & 1 deletion articles/dev-spaces/how-dev-spaces-works.md
@@ -333,7 +333,7 @@ The *install.set* property allows you to configure one or more values you want r

In the above example, the *install.set.replicaCount* property tells the controller how many instances of your application to run in your dev space. Depending on your scenario, you can increase this value, but it will have an impact on attaching a debugger to your application's pod. For more information, see the [troubleshooting article](troubleshooting.md).

In the generated Helm chart, the container image is set to *{{ .Values.image.repository }}:{{ .Values.image.tag }}*. The `azds.yaml` file defines *install.set.image.tag* property as *$(tag)* by default, which is used as the value for *{{ .Values.image.tag }}*. By setting the *install.set.image.tag* property in this way, it allows the container image for your application to be tagged in a distinct way when running Azure Dev Spaces. In this specific case, the image is tagged as *<value from image.repository>:$(tag)*. You must use the *$(tag)* variable as the value of *install.set.image.tag* in order for Dev Spaces recognize and locate the container in the AKS cluster.
In the generated Helm chart, the container image is set to *{{ .Values.image.repository }}:{{ .Values.image.tag }}*. The `azds.yaml` file defines *install.set.image.tag* property as *$(tag)* by default, which is used as the value for *{{ .Values.image.tag }}*. By setting the *install.set.image.tag* property in this way, it allows the container image for your application to be tagged in a distinct way when running Azure Dev Spaces. In this specific case, the image is tagged as *\<value from image.repository>:$(tag)*. You must use the *$(tag)* variable as the value of *install.set.image.tag* in order for Dev Spaces recognize and locate the container in the AKS cluster.

In the above example, `azds.yaml` defines *install.set.ingress.hosts*. The *install.set.ingress.hosts* property defines a host name format for public endpoints. This property also uses *$(spacePrefix)*, *$(rootSpacePrefix)*, and *$(hostSuffix)*, which are values provided by the controller.

2 changes: 1 addition & 1 deletion articles/dms/tutorial-mysql-azure-mysql-online.md
@@ -59,7 +59,7 @@ To complete this tutorial, you need to:
- Enable binary logging in the my.ini (Windows) or my.cnf (Unix) file in source database by using the following configuration:
- **server_id** = 1 or greater (relevant only for MySQL 5.6)
- **log-bin** =<path> (relevant only for MySQL 5.6)
- **log-bin** =\<path> (relevant only for MySQL 5.6)
For example: log-bin = E:\MySQL_logs\BinLog
- **binlog_format** = row
2 changes: 1 addition & 1 deletion articles/dns/dns-zones-records.md
@@ -130,7 +130,7 @@ At the level of the Azure DNS REST API, Etags are specified using HTTP headers.
| Header | Behavior |
| --- | --- |
| None |PUT always succeeds (no Etag checks) |
| If-match <etag> |PUT only succeeds if resource exists and Etag matches |
| If-match \<etag> |PUT only succeeds if resource exists and Etag matches |
| If-match * |PUT only succeeds if resource exists |
| If-none-match * |PUT only succeeds if resource does not exist |
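
As a hedged illustration of how these preconditions look on the wire (the record-set URI, token, Etag, and JSON body below are placeholders, not values from the article), a raw REST call with `HttpClient` passes the Etag in an `If-Match` header:

```csharp
// Sketch only, inside an async method: PUT a record set and require that the stored Etag still matches.
// Assumes "using System.Net;", "using System.Net.Http;", "using System.Net.Http.Headers;", "using System.Text;".
string recordSetUri = "https://management.azure.com/<record-set-resource-id>?api-version=<api-version>";
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Put, recordSetUri);
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", "<access-token>");
request.Headers.TryAddWithoutValidation("If-Match", "<etag-from-last-read>");
request.Content = new StringContent("<record-set-json>", Encoding.UTF8, "application/json");

HttpResponseMessage response = await client.SendAsync(request);
if (response.StatusCode == HttpStatusCode.PreconditionFailed)
{
    // 412: someone changed the record set since it was read; re-read it and retry.
}
```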

2 changes: 1 addition & 1 deletion articles/hdinsight/hadoop/apache-hadoop-debug-jobs.md
@@ -29,7 +29,7 @@ When you create an HDInsight cluster, six tables are automatically created for L
* ambariserverlog
* ambariagentlog

The table file names are **u<ClusterName>DDMonYYYYatHHMMSSsss<TableName>**.
The table file names are **u\<ClusterName>DDMonYYYYatHHMMSSsss\<TableName>**.

These tables contain the following fields:

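As a purely hypothetical illustration of the **u\<ClusterName>DDMonYYYYatHHMMSSsss\<TableName>** pattern shown above (the cluster name, timestamp, and format string below are assumptions, not values from a real cluster):

```csharp
// Hypothetical example of the log-table naming pattern; none of these values are real.
string clusterName = "mycluster";
string tableName = "hadoopservicelog";   // one of the six tables listed above
string created = new DateTime(2019, 4, 12, 9, 30, 15, 123)
    .ToString("ddMMMyyyy'at'HHmmssfff"); // "12Apr2019at093015123"
string logTableName = $"u{clusterName}{created}{tableName}";
// -> "umycluster12Apr2019at093015123hadoopservicelog"
```
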
@@ -134,7 +134,7 @@ first_name = MSFT_TEST_636573304831318844

last_name = MSFT_TEST_636573304831318844

lead_source = MSFT_TEST_636573304831318844-MSFT_TEST_636573304831318844|<Offer Name>
lead_source = MSFT_TEST_636573304831318844-MSFT_TEST_636573304831318844|\<Offer Name>

oid = 00Do0000000ZHog

@@ -27,7 +27,7 @@ The following table lists error codes that could be returned in case an error wa
| Unknown |Unknown error while executing the task |
| ErrorDownloadingInputAssetMalformedContent |Category of errors that covers errors in downloading input asset such as bad file names, zero length files, incorrect formats and so on. |
| ErrorDownloadingInputAssetServiceFailure |Category of errors that covers problems on the service side - for example network or storage errors while downloading. |
| ErrorParsingConfiguration |Category of errors where task <see cref="MediaTask.PrivateData"/> (configuration) is not valid, for example the configuration is not a valid system preset or it contains invalid XML. |
| ErrorParsingConfiguration |Category of errors where task \<see cref="MediaTask.PrivateData"/> (configuration) is not valid, for example the configuration is not a valid system preset or it contains invalid XML. |
| ErrorExecutingTaskMalformedContent |Category of errors during the execution of the task where issues inside the input media files cause failure. |
| ErrorExecutingTaskUnsupportedFormat |Category of errors where the media processor cannot process the files provided - media format not supported, or does not match the Configuration. For example, trying to produce an audio-only output from an asset that has only video |
| ErrorProcessingTask |Category of other errors that the media processor encounters during the processing of the task that are unrelated to content. |
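
A hedged sketch of how these codes typically surface through the Media Services v2 .NET SDK (Microsoft.WindowsAzure.MediaServices.Client); the `job` variable and the exact property names are assumptions rather than code from this article:

```csharp
// Sketch only: list the error category and message for each failed task of a job.
foreach (ITask task in job.Tasks)
{
    foreach (ErrorDetail error in task.ErrorDetails)
    {
        // error.Code carries the category from the table above, for example
        // ErrorDownloadingInputAssetMalformedContent.
        Console.WriteLine($"Task '{task.Name}' failed: {error.Code} - {error.Message}");
    }
}
```
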
@@ -204,7 +204,7 @@ public ActionResult Reports()
}
```

Task<ActionResult> Report(string reportId)
Task\<ActionResult> Report(string reportId)

```csharp
public async Task<ActionResult> Report(string reportId)
@@ -79,7 +79,7 @@ Add an ```enriched``` field as part of your index definition for debugging purpo

Missing content could be the result of documents getting dropped during indexing. Free and Basic tiers have low limits on document size. Any file exceeding the limit is dropped during indexing. You can check for dropped documents in the Azure portal. In the search service dashboard, double-click the Indexers tile. Review the ratio of successful documents indexed. If it is not 100%, you can click the ratio to get more detail.

If the problem is related to file size, you might see an error like this: "The blob <file-name>" has the size of <file-size> bytes, which exceeds the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md).
If the problem is related to file size, you might see an error like this: "The blob \<file-name>" has the size of \<file-size> bytes, which exceeds the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md).

A second reason for content failing to appear might be related input/output mapping errors. For example, an output target name is "People" but the index field name is lower-case "people". The system could return 201 success messages for the entire pipeline so you think indexing succeeded, when in fact a field is empty.
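
One hedged way to check for dropped or failed documents programmatically; the service name, admin key, and indexer name below are placeholders, and the types are assumed from the Microsoft.Azure.Search .NET SDK rather than taken from this article:

```csharp
// Sketch only: inspect the most recent indexer run for dropped or failed documents.
// Assumes "using Microsoft.Azure.Search;" and "using Microsoft.Azure.Search.Models;".
var serviceClient = new SearchServiceClient("<search-service-name>",
    new SearchCredentials("<admin-api-key>"));

IndexerExecutionInfo info = serviceClient.Indexers.GetStatus("<indexer-name>");
IndexerExecutionResult last = info.LastResult;

Console.WriteLine($"Status: {last.Status}, processed: {last.ItemCount}, failed: {last.FailedItemCount}");
foreach (var error in last.Errors)
{
    // Oversized blobs and mis-cased output field mappings both show up here with the offending key.
    Console.WriteLine($"{error.Key}: {error.ErrorMessage}");
}
```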

2 changes: 1 addition & 1 deletion articles/site-recovery/hyper-v-azure-troubleshoot.md
@@ -124,7 +124,7 @@ An app-consistent snapshot is a point-in-time snapshot of the application data i

2. To generate VSS snapshots for the VM, check that Hyper-V Integration Services are installed on the VM, and that the Backup (VSS) Integration Service is enabled.
- Ensure that the Integration Services VSS service/daemons are running on the guest, and are in an **OK** state.
- You can check this from an elevated PowerShell session on the Hyper-V host with command **et-VMIntegrationService -VMName<VMName>-Name VSS** You can also get this information by logging into the guest VM. [Learn more](https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-integration-services).
- You can check this from an elevated PowerShell session on the Hyper-V host with command **et-VMIntegrationService -VMName\<VMName>-Name VSS** You can also get this information by logging into the guest VM. [Learn more](https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-integration-services).
- Ensure that the Backup/VSS integration Services on the VM are running and in healthy state. If not, restart these services, and the Hyper-V Volume Shadow Copy requestor service on the Hyper-V host server.

### Common errors
@@ -74,7 +74,7 @@ You can customize a recovery plan by adding a script or manual action. Note that
a. Type in a name for the action, and type in action instructions. The person running the failover will see these instructions.
b. Specify whether you want to add the manual action for all types of failover (Test, Failover, Planned failover (if relevant)). Then click **OK**.
4. If you want to add a script, do the following:
a. If you're adding a VMM script, select **Failover to VMM script**, and in **Script Path** type the relative path to the share. For example, if the share is located at \\<VMMServerName>\MSSCVMMLibrary\RPScripts, specify the path: \RPScripts\RPScript.PS1.
a. If you're adding a VMM script, select **Failover to VMM script**, and in **Script Path** type the relative path to the share. For example, if the share is located at \\\<VMMServerName>\MSSCVMMLibrary\RPScripts, specify the path: \RPScripts\RPScript.PS1.
b. If you're adding an Azure automation run book, specify the **Azure Automation Account** in which the runbook is located, and select the appropriate **Azure Runbook Script**.
5. Run a test failover of the recovery plan to ensure that the script works as expected.

@@ -259,7 +259,7 @@ Use the following steps to create a retention disk:
Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command.
**/dev/mapper/<Retention disks multipath id> /mnt/retention ext4 rw 0 0**
**/dev/mapper/\<Retention disks multipath id> /mnt/retention ext4 rw 0 0**
Select **Esc**, and then type **:wq** (write and quit) to close the editor window.
@@ -37,7 +37,7 @@ Dynamic data masking can be configured by the Azure SQL Database admin, server a

| Masking Function | Masking Logic |
| --- | --- |
| **Default** |**Full masking according to the data types of the designated fields**<br/><br/>• Use XXXX or fewer Xs if the size of the field is less than 4 characters for string data types (nchar, ntext, nvarchar).<br/>• Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).<br/>• Use 01-01-1900 for date/time data types (date, datetime2, datetime, datetimeoffset, smalldatetime, time).<br/>• For SQL variant, the default value of the current type is used.<br/>• For XML the document <masked/> is used.<br/>• Use an empty value for special data types (timestamp table, hierarchyid, GUID, binary, image, varbinary spatial types). |
| **Default** |**Full masking according to the data types of the designated fields**<br/><br/>• Use XXXX or fewer Xs if the size of the field is less than 4 characters for string data types (nchar, ntext, nvarchar).<br/>• Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).<br/>• Use 01-01-1900 for date/time data types (date, datetime2, datetime, datetimeoffset, smalldatetime, time).<br/>• For SQL variant, the default value of the current type is used.<br/>• For XML the document \<masked/> is used.<br/>• Use an empty value for special data types (timestamp table, hierarchyid, GUID, binary, image, varbinary spatial types). |
| **Credit card** |**Masking method, which exposes the last four digits of the designated fields** and adds a constant string as a prefix in the form of a credit card.<br/><br/>XXXX-XXXX-XXXX-1234 |
| **Email** |**Masking method, which exposes the first letter and replaces the domain with XXX.com** using a constant string prefix in the form of an email address.<br/><br/>[email protected] |
| **Random number** |**Masking method, which generates a random number** according to the selected boundaries and actual data types. If the designated boundaries are equal, then the masking function is a constant number.<br/><br/>![Navigation pane](./media/sql-database-dynamic-data-masking-get-started/1_DDM_Random_number.png) |
@@ -83,7 +83,9 @@ The last step is to go to the target server, or servers, and generate the logins
> [!NOTE]
> If you want to grant user access to the secondary, but not to the primary, you can do that by altering the user login on the primary server by using the following syntax.
>
> ```sql
> ALTER LOGIN <login name> DISABLE
> ```
>
> DISABLE doesn’t change the password, so you can always enable it if needed.