diff --git a/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md b/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md index 31b0616437b23..7d2fceccc296f 100644 --- a/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md +++ b/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md @@ -116,7 +116,7 @@ To configure Azure AD single sign-on with ForeSee CX Suite, perform the followin a. In the **Sign-on URL** text box, type a URL: `https://cxsuite.foresee.com/` - b. In the **Identifier** textbox, type a URL using the following pattern: https://www.okta.com/saml2/service-provider/ + b. In the **Identifier** textbox, type a URL using the following pattern: `https://www.okta.com/saml2/service-provider/` > [!Note] > If the **Identifier** value do not get auto polulated, then please fill in the value manually according to above pattern. The Identifier value is not real. Update this value with the actual Identifier. Contact [ForeSee CX Suite Client support team](mailto:support@foresee.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. diff --git a/articles/active-directory/saas-apps/sap-fiori-tutorial.md b/articles/active-directory/saas-apps/sap-fiori-tutorial.md index 05a5b4d4737be..814679ce4e5e2 100644 --- a/articles/active-directory/saas-apps/sap-fiori-tutorial.md +++ b/articles/active-directory/saas-apps/sap-fiori-tutorial.md @@ -131,7 +131,7 @@ To configure Azure AD single sign-on with SAP Fiori, perform the following steps 6. Replace **Provider Name** from T01122 to `http://T01122` and click on **Save**. > [!NOTE] - > By default provider name come as format but Azure AD expects name in the format of ://, recommending to maintain provider name as https:// to allow multiple SAP Fiori ABAP engines to configure in Azure AD. + > By default provider name come as \\ format but Azure AD expects name in the format of \://\, recommending to maintain provider name as https\://\\ to allow multiple SAP Fiori ABAP engines to configure in Azure AD. ![The Certificate download link](./media/sapfiori-tutorial/tutorial-sapnetweaver-providername.png) diff --git a/articles/cognitive-services/Acoustics/faq.md b/articles/cognitive-services/Acoustics/faq.md index 2a6f4abc1f0d6..74d6b7db5e028 100644 --- a/articles/cognitive-services/Acoustics/faq.md +++ b/articles/cognitive-services/Acoustics/faq.md @@ -22,7 +22,7 @@ The Project Acoustics suite of plugins is an acoustics system that calculates so You can download the [Project Acoustics Unity plugin](https://www.microsoft.com/download/details.aspx?id=57346) or the [Project Acoustics Unreal plugin](https://www.microsoft.com/download/details.aspx?id=58090). -## Does Project Acoustics support platform? +## Does Project Acoustics support <x> platform? Project Acoustics platform support evolves based on customer needs. Please contact us on the [Project Acoustics forums](https://social.msdn.microsoft.com/Forums/en-US/home?forum=projectacoustics) to inquire about support for additional platforms. 
diff --git a/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md b/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md index f992110de53a2..0fe6df9448835 100644 --- a/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md +++ b/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md @@ -172,7 +172,7 @@ Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews** - **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it. > [!NOTE] -> **CreateVideoReviews** returns an IList<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property. +> **CreateVideoReviews** returns an IList\<string\>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property. Add the following method definition to namespace VideoReviews, class Program. diff --git a/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md b/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md index 54f52d68bdcff..47e82d6ebce5b 100644 --- a/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md +++ b/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md @@ -159,7 +159,7 @@ Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews** - **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it. > [!NOTE] -> **CreateVideoReviews** returns an IList<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property. +> **CreateVideoReviews** returns an IList\<string\>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property. Add the following method definition to namespace VideoReviews, class Program. diff --git a/articles/data-catalog/data-catalog-developer-concepts.md b/articles/data-catalog/data-catalog-developer-concepts.md index fcac9623f4cf6..80b96e5e8b7a6 100644 --- a/articles/data-catalog/data-catalog-developer-concepts.md +++ b/articles/data-catalog/data-catalog-developer-concepts.md @@ -169,9 +169,9 @@ Common types can be used as the types for properties, but are not Items. DataSourceLocation protocolstringRequired. Describes a protocol used to communicate with the data source. For example: "tds" for SQl Server, "oracle" for Oracle, etc. Refer to Data source reference specification - DSL Structure for the list of currently supported protocols. -addressDictionary<string, object>Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol. +addressDictionary<string, object>Required. 
Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol. authenticationstringOptional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc. -connectionPropertiesDictionaryOptional. Additional information on how to connect to a data source. +connectionPropertiesDictionary<string, object>Optional. Additional information on how to connect to a data source. SecurityPrincipalThe backend does not perform any validation of provided properties against AAD during publishing. upnstringUnique email address of user. Must be specified if objectId is not provided or in the context of "lastRegisteredBy" property, otherwise optional. diff --git a/articles/data-catalog/data-catalog-dsr.md b/articles/data-catalog/data-catalog-dsr.md index 188c54745e103..bf65ce4f18cf2 100644 --- a/articles/data-catalog/data-catalog-dsr.md +++ b/articles/data-catalog/data-catalog-dsr.md @@ -64,10 +64,8 @@ You can publish metadata by using a public API or a click-once registration tool ✓ ✓ - - diff --git a/articles/databox/data-box-deploy-copy-data.md b/articles/databox/data-box-deploy-copy-data.md index fca378a86b721..6050e7bad7df9 100644 --- a/articles/databox/data-box-deploy-copy-data.md +++ b/articles/databox/data-box-deploy-copy-data.md @@ -128,7 +128,7 @@ After you've connected to the SMB share, begin data copy. You can use any SMB co |/z | Copies files in Restart mode, use this if the environment is unstable. This option reduces throughput due to additional logging. | | /zb | Uses Restart mode. If access is denied, this option uses Backup mode. This option reduces throughput due to checkpointing. | |/efsraw | Copies all encrypted files in EFS raw mode. Use only with encrypted files. | -|log+:| Appends the output to the existing log file.| +|log+:\| Appends the output to the existing log file.| The following sample shows the output of the robocopy command to copy files to the Data Box. diff --git a/articles/dev-spaces/how-dev-spaces-works.md b/articles/dev-spaces/how-dev-spaces-works.md index b51a09e296694..7a69764369ca1 100644 --- a/articles/dev-spaces/how-dev-spaces-works.md +++ b/articles/dev-spaces/how-dev-spaces-works.md @@ -333,7 +333,7 @@ The *install.set* property allows you to configure one or more values you want r In the above example, the *install.set.replicaCount* property tells the controller how many instances of your application to run in your dev space. Depending on your scenario, you can increase this value, but it will have an impact on attaching a debugger to your application's pod. For more information, see the [troubleshooting article](troubleshooting.md). -In the generated Helm chart, the container image is set to *{{ .Values.image.repository }}:{{ .Values.image.tag }}*. The `azds.yaml` file defines *install.set.image.tag* property as *$(tag)* by default, which is used as the value for *{{ .Values.image.tag }}*. By setting the *install.set.image.tag* property in this way, it allows the container image for your application to be tagged in a distinct way when running Azure Dev Spaces. In this specific case, the image is tagged as *:$(tag)*. You must use the *$(tag)* variable as the value of *install.set.image.tag* in order for Dev Spaces recognize and locate the container in the AKS cluster. 
+In the generated Helm chart, the container image is set to *{{ .Values.image.repository }}:{{ .Values.image.tag }}*. The `azds.yaml` file defines *install.set.image.tag* property as *$(tag)* by default, which is used as the value for *{{ .Values.image.tag }}*. By setting the *install.set.image.tag* property in this way, it allows the container image for your application to be tagged in a distinct way when running Azure Dev Spaces. In this specific case, the image is tagged as *\:$(tag)*. You must use the *$(tag)* variable as the value of *install.set.image.tag* in order for Dev Spaces recognize and locate the container in the AKS cluster. In the above example, `azds.yaml` defines *install.set.ingress.hosts*. The *install.set.ingress.hosts* property defines a host name format for public endpoints. This property also uses *$(spacePrefix)*, *$(rootSpacePrefix)*, and *$(hostSuffix)*, which are values provided by the controller. diff --git a/articles/dms/tutorial-mysql-azure-mysql-online.md b/articles/dms/tutorial-mysql-azure-mysql-online.md index 5d57012f0b243..ee622356f8f38 100644 --- a/articles/dms/tutorial-mysql-azure-mysql-online.md +++ b/articles/dms/tutorial-mysql-azure-mysql-online.md @@ -59,7 +59,7 @@ To complete this tutorial, you need to: - Enable binary logging in the my.ini (Windows) or my.cnf (Unix) file in source database by using the following configuration: - **server_id** = 1 or greater (relevant only for MySQL 5.6) - - **log-bin** = (relevant only for MySQL 5.6) + - **log-bin** =\ (relevant only for MySQL 5.6) For example: log-bin = E:\MySQL_logs\BinLog - **binlog_format** = row diff --git a/articles/dns/dns-zones-records.md b/articles/dns/dns-zones-records.md index 9b1899c5739aa..b236cdc11754d 100644 --- a/articles/dns/dns-zones-records.md +++ b/articles/dns/dns-zones-records.md @@ -130,7 +130,7 @@ At the level of the Azure DNS REST API, Etags are specified using HTTP headers. | Header | Behavior | | --- | --- | | None |PUT always succeeds (no Etag checks) | -| If-match |PUT only succeeds if resource exists and Etag matches | +| If-match \ |PUT only succeeds if resource exists and Etag matches | | If-match * |PUT only succeeds if resource exists | | If-none-match * |PUT only succeeds if resource does not exist | diff --git a/articles/hdinsight/hadoop/apache-hadoop-debug-jobs.md b/articles/hdinsight/hadoop/apache-hadoop-debug-jobs.md index 142595375df29..d3e6020554c84 100644 --- a/articles/hdinsight/hadoop/apache-hadoop-debug-jobs.md +++ b/articles/hdinsight/hadoop/apache-hadoop-debug-jobs.md @@ -29,7 +29,7 @@ When you create an HDInsight cluster, six tables are automatically created for L * ambariserverlog * ambariagentlog -The table file names are **uDDMonYYYYatHHMMSSsss**. +The table file names are **u\DDMonYYYYatHHMMSSsss\**. 
These tables contain the following fields: diff --git a/articles/marketplace/lead-management-for-cloud-marketplace.md b/articles/marketplace/lead-management-for-cloud-marketplace.md index c6cd9bc165378..165f76e0d5403 100644 --- a/articles/marketplace/lead-management-for-cloud-marketplace.md +++ b/articles/marketplace/lead-management-for-cloud-marketplace.md @@ -134,7 +134,7 @@ first_name = MSFT_TEST_636573304831318844 last_name = MSFT_TEST_636573304831318844 -lead_source = MSFT_TEST_636573304831318844-MSFT_TEST_636573304831318844| +lead_source = MSFT_TEST_636573304831318844-MSFT_TEST_636573304831318844|\ oid = 00Do0000000ZHog diff --git a/articles/media-services/previous/media-services-encoding-error-codes.md b/articles/media-services/previous/media-services-encoding-error-codes.md index 3fa1de57c54a2..ab175dfdc4232 100644 --- a/articles/media-services/previous/media-services-encoding-error-codes.md +++ b/articles/media-services/previous/media-services-encoding-error-codes.md @@ -27,7 +27,7 @@ The following table lists error codes that could be returned in case an error wa | Unknown |Unknown error while executing the task | | ErrorDownloadingInputAssetMalformedContent |Category of errors that covers errors in downloading input asset such as bad file names, zero length files, incorrect formats and so on. | | ErrorDownloadingInputAssetServiceFailure |Category of errors that covers problems on the service side - for example network or storage errors while downloading. | -| ErrorParsingConfiguration |Category of errors where task (configuration) is not valid, for example the configuration is not a valid system preset or it contains invalid XML. | +| ErrorParsingConfiguration |Category of errors where task \ (configuration) is not valid, for example the configuration is not a valid system preset or it contains invalid XML. | | ErrorExecutingTaskMalformedContent |Category of errors during the execution of the task where issues inside the input media files cause failure. | | ErrorExecutingTaskUnsupportedFormat |Category of errors where the media processor cannot process the files provided - media format not supported, or does not match the Configuration. For example, trying to produce an audio-only output from an asset that has only video | | ErrorProcessingTask |Category of other errors that the media processor encounters during the processing of the task that are unrelated to content. | diff --git a/articles/power-bi-workspace-collections/get-started-sample.md b/articles/power-bi-workspace-collections/get-started-sample.md index 9412c6795698f..041963d633822 100644 --- a/articles/power-bi-workspace-collections/get-started-sample.md +++ b/articles/power-bi-workspace-collections/get-started-sample.md @@ -204,7 +204,7 @@ public ActionResult Reports() } ``` -Task<ActionResult> Report(string reportId) +Task\<ActionResult\> Report(string reportId) ```csharp public async Task<ActionResult> Report(string reportId) diff --git a/articles/search/cognitive-search-concept-troubleshooting.md b/articles/search/cognitive-search-concept-troubleshooting.md index 780b1ec91d520..5691be7aa83c3 100644 --- a/articles/search/cognitive-search-concept-troubleshooting.md +++ b/articles/search/cognitive-search-concept-troubleshooting.md @@ -79,7 +79,7 @@ Add an ```enriched``` field as part of your index definition for debugging purpo Missing content could be the result of documents getting dropped during indexing. Free and Basic tiers have low limits on document size. Any file exceeding the limit is dropped during indexing. 
You can check for dropped documents in the Azure portal. In the search service dashboard, double-click the Indexers tile. Review the ratio of successful documents indexed. If it is not 100%, you can click the ratio to get more detail. -If the problem is related to file size, you might see an error like this: "The blob " has the size of bytes, which exceeds the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md). +If the problem is related to file size, you might see an error like this: "The blob \" has the size of \ bytes, which exceeds the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md). A second reason for content failing to appear might be related input/output mapping errors. For example, an output target name is "People" but the index field name is lower-case "people". The system could return 201 success messages for the entire pipeline so you think indexing succeeded, when in fact a field is empty. diff --git a/articles/site-recovery/hyper-v-azure-troubleshoot.md b/articles/site-recovery/hyper-v-azure-troubleshoot.md index 4fa6c2dfc02f0..88e593147b4d3 100644 --- a/articles/site-recovery/hyper-v-azure-troubleshoot.md +++ b/articles/site-recovery/hyper-v-azure-troubleshoot.md @@ -124,7 +124,7 @@ An app-consistent snapshot is a point-in-time snapshot of the application data i 2. To generate VSS snapshots for the VM, check that Hyper-V Integration Services are installed on the VM, and that the Backup (VSS) Integration Service is enabled. - Ensure that the Integration Services VSS service/daemons are running on the guest, and are in an **OK** state. - - You can check this from an elevated PowerShell session on the Hyper-V host with command **et-VMIntegrationService -VMName-Name VSS** You can also get this information by logging into the guest VM. [Learn more](https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-integration-services). + - You can check this from an elevated PowerShell session on the Hyper-V host with command **et-VMIntegrationService -VMName\-Name VSS** You can also get this information by logging into the guest VM. [Learn more](https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-integration-services). - Ensure that the Backup/VSS integration Services on the VM are running and in healthy state. If not, restart these services, and the Hyper-V Volume Shadow Copy requestor service on the Hyper-V host server. ### Common errors diff --git a/articles/site-recovery/site-recovery-create-recovery-plans.md b/articles/site-recovery/site-recovery-create-recovery-plans.md index 4aaeccd640a7a..fa5751f5fb444 100644 --- a/articles/site-recovery/site-recovery-create-recovery-plans.md +++ b/articles/site-recovery/site-recovery-create-recovery-plans.md @@ -74,7 +74,7 @@ You can customize a recovery plan by adding a script or manual action. Note that a. Type in a name for the action, and type in action instructions. The person running the failover will see these instructions. b. Specify whether you want to add the manual action for all types of failover (Test, Failover, Planned failover (if relevant)). Then click **OK**. 4. If you want to add a script, do the following: - a. If you're adding a VMM script, select **Failover to VMM script**, and in **Script Path** type the relative path to the share. 
For example, if the share is located at \\\MSSCVMMLibrary\RPScripts, specify the path: \RPScripts\RPScript.PS1. + a. If you're adding a VMM script, select **Failover to VMM script**, and in **Script Path** type the relative path to the share. For example, if the share is located at \\\\MSSCVMMLibrary\RPScripts, specify the path: \RPScripts\RPScript.PS1. b. If you're adding an Azure automation run book, specify the **Azure Automation Account** in which the runbook is located, and select the appropriate **Azure Runbook Script**. 5. Run a test failover of the recovery plan to ensure that the script works as expected. diff --git a/articles/site-recovery/vmware-azure-install-linux-master-target.md b/articles/site-recovery/vmware-azure-install-linux-master-target.md index bd28e0399c019..61659756f1b50 100644 --- a/articles/site-recovery/vmware-azure-install-linux-master-target.md +++ b/articles/site-recovery/vmware-azure-install-linux-master-target.md @@ -259,7 +259,7 @@ Use the following steps to create a retention disk: Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command. - **/dev/mapper/ /mnt/retention ext4 rw 0 0** + **/dev/mapper/\ /mnt/retention ext4 rw 0 0** Select **Esc**, and then type **:wq** (write and quit) to close the editor window. diff --git a/articles/sql-database/sql-database-dynamic-data-masking-get-started.md b/articles/sql-database/sql-database-dynamic-data-masking-get-started.md index 2105267e5016d..b851fad616848 100644 --- a/articles/sql-database/sql-database-dynamic-data-masking-get-started.md +++ b/articles/sql-database/sql-database-dynamic-data-masking-get-started.md @@ -37,7 +37,7 @@ Dynamic data masking can be configured by the Azure SQL Database admin, server a | Masking Function | Masking Logic | | --- | --- | -| **Default** |**Full masking according to the data types of the designated fields**

• Use XXXX or fewer Xs if the size of the field is less than 4 characters for string data types (nchar, ntext, nvarchar).
• Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).
• Use 01-01-1900 for date/time data types (date, datetime2, datetime, datetimeoffset, smalldatetime, time).
• For SQL variant, the default value of the current type is used.
• For XML the document <masked/> is used.
• Use an empty value for special data types (timestamp table, hierarchyid, GUID, binary, image, varbinary spatial types). | +| **Default** |**Full masking according to the data types of the designated fields**

• Use XXXX or fewer Xs if the size of the field is less than 4 characters for string data types (nchar, ntext, nvarchar).
• Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).
• Use 01-01-1900 for date/time data types (date, datetime2, datetime, datetimeoffset, smalldatetime, time).
• For SQL variant, the default value of the current type is used.
• For XML the document \<masked/\> is used.
• Use an empty value for special data types (timestamp table, hierarchyid, GUID, binary, image, varbinary spatial types). | | **Credit card** |**Masking method, which exposes the last four digits of the designated fields** and adds a constant string as a prefix in the form of a credit card.

XXXX-XXXX-XXXX-1234 | | **Email** |**Masking method, which exposes the first letter and replaces the domain with XXX.com** using a constant string prefix in the form of an email address.

aXX@XXXX.com | | **Random number** |**Masking method, which generates a random number** according to the selected boundaries and actual data types. If the designated boundaries are equal, then the masking function is a constant number.

![Navigation pane](./media/sql-database-dynamic-data-masking-get-started/1_DDM_Random_number.png) | diff --git a/articles/sql-database/sql-database-geo-replication-security-config.md b/articles/sql-database/sql-database-geo-replication-security-config.md index faf9e20b76789..29cf7aa1f45f8 100644 --- a/articles/sql-database/sql-database-geo-replication-security-config.md +++ b/articles/sql-database/sql-database-geo-replication-security-config.md @@ -83,7 +83,9 @@ The last step is to go to the target server, or servers, and generate the logins > [!NOTE] > If you want to grant user access to the secondary, but not to the primary, you can do that by altering the user login on the primary server by using the following syntax. > +> ```sql > ALTER LOGIN DISABLE +> ``` > > DISABLE doesn’t change the password, so you can always enable it if needed. diff --git a/articles/storage/common/storage-client-side-encryption-java.md b/articles/storage/common/storage-client-side-encryption-java.md index 3ae4885f0d38d..16206c4885cfc 100644 --- a/articles/storage/common/storage-client-side-encryption-java.md +++ b/articles/storage/common/storage-client-side-encryption-java.md @@ -112,7 +112,7 @@ There are three Key Vault packages: 1. Create a secret offline and upload it to Key Vault. 2. Use the secret's base identifier as a parameter to resolve the current version of the secret for encryption and cache this information locally. Use CachingKeyResolver for caching; users are not expected to implement their own caching logic. 3. Use the caching resolver as an input while creating the encryption policy. - More information regarding Key Vault usage can be found in the encryption code samples. + More information regarding Key Vault usage can be found in the encryption code samples. ## Best practices Encryption support is available only in the storage client library for Java. @@ -136,7 +136,7 @@ While creating an EncryptionPolicy object, users can provide only a Key (impleme * The key resolver is invoked if specified to get the key. If the resolver is specified but does not have a mapping for the key identifier, an error is thrown. * If resolver is not specified but a key is specified, the key is used if its identifier matches the required key identifier. If the identifier does not match, an error is thrown. - The [encryption samples](https://github.com/Azure/azure-storage-net/tree/master/Samples/GettingStarted/EncryptionSamples) demonstrate a more detailed end-to-end scenario for blobs, queues and tables, along with Key Vault integration. + The [encryption samples](https://github.com/Azure/azure-storage-net/tree/master/Samples/GettingStarted/EncryptionSamples) demonstrate a more detailed end-to-end scenario for blobs, queues and tables, along with Key Vault integration. ### RequireEncryption mode Users can optionally enable a mode of operation where all uploads and downloads must be encrypted. In this mode, attempts to upload data without an encryption policy or download data that is not encrypted on the service will fail on the client. The **requireEncryption** flag of the request options object controls this behavior. If your application will encrypt all objects stored in Azure Storage, then you can set the **requireEncryption** property on the default request options for the service client object. 
diff --git a/articles/storage/common/storage-client-side-encryption-python.md b/articles/storage/common/storage-client-side-encryption-python.md index 948271a4fc78c..235c4d3e8321d 100644 --- a/articles/storage/common/storage-client-side-encryption-python.md +++ b/articles/storage/common/storage-client-side-encryption-python.md @@ -132,7 +132,7 @@ The key resolver must at least implement a method that, given a key id, returns * The key resolver is invoked if specified to get the key. If the resolver is specified but does not have a mapping for the key identifier, an error is thrown. * If resolver is not specified but a key is specified, the key is used if its identifier matches the required key identifier. If the identifier does not match, an error is thrown. - The encryption samples in azure.storage.samples demonstrate a more detailed end-to-end scenario for blobs, queues and tables. + The encryption samples in azure.storage.samples demonstrate a more detailed end-to-end scenario for blobs, queues and tables. Sample implementations of the KEK and key resolver are provided in the sample files as KeyWrapper and KeyResolver respectively. ### RequireEncryption mode diff --git a/articles/storage/common/storage-migration-to-premium-storage.md b/articles/storage/common/storage-migration-to-premium-storage.md index 4eaf225ea2c45..7d0f07dcbe25e 100644 --- a/articles/storage/common/storage-migration-to-premium-storage.md +++ b/articles/storage/common/storage-migration-to-premium-storage.md @@ -249,7 +249,7 @@ Now that you have your VHD in the local directory, you can use AzCopy or AzurePo Add-AzureVhd [-Destination] [-LocalFilePath] ``` -An example might be ***"https://storagesample.blob.core.windows.net/mycontainer/blob1.vhd"***. An example might be ***"C:\path\to\upload.vhd"***. +An example \ might be ***"https://storagesample.blob.core.windows.net/mycontainer/blob1.vhd"***. An example \ might be ***"C:\path\to\upload.vhd"***. ##### Option 2: Using AzCopy to upload the .vhd file Using AzCopy, you can easily upload the VHD over the Internet. Depending on the size of the VHDs, this may take time. Remember to check the storage account ingress/egress limits when using this option. See [Azure Storage Scalability and Performance Targets](storage-scalability-targets.md) for details. diff --git a/articles/storsimple/storsimple-8000-support-options.md b/articles/storsimple/storsimple-8000-support-options.md index bf66f5d9d47cc..7daf72060b660 100644 --- a/articles/storsimple/storsimple-8000-support-options.md +++ b/articles/storsimple/storsimple-8000-support-options.md @@ -116,9 +116,9 @@ StorSimple 8000 Series Storage Arrays support is provided based on how the StorS -* * Premium coverage is not available in all locations. Contact Microsoft at SSSupOps\@microsoft.com for geographical coverage before purchasing StorSimple Premium Support.* +\*\* Premium coverage is not available in all locations. Contact Microsoft at SSSupOps\@microsoft.com for geographical coverage before purchasing StorSimple Premium Support.* -***The StorSimple appliance must be deployed in a region where the customer is covered by Premier support in order to be eligible for a free upgrade to premium StorSimple support.* +\*\*\*The StorSimple appliance must be deployed in a region where the customer is covered by Premier support in order to be eligible for a free upgrade to premium StorSimple support.* ASAP+ customers can switch to subscription model where standard support is included. 
Use the StorSimple pricing calculator for subscription pricing and contact SSSupOps@microsoft.com for any questions. Switching is one way only from ASAP+ to Subscription. diff --git a/articles/storsimple/storsimple-8000-troubleshoot-deployment.md b/articles/storsimple/storsimple-8000-troubleshoot-deployment.md index 077ecf4e5f544..8da3bb85f1ae0 100644 --- a/articles/storsimple/storsimple-8000-troubleshoot-deployment.md +++ b/articles/storsimple/storsimple-8000-troubleshoot-deployment.md @@ -78,7 +78,7 @@ The following tables list the common errors that you might encounter when you: ## Errors during the optional web proxy settings | No. | Error message | Possible causes | Recommended action | | --- | --- | --- | --- | -| 1 |Invoke-HcsSetupWizard: Invalid parameter (Exception from HRESULT: 0x80070057) |One of the parameters provided for the proxy settings is not valid. |The URI is not provided in the correct format. Use the following format: http://**:** | +| 1 |Invoke-HcsSetupWizard: Invalid parameter (Exception from HRESULT: 0x80070057) |One of the parameters provided for the proxy settings is not valid. |The URI is not provided in the correct format. Use the following format: http://*\*:*\* | | 2 |Invoke-HcsSetupWizard: RPC server not available (Exception from HRESULT: 0x800706ba) |The root cause is one of the following:
  1. The cluster is not up.
  2. The passive controller cannot communicate with the active controller, and the command is run from passive controller.
|Depending on the root cause:
  1. [Contact Microsoft Support](storsimple-8000-contact-microsoft-support.md) to make sure that the cluster is up.
  2. Run the command from the active controller. If you want to run the command from the passive controller, you will need to ensure that the passive controller can communicate with the active controller. You will need to [contact Microsoft Support](storsimple-8000-contact-microsoft-support.md) if this connectivity is broken.
| | 3 |Invoke-HcsSetupWizard: RPC call failed (Exception from HRESULT: 0x800706be) |Cluster is down. |[Contact Microsoft Support](storsimple-8000-contact-microsoft-support.md) to make sure that the cluster is up. | | 4 |Invoke-HcsSetupWizard: Cluster resource not found (Exception from HRESULT: 0x8007138f) |The cluster resource is not found. This can happen when the installation was not correct. |You may need to reset the device to the factory default settings. [Contact Microsoft Support](storsimple-8000-contact-microsoft-support.md) to create a cluster resource. | diff --git a/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md b/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md index 4b449f4ab462a..9cd56bb1a8128 100644 --- a/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md +++ b/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md @@ -39,7 +39,7 @@ The format of any UDF package has the path `/UserCustomCode/CLR/*`. Dynamic Link |dateTime | dateTime | |struct | IRecord | |object | IRecord | -|Array | IArray | +|Array\ | IArray | |dictionary | IRecord | ## CodeBehind diff --git a/articles/time-series-insights/time-series-insights-parameterized-urls.md b/articles/time-series-insights/time-series-insights-parameterized-urls.md index 1055be432ea30..0cd4086260ec6 100644 --- a/articles/time-series-insights/time-series-insights-parameterized-urls.md +++ b/articles/time-series-insights/time-series-insights-parameterized-urls.md @@ -58,13 +58,13 @@ Accepted values correspond to the Time Series Insights explorer **quick time** m The `timeSeriesDefinitions=` parameter specifies the terms of a Time Series Insights view, where: -- "name":"" +- "name":"\" - The name of the *term*. -- "splitBy":"" +- "splitBy":"\" - The column name to *split by*. -- "measureName":"" +- "measureName":"\" - The column name of *measure*. -- "predicate":"" +- "predicate":"\" - The *where* clause for server-side filtering. - "useSum":"true" - This is an optional parameter that specifies using sum for your measure. Note, if "Events" is the selected measure, count is selected by default. If "Events" is not selected, average is selected by default. diff --git a/articles/virtual-machines/azure-cli-arm-commands.md b/articles/virtual-machines/azure-cli-arm-commands.md index e333a2e9fc1bc..99109617be8d5 100644 --- a/articles/virtual-machines/azure-cli-arm-commands.md +++ b/articles/virtual-machines/azure-cli-arm-commands.md @@ -966,6 +966,7 @@ Parameter options: -s, --subscription the subscription identifier
+ network lb address-pool delete [options] Removes the backend IP pool range resource from load balancer. @@ -1330,6 +1331,7 @@ Parameter options: -s, --subscription the subscription identifier
+ network public-ip list [options] Lists all public IP resources within a resource group. @@ -1351,7 +1353,9 @@ Parameter options: --json use json output -g, --resource-group the name of the resource group -s, --subscription the subscription identifier +
+ network public-ip show [options] Displays public ip properties for a public ip resource within a resource group. diff --git a/articles/virtual-machines/extensions/custom-script-linux.md b/articles/virtual-machines/extensions/custom-script-linux.md index b6906c59ae999..9b2be96674d3d 100644 --- a/articles/virtual-machines/extensions/custom-script-linux.md +++ b/articles/virtual-machines/extensions/custom-script-linux.md @@ -110,7 +110,7 @@ These items should be treated as sensitive data and specified in the extensions | type | CustomScript | string | | typeHandlerVersion | 2.0 | int | | fileUris (e.g) | https://github.com/MyProject/Archive/MyPythonScript.py | array | -| commandToExecute (e.g) | python MyPythonScript.py | string | +| commandToExecute (e.g) | python MyPythonScript.py \ | string | | script | IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo= | string | | skipDos2Unix (e.g) | false | boolean | | timestamp (e.g) | 123456789 | 32-bit integer | diff --git a/articles/virtual-machines/extensions/features-linux.md b/articles/virtual-machines/extensions/features-linux.md index 6d2df80251d08..3da35b6922e2e 100644 --- a/articles/virtual-machines/extensions/features-linux.md +++ b/articles/virtual-machines/extensions/features-linux.md @@ -333,7 +333,7 @@ The following troubleshooting steps apply to all VM extensions. 1. To check the Linux Agent Log, look at the activity when your extension was being provisioned in */var/log/waagent.log* -2. Check the actual extension logs for more details in */var/log/azure/* +2. Check the actual extension logs for more details in */var/log/azure/\* 3. Check extension-specific documentation troubleshooting sections for error codes, known issues etc. diff --git a/articles/virtual-machines/linux/create-upload-generic.md b/articles/virtual-machines/linux/create-upload-generic.md index abc37a1abf725..a858da259cf89 100644 --- a/articles/virtual-machines/linux/create-upload-generic.md +++ b/articles/virtual-machines/linux/create-upload-generic.md @@ -70,7 +70,7 @@ The mechanism for rebuilding the initrd or initramfs image may vary depending on ### Resizing VHDs VHD images on Azure must have a virtual size aligned to 1 MB. Typically, VHDs created using Hyper-V are aligned correctly. If the VHD isn't aligned correctly, you may receive an error message similar to the following when you try to create an image from your VHD. -* The VHD http://.blob.core.windows.net/vhds/MyLinuxVM.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs). +* The VHD `http://.blob.core.windows.net/vhds/MyLinuxVM.vhd` has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs). In this case, resize the VM using either the Hyper-V Manager console or the [Resize-VHD](https://technet.microsoft.com/library/hh848535.aspx) PowerShell cmdlet. If you aren't running in a Windows environment, we recommend using `qemu-img` to convert (if needed) and resize the VHD. @@ -168,13 +168,13 @@ The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin ``` Graphical and quiet boot isn't useful in a cloud environment, where we want all logs sent to the serial port. The `crashkernel` option may be left configured if needed, but note that this parameter reduces the amount of available memory in the VM by at least 128 MB, which may be problematic for smaller VM sizes. -1. Install the Azure Linux Agent. +1. Install the Azure Linux Agent. 
The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an RPM or Deb package (the package is typically called WALinuxAgent or walinuxagent). The agent can also be installed manually by following the steps in the [Linux Agent Guide](../extensions/agent-linux.md). -1. Ensure that the SSH server is installed, and configured to start at boot time. This configuration is usually the default. +1. Ensure that the SSH server is installed, and configured to start at boot time. This configuration is usually the default. -1. Don't create swap space on the OS disk. +1. Don't create swap space on the OS disk. The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (step 2 above), modify the following parameters in /etc/waagent.conf as needed. ``` @@ -184,15 +184,15 @@ The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin ResourceDisk.EnableSwap=y ResourceDisk.SwapSizeMB=2048 ## NOTE: Set this to your desired size. ``` -1. Run the following commands to deprovision the virtual machine. +1. Run the following commands to deprovision the virtual machine. ``` - sudo waagent -force -deprovision - export HISTSIZE=0 - logout + sudo waagent -force -deprovision + export HISTSIZE=0 + logout ``` - > [!NOTE] - > On Virtualbox you may see the following error after running `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical and can be ignored. + > [!NOTE] + > On Virtualbox you may see the following error after running `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical and can be ignored. * Shut down the virtual machine and upload the VHD to Azure. diff --git a/articles/virtual-machines/linux/tutorial-govern-resources.md b/articles/virtual-machines/linux/tutorial-govern-resources.md index a6db8fafa78ef..858d9fd8c1cfd 100644 --- a/articles/virtual-machines/linux/tutorial-govern-resources.md +++ b/articles/virtual-machines/linux/tutorial-govern-resources.md @@ -63,7 +63,7 @@ adgroupId=$(az ad group show --group --query objectId --output az role assignment create --assignee-object-id $adgroupId --role "Virtual Machine Contributor" --resource-group myResourceGroup ``` -If you receive an error stating **Principal does not exist in the directory**, the new group hasn't propagated throughout Azure Active Directory. Try running the command again. +If you receive an error stating **Principal \ does not exist in the directory**, the new group hasn't propagated throughout Azure Active Directory. Try running the command again. Typically, you repeat the process for *Network Contributor* and *Storage Account Contributor* to make sure users are assigned to manage the deployed resources. In this article, you can skip those steps. 
diff --git a/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md b/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md index 362ab73e6d945..e80957bc3035b 100644 --- a/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md +++ b/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md @@ -139,7 +139,7 @@ To resolve this issue, install the SQL IaaS extension before attempting to regis > Installing the SQL IaaS extension will restart the SQL Server service and should only be done during a maintenance window. For more information, see [SQL IaaS Extension installation](https://docs.microsoft.com/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-server-agent-extension#installation). -### The Resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/' under resource group '' was not found. The property 'sqlServerLicenseType' cannot be found on this object. Verify that the property exists and can be set. +### The Resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/\' under resource group '\' was not found. The property 'sqlServerLicenseType' cannot be found on this object. Verify that the property exists and can be set. This error occurs when attempting to change the licensing model on a SQL Server VM that has not been registered with the SQL resource provider. You'll need to register the resource provider to your [subscription](#register-sql-vm-resource-provider-with-subscription), and then register your SQL Server VM with the SQL [resource provider](#register-sql-server-vm-with-sql-resource-provider). ## Next steps diff --git a/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md b/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md index c47b1f46922fa..d18cc13da6b0c 100644 --- a/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md +++ b/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md @@ -480,7 +480,7 @@ Use the J2EE Config tool to check or update the JDBC URL. The the J2EE Config to 1. Sign in to primary application server of J2EE instance and execute:
sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
2. In the left frame, choose security store. -2. In the right frame, choose the key jdbc/pool/<SID>/url. +2. In the right frame, choose the key jdbc/pool/\<SID\>/url. 2. Change the host name in the JDBC URL to the virtual host name
jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
5. Choose Add. @@ -571,9 +571,9 @@ crm resource clear msl_Db2_db2ptr_PTR -- crm resource migrate - creates location constraints and can cause issues with takeover -- crm resource clear - clears location constraints -- crm resource cleanup - clears all errors of the resource +- crm resource migrate \ \ - creates location constraints and can cause issues with takeover +- crm resource clear \ - clears location constraints +- crm resource cleanup \ - clears all errors of the resource ### Test the fencing agent diff --git a/articles/virtual-network/virtual-network-troubleshoot-nva.md b/articles/virtual-network/virtual-network-troubleshoot-nva.md index d6d36924a466a..d31a7164e1644 100644 --- a/articles/virtual-network/virtual-network-troubleshoot-nva.md +++ b/articles/virtual-network/virtual-network-troubleshoot-nva.md @@ -70,12 +70,14 @@ Use PowerShell 3. Check the **EnableIPForwarding** property. 4. If IP forwarding is not enabled, run the following commands to enable it: + ```powershell $nic2 = Get-AzNetworkInterface -ResourceGroupName -Name $nic2.EnableIPForwarding = 1 Set-AzNetworkInterface -NetworkInterface $nic2 Execute: $nic2 #and check for an expected output: EnableIPForwarding : True NetworkSecurityGroup : null + ``` **Check for NSG when using Standard SKU Pubilc IP** When using a Standard SKU and Public IPs, there must be an NSG created and an explicit rule to allow the traffic to the NVA. diff --git a/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md b/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md index 686c8a2a50b57..688c7619af3db 100644 --- a/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md +++ b/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md @@ -98,8 +98,10 @@ Check for and remove user-defined routing (UDR) or Network Security Groups (NSGs 2. Click through the certificate warning. 3. If you receive a response, the VPN gateway is considered healthy. If you don't receive a response, the gateway might not be healthy or an NSG on the gateway subnet is causing the problem. The following text is a sample response: - <?xml version="1.0"?> - Primary Instance: GatewayTenantWorker_IN_1 GatewayTenantVersion: 14.7.24.6 + Primary Instance: GatewayTenantWorker_IN_1 GatewayTenantVersion: 14.7.24.6 + ``` ### Step 8. Check whether the on-premises VPN device has the perfect forward secrecy feature enabled diff --git a/includes/expressroute-global-reach-faq-include.md b/includes/expressroute-global-reach-faq-include.md index 6be4d88af07ed..1b8d22a005cdb 100644 --- a/includes/expressroute-global-reach-faq-include.md +++ b/includes/expressroute-global-reach-faq-include.md @@ -23,7 +23,7 @@ If your ExpressRoute circuits are in the same geopolitical region, you don't nee ### How will I be charged for ExpressRoute Global Reach? -ExpressRoute enables connectivity from your on-premises network to Microsoft cloud services. ExpressRoute Global Reach enables connectivity between your own on-premises networks via your existing ExpressRoute circuits, leveraging Microsoft's global network. ExpressRoute Global Reach is billed separately from the existing ExpressRoute service. There is an Add-on fee for enabling this feature on each ExpressRoute circuit. Traffic between your on-premises networks enabled by ExpressRoute Global Reach will be billed for an egress rate at the source and for an ingress rate at the destination. The rates are based on the zone at which the circuits are located. 
See +ExpressRoute enables connectivity from your on-premises network to Microsoft cloud services. ExpressRoute Global Reach enables connectivity between your own on-premises networks via your existing ExpressRoute circuits, leveraging Microsoft's global network. ExpressRoute Global Reach is billed separately from the existing ExpressRoute service. There is an Add-on fee for enabling this feature on each ExpressRoute circuit. Traffic between your on-premises networks enabled by ExpressRoute Global Reach will be billed for an egress rate at the source and for an ingress rate at the destination. The rates are based on the zone at which the circuits are located. ### Where is ExpressRoute Global Reach supported? diff --git a/markdown templates/virtual-machines-ps-template-compare-sm-arm-task.md b/markdown templates/virtual-machines-ps-template-compare-sm-arm-task.md index 64eb5d9761997..c59d8b44c2757 100644 --- a/markdown templates/virtual-machines-ps-template-compare-sm-arm-task.md +++ b/markdown templates/virtual-machines-ps-template-compare-sm-arm-task.md @@ -36,7 +36,7 @@ Then, use the following syntax to add a reference to the image in your article: These command examples use the following variables: -$FriendlyName"" +$FriendlyName"\"