Merge branch 'master' into guacbot/translation-pipeline
michaelcretzman authored Jan 31, 2025
2 parents a36bc1c + b6c8b71 commit 6f1fd9a
Showing 209 changed files with 19,197 additions and 4,685 deletions.
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -106,7 +106,7 @@ content/en/logs/log_collection/android.md @Datadog/rum-mo
content/en/logs/log_collection/flutter.md @Datadog/rum-mobile @DataDog/documentation
content/en/logs/log_collection/unity.md @Datadog/rum-mobile @DataDog/documentation
content/en/logs/log_collection/ios.md @Datadog/rum-mobile @DataDog/documentation
- content/en/logs/log_collection/kotlin-multiplatform.md @Datadog/rum-mobile @DataDog/documentation
+ content/en/logs/log_collection/kotlin_multiplatform.md @Datadog/rum-mobile @DataDog/documentation

# Traces
content/en/tracing/trace_collection/dd_libraries/android.md @Datadog/rum-mobile @DataDog/documentation
@@ -128,7 +128,7 @@ content/en/real_user_monitoring/error_tracking/flutter.md @Datado
content/en/real_user_monitoring/error_tracking/mobile/unity.md @Datadog/rum-mobile @DataDog/documentation
content/en/real_user_monitoring/error_tracking/ios.md @Datadog/rum-mobile @DataDog/documentation
content/en/real_user_monitoring/error_tracking/reactnative.md @Datadog/rum-mobile @DataDog/documentation
- content/en/real_user_monitoring/error_tracking/kotlin-multiplatform.md @Datadog/rum-mobile @DataDog/documentation
+ content/en/real_user_monitoring/error_tracking/kotlin_multiplatform.md @Datadog/rum-mobile @DataDog/documentation

# Browser SDK
content/en/real_user_monitoring/browser/ @Datadog/rum-browser @DataDog/documentation
296 changes: 198 additions & 98 deletions config/_default/menus/main.en.yaml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion config/_default/menus/main.es.yaml
@@ -5177,7 +5177,7 @@ menu:
- identifier: log_kotlin_multiplatform
name: Kotlin Multiplatform
parent: log_collection
- url: logs/log_collection/kotlin-multiplatform/
+ url: logs/log_collection/kotlin_multiplatform/
weight: 107
- identifier: log_collection_csharp
name: C#
2 changes: 1 addition & 1 deletion content/en/administrators_guide/plan.md
@@ -366,7 +366,7 @@ Create a detailed roll-out methodology in the [build][41] phase by focusing on t
[48]: https://www.datadoghq.com/blog/engineering/husky-deep-dive/
[49]: /real_user_monitoring/platform/connect_rum_and_traces/?tab=browserrum
[50]: /integrations/tcp_check/?tab=host#data-collected
- [51]: /tracing/guide/inferred-service-opt-in/?tab=java
+ [51]: /tracing/services/inferred_services
[52]: /integrations/amazon_web_services/
[53]: /integrations/google_cloud_platform/
[54]: /integrations/azure/
2 changes: 1 addition & 1 deletion content/en/error_tracking/frontend/_index.md
@@ -38,7 +38,7 @@ Error Tracking simplifies debugging by grouping thousands of similar errors into
{{< nextlink href="error_tracking/frontend/mobile/expo" >}}Expo{{< /nextlink >}}
{{< nextlink href="error_tracking/frontend/mobile/reactnative" >}}React Native{{< /nextlink >}}
{{< nextlink href="error_tracking/frontend/mobile/flutter" >}}Flutter{{< /nextlink >}}
- {{< nextlink href="error_tracking/frontend/mobile/kotlin-multiplatform" >}}Kotlin Multiplatform{{< /nextlink >}}
+ {{< nextlink href="error_tracking/frontend/mobile/kotlin_multiplatform" >}}Kotlin Multiplatform{{< /nextlink >}}
{{< nextlink href="error_tracking/frontend/logs" >}}Logs{{< /nextlink >}}
{{< /whatsnext >}}

2 changes: 1 addition & 1 deletion content/en/error_tracking/frontend/logs.md
@@ -123,7 +123,7 @@ If you have not set up the Datadog Kotlin Multiplatform Logs SDK yet, follow the
```

[1]: https://app.datadoghq.com/logs/onboarding/client
- [2]: /logs/log_collection/kotlin-multiplatform/#setup
+ [2]: /logs/log_collection/kotlin_multiplatform/#setup
[3]: https://github.com/Datadog/dd-sdk-kotlin-multiplatform

{{% /tab %}}
@@ -2,4 +2,4 @@
title: Kotlin Multiplatform Crash Reporting and Error Tracking
---

- {{< include-markdown "real_user_monitoring/error_tracking/mobile/kotlin-multiplatform" >}}
+ {{< include-markdown "real_user_monitoring/error_tracking/mobile/kotlin_multiplatform" >}}
2 changes: 1 addition & 1 deletion content/en/error_tracking/rum.md
@@ -11,5 +11,5 @@ title: Error Tracking for Web and Mobile Applications
{{< nextlink href="real_user_monitoring/error_tracking/flutter" >}}Flutter{{< /nextlink >}}
{{< nextlink href="real_user_monitoring/error_tracking/unity" >}}Unity{{< /nextlink >}}
{{< nextlink href="real_user_monitoring/error_tracking/roku" >}}Roku{{< /nextlink >}}
- {{< nextlink href="real_user_monitoring/error_tracking/kotlin-multiplatform" >}}Kotlin Multiplatform{{< /nextlink >}}
+ {{< nextlink href="real_user_monitoring/error_tracking/kotlin_multiplatform" >}}Kotlin Multiplatform{{< /nextlink >}}
{{< /whatsnext >}}
24 changes: 24 additions & 0 deletions content/en/integrations/guide/source-code-integration.md
@@ -581,6 +581,30 @@ If you're using the GitHub integration, click **Connect to preview** on error fr

[101]: https://app.datadoghq.com/security/appsec

{{% /tab %}}
{{% tab "Dynamic Instrumentation" %}}

You can see full source code files in [**Dynamic Instrumentation**][102] when creating or editing an instrumentation (dynamic log, metric, span, or span tags).

#### Create new instrumentation

1. Navigate to [**APM** > **Dynamic Instrumentation**][101].
2. Select **Create New Instrumentation** and choose a service to instrument.
3. Search for and select a source code filename or method.

#### View or edit instrumentation

1. Navigate to [**APM** > **Dynamic Instrumentation**][101].
2. Select an existing instrumentation from the list, then click **View Events**.
3. Select the instrumentation card to view its location in the source code.

{{< img src="integrations/guide/source_code_integration/dynamic-instrumentation-create-new.png" alt="Source Code File in Dynamic Instrumentation" style="width:100%;">}}

For more information, see the [Dynamic Instrumentation documentation][102].

[101]: https://app.datadoghq.com/dynamic-instrumentation/events
[102]: /dynamic_instrumentation/

{{% /tab %}}
{{< /tabs >}}

2 changes: 2 additions & 0 deletions content/en/monitors/configuration/_index.md
@@ -270,6 +270,8 @@ A `Multi Alert` monitor triggers individual notifications for each entity in a m

For example, when setting up a monitor to notify you if the P99 latency, aggregated by service, exceeds a certain threshold, you would receive a **separate** alert for each individual service whose P99 latency exceeded the alert threshold. This can be useful for identifying and addressing specific instances of system or application issues. It allows you to track problems on a more granular level.

##### Notification grouping

When monitoring a large group of entities, multi alerts can lead to noisy monitors. To mitigate this, customize which dimensions trigger alerts so you can focus on the alerts that matter most. For instance, suppose you are monitoring the average CPU usage of all your hosts. If you group your query by `service` and `host` but only want a single alert for each `service` that meets the threshold, remove the `host` attribute from your multi alert options to reduce the number of notifications that are sent.

{{< img src="/monitors/create/multi-alert-aggregated.png" alt="Diagram of how notifications are sent when set to specific dimensions in multi alerts" style="width:90%;">}}
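As an illustration, the difference comes down to the `by {}` clause in the monitor query (the metric name and threshold here are placeholders, not a recommended configuration):

```
# One alert per service-host pair:
avg(last_5m):avg:system.cpu.user{*} by {service,host} > 80

# One alert per service, however many of its hosts breach the threshold:
avg(last_5m):avg:system.cpu.user{*} by {service} > 80
```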
18 changes: 9 additions & 9 deletions content/en/monitors/types/cloud_cost.md
@@ -37,7 +37,7 @@ Cloud Cost monitors are evaluated with a 48 hour delayed evaluation window, beca

To create a Cloud Cost monitor in Datadog, use the main navigation: [**Monitors** --> **New Monitor** --> **Cloud Cost**][4].

You can also create Cloud Cost monitors from the [Cloud Cost Explorer][2]. Click **More...** next to the Options button and select **Create monitor**.

{{< img src="/monitors/monitor_types/cloud_cost/explorer.png" alt="Option to create a monitor from the Cloud Cost Explorer page" style="width:100%;" >}}

@@ -73,14 +73,14 @@ You can select from the following monitor types.

| Cost Type | Description | Usage Examples |
| --- | ----------- | ----------- |
- | Cost Anomalies | Detect anomalies by comparing current costs to historical data, using a defined lookback period. Incomplete days are excluded from analysis to ensure accuracy. Anomaly monitors require at least 4 months of cloud cost data to evaluate since historical data is required to train the algorithm. | Alert if 3 days from the past 30 days show significant cost anomalies compared to historical data. |
+ | Cost Anomalies | Detect anomalies by comparing current costs to historical data, using a defined lookback period. Incomplete days are excluded from analysis to ensure accuracy. Anomaly monitors require at least 1 month of cloud cost data to evaluate since historical data is required to train the algorithm. | Alert if 3 days from the past 30 days show significant cost anomalies compared to historical data. |

{{% /tab %}}
{{< /tabs >}}

## Specify which costs to track

Any cost type or metric reporting to Datadog is available for monitors. You can use custom metrics or observability metrics alongside a cost metric to monitor unit economics.

| Step | Required | Default | Example |
|-----------------------------------|----------|----------------------|---------------------|
@@ -89,35 +89,35 @@ Any cost type or metric reporting to Datadog is available for monitors. You can
| Group by | No | Everything | `aws_availability_zone` |
| Add observability metric | No | `system.cpu.user` | `aws.s3.all_requests` |

Use the editor to define the cost types or exports.

{{< img src="monitors/monitor_types/cloud_cost/ccm_metrics_source.png" alt="Cloud Cost and Metrics data source options for specifying which costs to track" style="width:100%;" >}}

For more information, see the [Cloud Cost Management documentation][1].

## Set alert conditions

{{< tabs >}}
{{% tab "Changes" %}}

If you are using the **Cost Changes** monitor type, you can trigger an alert when the cost `increases` or `decreases` more than the defined threshold. The threshold can be set to either a **Percentage Change** or a **Dollar Amount**.

If you are using **Percentage Change**, you can filter out changes that fall below a certain dollar threshold. For example, the monitor alerts on a cost change above 5% only when the change is also above $500.
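The combined condition can be sketched as follows (a simplified illustration; the function name and default values are not part of Datadog's API):

```python
def should_alert(previous_cost, current_cost,
                 pct_threshold=5.0, min_dollar_change=500.0):
    """Alert only when both the percentage change and the absolute
    dollar change exceed their thresholds."""
    delta = current_cost - previous_cost
    if previous_cost == 0:
        return abs(delta) > min_dollar_change
    pct_change = abs(delta) / previous_cost * 100
    return pct_change > pct_threshold and abs(delta) > min_dollar_change
```

For example, a change from $10,000 to $10,600 is both a 6% and a $600 change, so it alerts; a $450 change does not, even at a higher percentage.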

{{% /tab %}}
{{% tab "Threshold" %}}

If you are using the **Cost Threshold** monitor type, you can trigger an alert when the cloud cost is `above`, `below`, `above or equal`, or `below or equal to` a threshold.

{{% /tab %}}
{{% tab "Forecast" %}}

If you are using the **Cost Forecast** monitor type, you can trigger an alert when the cloud cost is `above`, `below`, `above or equal`, `below or equal to`, `equal to`, or `not equal to` a threshold.

{{% /tab %}}
{{% tab "Anomalies" %}}

If you are using the **Cost Anomalies** monitor type, you can trigger an alert if the observed cost deviates from historical data by being `above`, `below`, or `above or below` a threshold for any provider and service.

The `agile` [anomaly algorithm][101] is used with two bounds and monthly seasonality.

61 changes: 60 additions & 1 deletion content/en/observability_pipelines/destinations/_index.md
@@ -29,12 +29,70 @@ Select and set up your destinations when you [set up a pipeline][1]. This is ste
{{< nextlink href="observability_pipelines/destinations/google_chronicle" >}}Google Chronicle{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/google_cloud_storage" >}}Google Cloud Storage{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/new_relic" >}}New Relic{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/microsoft_sentinel" >}}Microsoft Sentinel{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/opensearch" >}}OpenSearch{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/syslog" >}}rsyslog or syslog-ng{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/sentinelone" >}}SentinelOne{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/splunk_hec" >}}Splunk HTTP Event Collector (HEC){{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/sumo_logic_hosted_collector" >}}Sumo Logic Hosted Collector{{< /nextlink >}}
{{< /whatsnext >}}

## Template syntax

Logs are often stored in separate indexes based on log attributes, such as the service or environment the logs come from. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.

When the Observability Pipelines Worker cannot resolve the field with the template syntax, the Worker defaults to a specified behavior for that destination. For example, if you are using the template `{{application_id}}` for the Amazon S3 destination's **Prefix** field, but there isn't an `application_id` field in the log, the Worker creates a folder called `OP_UNRESOLVED_TEMPLATE_LOGS/` and publishes the logs there.

The following table lists the destinations and fields that support template syntax, and what happens when the Worker cannot resolve the field:

| Destination | Fields that support template syntax | Behavior when the field cannot be resolved |
| ----------------- | -------------------------------------| -----------------------------------------------------------------------------------------------|
| Amazon OpenSearch | Index | The Worker creates an index named `datadog-op` and sends the logs there. |
| Amazon S3 | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| Azure Blob | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| Elasticsearch | Source type | The Worker creates an index named `datadog-op` and sends the logs there. |
| Google Chronicle | Log type | The Worker defaults to the `vector_dev` log type. |
| Google Cloud Storage | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| OpenSearch | Index | The Worker creates an index named `datadog-op` and sends the logs there. |
| Splunk HEC | Index<br>Source type | The Worker sends the logs to the default index configured in Splunk. |

#### Example

If you want to route logs based on the log's application ID field (for example, `application_id`) to the Amazon S3 destination, use the event fields syntax in the **Prefix to apply to all object keys** field.

{{< img src="observability_pipelines/amazon_s3_prefix.png" alt="The Amazon S3 destination showing the prefix field using the event fields syntax /application_id={{ application_id }}/" style="width:40%;" >}}

### Syntax

#### Event fields

Use `{{ <field_name> }}` to access individual log event fields. For example:

```
{{ application_id }}
```

#### Strftime specifiers

Use [strftime specifiers][3] for the date and time. For example:

```
year=%Y/month=%m/day=%d
```

#### Escape characters

Prefix a character with `\` to escape the character. This example escapes the event field syntax:

```
\{{ field_name }}
```

This example escapes the strftime specifiers:

```
year=\%Y/month=\%m/day=\%d/
```
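Taken together, the resolution rules above can be sketched in Python. This is a simplified illustration of the documented behavior, not the Worker's actual implementation; the function and variable names are invented for this example:

```python
import re
from datetime import datetime, timezone

# Fallback folder used when a template field cannot be resolved
# (per the Amazon S3 / Azure Blob / Google Cloud Storage rows above).
UNRESOLVED_PREFIX = "OP_UNRESOLVED_TEMPLATE_LOGS/"

def resolve_template(template, event, now=None):
    """Resolve {{ field }} placeholders and strftime specifiers.

    Unescaped {{ field }} is replaced with the event's value, unescaped
    %X specifiers are expanded against `now`, and a backslash escapes
    either form so it comes out literally.
    """
    unresolved = False

    def field_value(match):
        nonlocal unresolved
        value = event.get(match.group(1))
        if value is None:
            unresolved = True
            return ""
        return str(value)

    out = re.sub(r"(?<!\\)\{\{\s*(\w+)\s*\}\}", field_value, template)
    if unresolved:
        return UNRESOLVED_PREFIX

    now = now or datetime.now(timezone.utc)
    out = re.sub(r"(?<!\\)%([A-Za-z])",
                 lambda m: now.strftime("%" + m.group(1)), out)
    # Drop the escape character so escaped forms appear literally.
    return out.replace("\\{{", "{{").replace("\\%", "%")
```

Under these rules, `/application_id={{ application_id }}/` resolves normally when the event carries an `application_id` field and falls back to the unresolved folder when it does not.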

## Event batching

@@ -57,4 +57,5 @@ If the destination receives 3 events within 2 seconds, it flushes a batch with 2
{{% observability_pipelines/destination_batching %}}
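The flush conditions can be sketched as follows. This is a simplified, single-threaded illustration of the described behavior, not the Worker's implementation; a batch flushes on whichever limit is reached first:

```python
import time

class Batcher:
    """Flush a batch when max_events, max_bytes, or timeout_s is reached."""

    def __init__(self, max_events=None, max_bytes=None, timeout_s=1.0,
                 clock=time.monotonic):
        self.max_events, self.max_bytes, self.timeout_s = max_events, max_bytes, timeout_s
        self.clock = clock
        self.events, self.size, self.started = [], 0, None
        self.flushed = []  # stands in for sending batches to the destination

    def add(self, event: bytes):
        if self.started is None:
            self.started = self.clock()
        self.events.append(event)
        self.size += len(event)
        if (self.max_events and len(self.events) >= self.max_events) or \
           (self.max_bytes and self.size >= self.max_bytes):
            self.flush()

    def tick(self):
        # Called periodically; flushes a partial batch once the timeout elapses.
        if self.started is not None and self.clock() - self.started >= self.timeout_s:
            self.flush()

    def flush(self):
        if self.events:
            self.flushed.append(self.events)
        self.events, self.size, self.started = [], 0, None
```

With `max_events=2` and a 2-second timeout, three events arriving at once produce a full batch of two followed, after the timeout elapses, by a partial batch of one.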

[1]: /observability_pipelines/set_up_pipelines/
[2]: https://app.datadoghq.com/observability-pipelines
[3]: https://docs.rs/chrono/0.4.19/chrono/format/strftime/index.html#specifiers
@@ -0,0 +1,47 @@
---
title: Amazon Security Lake Destination
disable_toc: false
---

Use Observability Pipelines' Amazon Security Lake destination to send logs to Amazon Security Lake.

## Prerequisites

You need to do the following before setting up the Amazon Security Lake destination:

{{% observability_pipelines/prerequisites/amazon_security_lake %}}

## Setup

Set up the Amazon Security Lake destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

### Set up the destination

{{% observability_pipelines/destination_settings/amazon_security_lake %}}

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/amazon_security_lake %}}

## AWS Authentication

{{% observability_pipelines/aws_authentication/amazon_security_lake/intro %}}

{{% observability_pipelines/aws_authentication/instructions %}}

### Permissions

{{% observability_pipelines/aws_authentication/amazon_security_lake/permissions %}}

## How the destination works

### Event batching

A batch of events is flushed when one of these parameters is met. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| TKTK | TKTK | TKTK |

[1]: https://app.datadoghq.com/observability-pipelines
[2]: /observability_pipelines/destinations/#event-batching
@@ -0,0 +1,31 @@
---
title: Microsoft Sentinel Destination
disable_toc: false
---

Use Observability Pipelines' Microsoft Sentinel destination to send logs to Microsoft Sentinel.

## Setup

Set up the Microsoft Sentinel destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

### Set up the destination

{{% observability_pipelines/destination_settings/microsoft_sentinel %}}

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/microsoft_sentinel %}}

## How the destination works

### Event batching

A batch of events is flushed when one of these parameters is met. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| None | 10,000,000 | 1 |

[1]: https://app.datadoghq.com/observability-pipelines
[2]: /observability_pipelines/destinations/#event-batching
