Clean up some of the linting warnings and errors (#2155)
* Clean up some of the linting warnings and errors
* Additional linting warning and error cleanup
* More work on removing linting errors
* More linting cleanup
* Even more linting warning cleanup
* Fix links to components
* Fix link syntax in topic
* Correct reference to AWS X-Ray
* Add missing link in collect topic
* Fix up some redirected links and minor syntax fixes
* Fix typo in file name
* Apply suggestions from code review
Co-authored-by: Beverly Buchanan <[email protected]>
---------
Co-authored-by: Beverly Buchanan <[email protected]>
docs/sources/collect/choose-component.md (6 additions, 7 deletions)
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
 ## Metrics for infrastructure
 
 Use `prometheus.*` components to collect infrastructure metrics.
-This will give you the best experience with [Grafana Infrastructure Observability][].
+This gives you the best experience with [Grafana Infrastructure Observability][].
 
-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
 
 You can also scrape any Prometheus endpoint using `prometheus.scrape`.
 Use `discovery.*` components to find targets for `prometheus.scrape`.
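
For context, a minimal sketch of how these infrastructure components fit together might look like the following. This isn't part of the diff, and the remote-write URL is a placeholder assumption.

```alloy
// Expose Linux host metrics as scrape targets.
prometheus.exporter.unix "default" { }

// Scrape the exporter and forward the samples.
prometheus.scrape "linux_host" {
  targets    = prometheus.exporter.unix.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Write the metrics to a Prometheus-compatible endpoint (placeholder URL).
prometheus.remote_write "default" {
  endpoint {
    url = "<PROMETHEUS_REMOTE_WRITE_URL>"
  }
}
```
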
@@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`.
 ## Metrics for applications
 
 Use `otelcol.receiver.*` components to collect application metrics.
-This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
 
 For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
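
As a rough illustration, not taken from this page, an OTLP receiver feeding an OTLP exporter might look like the sketch below. The listen address and endpoint are assumptions.

```alloy
// Receive OTLP metrics from instrumented applications.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    metrics = [otelcol.exporter.otlp.default.input]
  }
}

// Forward the metrics to an OTLP-compatible backend (placeholder endpoint).
otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
  }
}
```
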
@@ -48,12 +47,12 @@ with logs collected by `loki.*` components.
 
 For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
 On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
-which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
+which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.
 
 ## Logs from applications
 
 Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to
+This gathers the application logs in an OpenTelemetry-native way, making it easier to
 correlate the logs with OpenTelemetry metrics and traces coming from the application.
 All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.
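
To make the label discussion concrete, here is a hedged sketch (not from this diff) of collecting Kubernetes pod logs through the `loki.*` path, where Prometheus-style labels such as `namespace` apply. The push URL is a placeholder.

```alloy
// Discover pods to collect logs from.
discovery.kubernetes "pods" {
  role = "pod"
}

// Tail pod logs and forward them to Loki.
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}

// Write the logs to a Loki-compatible endpoint (placeholder URL).
loki.write "default" {
  endpoint {
    url = "<LOKI_PUSH_URL>"
  }
}
```
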
@@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri
 
 Use `otelcol.receiver.*` components to collect traces.
 
-If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
+If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
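
For orientation only, a sketch of `beyla.ebpf` generating traces for an uninstrumented service could look like this. The `open_port` argument and the OTLP endpoint are assumptions, so check the component reference before relying on them.

```alloy
// Auto-instrument a service listening on port 8080 (assumed argument name).
beyla.ebpf "autoinstrument" {
  open_port = "8080"

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}

// Send the generated traces to an OTLP endpoint (placeholder URL).
otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
  }
}
```
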
docs/sources/collect/datadog-traces-metrics.md (20 additions, 22 deletions)
@@ -20,9 +20,9 @@ This topic describes how to:
 
 ## Before you begin
 
-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
-  Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
+* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+* Identify where to write the collected telemetry.
+  Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
   Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
 * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.
 
 ## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver
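
For readers who want to see how basic authentication is typically wired into an exporter, here is a minimal sketch using `otelcol.auth.basic`. It isn't the exact snippet from this page, and the placeholder values are assumptions.

```alloy
// Credentials for the OTLP endpoint (placeholders).
otelcol.auth.basic "credentials" {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}

// Exporter that authenticates with the credentials above.
otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
    auth     = otelcol.auth.basic.credentials.handler
  }
}
```
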
@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 ```alloy
 otelcol.processor.deltatocumulative "default" {
-  max_stale = “<MAX_STALE>”
+  max_stale = "<MAX_STALE>"
   max_streams = <MAX_STREAMS>
   output {
     metrics = [otelcol.processor.batch.default.input]
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
 
 1. Add the following `otelcol.receiver.datadog` component to your configuration file.
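
A hedged sketch of how `otelcol.receiver.datadog` might slot into the pipeline described above follows. The listen address is an assumption rather than a value from this page.

```alloy
// Accept metrics and traces sent by the Datadog Agent (assumed listen address).
otelcol.receiver.datadog "default" {
  endpoint = "0.0.0.0:8126"

  output {
    metrics = [otelcol.processor.deltatocumulative.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}
```
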
 description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint
 menuTitle: Collect ECS or Fargate OpenTelemetry data
 title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data
@@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle
 
 1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store).
 1. [Create an ECS task definition](#create-an-ecs-task-definition).
-1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar).
+1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar)
 
 ## Before you begin
@@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 1. Choose *Create parameter*.
 1. Create a parameter with the following values:
 
-   * `Name`: otel-collector-config
-   * `Tier`: Standard
-   * `Type`: String
-   * `Data type`: Text
-   * `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+   * Name: `otel-collector-config`
+   * Tier: `Standard`
+   * Type: `String`
+   * Data type: `Text`
+   * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
 
 ### Run your task
@@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
 
 1. Download the [ECS Fargate task definition template][template] from GitHub.
 1. Edit the task definition template and add the following parameters.
-   * `{{region}}`: The region the data is sent to.
+   * `{{region}}`: The region to send the data to.
    * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
   * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
   * `command` - Assign a value to the command variable to select the path to the configuration file.
     The AWS Collector comes with two configurations. Select one of them based on your environment:
-     * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces.
-     * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics.
+     * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+     * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.
 
-## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
+## Run Alloy directly in your instance, or as a Kubernetes sidecar
 
 SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.