Commit c27c8ac

Clean up some of the linting warnings and errors (#2155)
* Clean up some of the linting warnings and errors
* Additional linting warning and error cleanup
* More work on removing linting errors
* More linting cleanup
* Even more linting warning cleanup
* Fix links to components
* Fix link syntax in topic
* Correct reference to AWS X-Ray
* Add missing link in collect topic
* Fix up some redirected links and minor syntax fixes
* Fix typo in file name
* Apply suggestions from code review

Co-authored-by: Beverly Buchanan <[email protected]>
1 parent: a319983


51 files changed: +546 additions, −568 deletions

docs/sources/collect/_index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,4 +8,4 @@ weight: 100
 
 # Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}
 
-{{< section >}}
+{{< section >}}
```

docs/sources/collect/choose-component.md

Lines changed: 6 additions & 7 deletions
```diff
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
 ## Metrics for infrastructure
 
 Use `prometheus.*` components to collect infrastructure metrics.
-This will give you the best experience with [Grafana Infrastructure Observability][].
+This gives you the best experience with [Grafana Infrastructure Observability][].
 
-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
 
 You can also scrape any Prometheus endpoint using `prometheus.scrape`.
 Use `discovery.*` components to find targets for `prometheus.scrape`.
@@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`.
 ## Metrics for applications
 
 Use `otelcol.receiver.*` components to collect application metrics.
-This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
 
 For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
 
@@ -48,12 +47,12 @@ with logs collected by `loki.*` components.
 
 For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
 On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
-which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
+which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.
 
 ## Logs from applications
 
 Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to
+This gathers the application logs in an OpenTelemetry-native way, making it easier to
 correlate the logs with OpenTelemetry metrics and traces coming from the application.
 All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.
 
@@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri
 
 Use `otelcol.receiver.*` components to collect traces.
 
-If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
+If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
 
 ## Profiles
 
```
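As a rough illustration of the infrastructure-metrics guidance in this topic, a minimal pipeline could combine the components it names. This is a sketch, not content from the diff: the remote-write URL is a placeholder, and the exact arguments should be checked against the `prometheus.*` component references.

```alloy
// Sketch only: collect Linux host metrics and forward them to a
// Prometheus-compatible endpoint. The URL below is a placeholder.
prometheus.exporter.unix "default" { }

prometheus.scrape "default" {
  targets    = prometheus.exporter.unix.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```

The same pattern extends to `prometheus.exporter.mongodb`, or to targets produced by a `discovery.*` component.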

docs/sources/collect/datadog-traces-metrics.md

Lines changed: 20 additions & 22 deletions
```diff
@@ -20,9 +20,9 @@ This topic describes how to:
 
 ## Before you begin
 
-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
-  Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
+* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+* Identify where to write the collected telemetry.
+  Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
   Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
 * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.
 
 ## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver
```

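Assembled from these replacements, the exporter configuration might look like the following sketch. It isn't taken verbatim from the topic: the wiring of `otelcol.auth.basic` into the `client` block follows the general Alloy component pattern, and every angle-bracket value is a placeholder.

```alloy
// Sketch only: OTLP exporter with basic authentication.
// <OTLP_ENDPOINT_URL>, <USERNAME>, and <PASSWORD> are placeholders.
otelcol.auth.basic "default" {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
    auth     = otelcol.auth.basic.default.handler
  }
}
```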
````diff
@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 ```alloy
 otelcol.processor.deltatocumulative "default" {
-  max_stale = <MAX_STALE>
+  max_stale = "<MAX_STALE>"
   max_streams = <MAX_STREAMS>
   output {
     metrics = [otelcol.processor.batch.default.input]
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
 
 1. Add the following `otelcol.receiver.datadog` component to your configuration file.
 
 ```alloy
 otelcol.receiver.datadog "default" {
-  endpoint = <HOST>:<PORT>
+  endpoint = "<HOST>:<PORT>"
   output {
     metrics = [otelcol.processor.deltatocumulative.default.input]
     traces = [otelcol.processor.batch.default.input]
@@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<HOST>`_: The host address where the receiver will listen.
-- _`<PORT>`_: The port where the receiver will listen.
+* _`<HOST>`_: The host address where the receiver listens.
+* _`<PORT>`_: The port where the receiver listens.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
 
````
````diff
@@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
 Replace the following:
 
-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.
 
 ## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver
 
@@ -139,19 +139,19 @@ We recommend this approach for current Datadog users who want to try using {{< p
 
 Replace the following:
 
-- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+* _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+* _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
 
-Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
+   Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
 You can do this by setting up your Datadog Agent in the following way:
 
 1. Replace the DD_URL in the configuration YAML:
 
 ```yaml
 dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
 ```
-Or by setting an environment variable:
 
+Or by setting an environment variable:
 
 ```bash
 DD_DD_URL='{"http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>": ["datadog-receiver"]}'
````
```diff
@@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit
 [Datadog]: https://www.datadoghq.com/
 [Datadog Agent]: https://docs.datadoghq.com/agent/
 [Prometheus]: https://prometheus.io
-[OTLP]: https://opentelemetry.io/docs/specs/otlp/
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[Components]: ../../get-started/components
+[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/
+[Components]: ../../get-started/components/
```

docs/sources/collect/ecs-openteletry-data.md renamed to docs/sources/collect/ecs-opentelemetry-data.md

Lines changed: 12 additions & 10 deletions
```diff
@@ -1,5 +1,7 @@
 ---
 canonical: https://grafana.com/docs/alloy/latest/collect/ecs-opentelemetry-data/
+alias:
+  - ./ecs-openteletry-data/ # /docs/alloy/latest/collect/ecs-openteletry-data/
 description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint
 menuTitle: Collect ECS or Fargate OpenTelemetry data
 title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data
@@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle
 
 1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store).
 1. [Create an ECS task definition](#create-an-ecs-task-definition).
-1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar).
+1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar)
 
 ## Before you begin
 
@@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 1. Choose *Create parameter*.
 1. Create a parameter with the following values:
 
-   * `Name`: otel-collector-config
-   * `Tier`: Standard
-   * `Type`: String
-   * `Data type`: Text
-   * `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+   * Name: `otel-collector-config`
+   * Tier: `Standard`
+   * Type: `String`
+   * Data type: `Text`
+   * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
 
 ### Run your task
 
@@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
 
 1. Download the [ECS Fargate task definition template][template] from GitHub.
 1. Edit the task definition template and add the following parameters.
-   * `{{region}}`: The region the data is sent to.
+   * `{{region}}`: The region to send the data to.
    * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
   * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
   * `command` - Assign a value to the command variable to select the path to the configuration file.
     The AWS Collector comes with two configurations. Select one of them based on your environment:
-   * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces.
-   * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics.
+   * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+   * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.
 
-## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
+## Run Alloy directly in your instance, or as a Kubernetes sidecar
 
 SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.
 
```
