Replies: 3 comments 1 reply
-
Why are you asking us about an OTEL error? I think you're asking the wrong people.
-
Sorry, 😂😂😂😂😂😂
-
After testing, deploying OAP version 9.7 works normally with no errors, but the issue appears with OAP version 10.0. Is this an OAP issue?
-
SkyWalking version 10
An error occurred while reporting metrics to SkyWalking using kube-state-metrics in Kubernetes. The pod/OTel agent log shows the following:
2024-05-30T12:27:07.315Z info kubernetes/kubernetes.go:327 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "discovery": "kubernetes", "config": "kubernetes-cadvisor"}
2024-05-30T12:27:07.315Z info kubernetes/kubernetes.go:327 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "discovery": "kubernetes", "config": "kube-state-metrics"}
2024-05-30T12:27:07.315Z info service/service.go:161 Everything is ready. Begin running and processing data.
2024-05-30T12:27:07.320Z info [email protected]/metrics_receiver.go:239 Starting discovery manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-05-30T12:27:07.320Z info [email protected]/metrics_receiver.go:280 Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-05-30T12:27:21.829Z info exporterhelper/queued_retry.go:423 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded", "interval": "5.774080124s"}
2024-05-30T12:27:27.314Z warn zapgrpc/zapgrpc.go:195 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "otel-collector.skywalking.svc.cluster.local:4317", ServerName: "otel-collector.skywalking.svc.cluster.local:4317", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.27.89.45:4317: i/o timeout" {"grpc_log": true}
2024-05-30T12:27:30.205Z error exporterhelper/queued_retry.go:391 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "Permanent error: rpc error: code = ResourceExhausted desc = grpc: received message after decompression larger than max (16242618 vs. 4194304)", "dropped_items": 12862}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:391
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
go.opentelemetry.io/collector/[email protected]/exporterhelper/metrics.go:125
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:195
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1
go.opentelemetry.io/collector/[email protected]/exporterhelper/internal/bounded_memory_queue.go:47
Here are the deployed files
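The dropped-data error above ("received message after decompression larger than max (16242618 vs. 4194304)") indicates that a single exported batch is roughly 16 MB after decompression, well over the 4 MiB default gRPC message limit on the receiving side. A minimal sketch of one possible workaround, assuming the sending collector's pipeline can be edited and that receiver/pipeline names match the deployed files (which are not shown here), is to cap batch sizes with the batch processor:

```yaml
# Hypothetical sketch, not the poster's actual deployed config.
# Keeps each OTLP export under the 4 MiB gRPC default by capping
# the number of metric data points per batch.
processors:
  batch:
    send_batch_size: 2000       # target data points per batch (assumed value)
    send_batch_max_size: 3000   # hard upper bound per batch (assumed value)
    timeout: 10s

exporters:
  otlp:
    # Endpoint taken from the log output above.
    endpoint: otel-collector.skywalking.svc.cluster.local:4317
    tls:
      insecure: true            # assumption: plaintext in-cluster traffic

service:
  pipelines:
    metrics:
      receivers: [prometheus]   # assumption: matches the existing receiver name
      processors: [batch]
      exporters: [otlp]
```

Alternatively, if the 4 MiB limit is enforced by a downstream OpenTelemetry Collector's OTLP receiver, its gRPC settings expose `max_recv_msg_size_mib` to raise that limit; whether OAP 10.0 changed its own gRPC receive limit relative to 9.7 would need to be confirmed in the SkyWalking configuration. The earlier `DeadlineExceeded` / `i/o timeout` lines are a separate connectivity problem between the agent and `otel-collector.skywalking.svc.cluster.local:4317`.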