Hi team,

We are facing an issue where traces are being dropped during Lambda execution.

Error from CloudWatch:
{ "level": "error", "ts": 1692968962.3226757, "caller": "exporterhelper/queued_retry.go:296", "msg": "Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures.", "kind": "exporter", "data_type": "traces", "name": "otlp", "dropped_items": 4, "stacktrace": "go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send\n\tgo.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:296\ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2\n\tgo.opentelemetry.io/[email protected]/exporter/exporterhelper/traces.go:116\ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces\n\tgo.opentelemetry.io/collector/[email protected]/traces.go:36\ngo.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces\n\tgo.opentelemetry.io/[email protected]/service/internal/fanoutconsumer/traces.go:77\ngo.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export\n\tgo.opentelemetry.io/collector/receiver/[email protected]/internal/trace/otlp.go:54\ngo.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.rawTracesServer.Export\n\tgo.opentelemetry.io/collector/[email protected]/ptrace/ptraceotlp/grpc.go:72\ngo.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1\n\tgo.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:310\ngo.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1\n\tgo.opentelemetry.io/[email protected]/config/configgrpc/configgrpc.go:410\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1.1\n\tgoogle.golang.org/[email protected]/server.go:1162\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1\n\tgo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:349\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1.1\n\tgoogle.golang.org/[email protected]/server.go:1165\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1\n\tgoogle.golang.org/[email protected]/server.go:1167\ngo.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler\n\tgo.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/trace/v1/trace_service.pb.go:312\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoogle.golang.org/[email protected]/server.go:1340\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/[email protected]/server.go:1713\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\tgoogle.golang.org/[email protected]/server.go:965" }
Because of these drops, we are missing traces.
Code details: `<PackageReference Include="OpenTelemetry.Instrumentation.AWSLambda" Version="1.1.0-beta.2" />`
OTel collector config:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: "grpc endpoint"
    retry_on_failure:
      initial_interval: 1s
      max_interval: 5s
    sending_queue:
      queue_size: 2000
    timeout: 5s
  # enables output for traces to xray

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp]
```
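For reference, the error message above suggests enabling the sending queue. A minimal sketch of the `otlp` exporter with the queue and retries explicitly enabled might look like the following; the endpoint is still the placeholder from the config above, and `num_consumers` is only an illustrative value:

```yaml
exporters:
  otlp:
    endpoint: "grpc endpoint"   # placeholder, replace with the real collector endpoint
    timeout: 5s
    retry_on_failure:
      enabled: true             # retry transient export failures before giving up
      initial_interval: 1s
      max_interval: 5s
    sending_queue:
      enabled: true             # buffer spans so temporary failures do not immediately drop data
      num_consumers: 10         # illustrative value
      queue_size: 2000
```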
Let me know if you need more info.

Thanks.
@kadhamecha-conga It seems the problem is related to the gRPC endpoint that the collector exports to.
@serkan-ozal The gRPC endpoint is:

    otlp/1:
      endpoint: signals-grpc.demo.congacloud.app:443
      tls:
        insecure: true

Can you please help?
@kadhamecha-conga You are using port 443 but disabling secure communication with the `insecure: true` parameter. Can you try removing the `insecure` parameter?
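If it helps, a minimal sketch of what the corrected exporter block might look like with TLS left on; the endpoint is taken from the comment above, and the explicit `insecure: false` is shown only to make the change obvious (omitting the `insecure` setting has the same effect, since it defaults to false):

```yaml
exporters:
  otlp/1:
    endpoint: signals-grpc.demo.congacloud.app:443
    tls:
      insecure: false   # port 443 terminates TLS, so keep secure transport enabled
```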