I'm facing the following error logs:
time=2025-08-25T11:21:32.053+01:00 level=WARN msg="[non-fatal] Tracing: request failed" error="Post \"https://api.openai.com/v1/traces/ingest\": context canceled"
time=2025-08-25T11:21:33.121+01:00 level=WARN msg="[non-fatal] Tracing: request failed" error="Post \"https://api.openai.com/v1/traces/ingest\": context canceled"
time=2025-08-25T11:21:35.149+01:00 level=WARN msg="[non-fatal] Tracing: request failed" error="Post \"https://api.openai.com/v1/traces/ingest\": context canceled"
I start an agent as part of an HTTP request; the agent finishes and returns the response (using a stream). This still happens even when I pass a background context into agents.RunStreamedChan(context.Background(), agent, userRequest). So I'm wondering where the tracing client's context comes from and how I can fix this (I don't want to disable traces).
NIT: I guess we don't need to retry if the context was already cancelled on the first attempt.