Token Usage Always Zero in Accumulation Stream Example from OpenAI Go SDK #149
Comments
Same problem, and I wrote a test for this:

```go
package main_test

import (
	"context"
	"encoding/json"
	"fmt"
	"testing"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func Test_OpenAI(t *testing.T) {
	secret := "api-secret"
	client := openai.NewClient(
		option.WithAPIKey(secret),
	)

	var message []openai.ChatCompletionMessageParamUnion
	message = append(message, openai.UserMessage("hello"))

	ctx := context.Background()
	params := openai.ChatCompletionNewParams{
		Messages: openai.F(message),
		Model:    openai.F(openai.ChatModelGPT3_5Turbo),
		StreamOptions: openai.F(openai.ChatCompletionStreamOptionsParam{
			IncludeUsage: openai.F(true),
		}),
	}

	stream := client.Chat.Completions.NewStreaming(ctx, params)

	// optionally, an accumulator helper can be used
	acc := &openai.ChatCompletionAccumulator{}
	for stream.Next() {
		chunk := stream.Current()
		acc.AddChunk(chunk)

		if _, ok := acc.JustFinishedContent(); ok {
			break
		}
		// if using tool calls
		if _, ok := acc.JustFinishedToolCall(); ok {
			acc.Usage = chunk.Usage
			break
		}
		if _, ok := acc.JustFinishedRefusal(); ok {
			acc.Usage = chunk.Usage
			break
		}
		// it's best to use chunks after handling JustFinished events
		if len(chunk.Choices) > 0 {
			// ... write to SSE response
		}
	}
	if err := stream.Err(); err != nil {
		t.Logf("stream error: %v", err)
	}

	// marshal the accumulated completion for inspection
	b, _ := json.MarshalIndent(acc, "", " ")
	fmt.Println(string(b))
}
```

with the response:

```json
{
"id": "xxx",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": {
"content": null,
"refusal": null
},
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
}
}
],
"created": 1736786016,
"model": "gpt-3.5-turbo-0125",
"object": "chat.completion.chunk",
"service_tier": "",
"system_fingerprint": "fp_808245b034",
"usage": {
"completion_tokens": 0,
"prompt_tokens": 0,
"total_tokens": 0,
"completion_tokens_details": {
"accepted_prediction_tokens": 0,
"audio_tokens": 0,
"reasoning_tokens": 0,
"rejected_prediction_tokens": 0
},
"prompt_tokens_details": {
"audio_tokens": 0,
"cached_tokens": 0
}
}
}
```
@jacobzim-stl this is marked as completed but I am still seeing this issue. Is there a workaround for this?
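One possible workaround, offered only as a sketch against the test above and not confirmed by the maintainers: with `IncludeUsage` enabled, the API streams one extra trailing chunk whose `choices` array is empty and whose `usage` field carries the totals, so breaking out of the loop at `JustFinishedContent` means that chunk is never consumed. A drop-in replacement for the loop (reusing `stream`, `acc`, and `t` from the test above) could look like this:

```go
// Keep draining the stream instead of breaking on JustFinished events so the
// final usage-only chunk (empty Choices, populated Usage) is consumed.
for stream.Next() {
	chunk := stream.Current()
	acc.AddChunk(chunk)

	if len(chunk.Choices) == 0 {
		// with StreamOptions.IncludeUsage set, this trailing chunk carries the totals
		acc.Usage = chunk.Usage
		continue
	}
	// ... write chunk.Choices[0].Delta.Content to the SSE response as before
}
if err := stream.Err(); err != nil {
	t.Fatalf("stream error: %v", err)
}
fmt.Printf("prompt=%d completion=%d total=%d\n",
	acc.Usage.PromptTokens, acc.Usage.CompletionTokens, acc.Usage.TotalTokens)
```

The trade-off is that the loop no longer exits early, but for chat completions the stream ends right after the usage chunk anyway.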
Hello OpenAI team,
I've been using the OpenAI Go SDK and specifically running the stream example provided at https://github.com/openai/openai-go/blob/main/examples/chat-completion-accumulating/main.go. However, I've encountered an issue where the token usage is always reported as zero, despite the example running correctly and generating outputs.

Steps to Reproduce:
1. Navigate to the `examples/chat-completion-accumulating` directory.
2. Run `go run main.go`.

Expected Behavior:
The token usage should accurately reflect the number of tokens used during the chat completions.
Actual Behavior:
The token usage is consistently reported as zero.
Output Details:
```
> Begin a very brief introduction of Greece, then incorporate the local weather of a few towns
```
Could you please look into this? It seems like either the token usage data is not being captured accurately, or there might be an issue with how it's being reported in the example.
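One detail that may be relevant (an assumption on my part, not something the example documents): in streaming mode the usage totals are only sent when `stream_options.include_usage` is requested, and they arrive in a final chunk with an empty `choices` array. A minimal sketch of the request side, reusing the example's `client` and `ctx`, with the model named here only as a placeholder:

```go
// Sketch of requesting usage in a streaming call (field-wrapper style used by
// this SDK version); client and ctx are assumed to exist as in the example.
params := openai.ChatCompletionNewParams{
	Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
		openai.UserMessage("Begin a very brief introduction of Greece, then incorporate the local weather of a few towns"),
	}),
	Model: openai.F(openai.ChatModelGPT4o),
	// Without this, streamed chunks report no usage and the accumulator stays at zero.
	StreamOptions: openai.F(openai.ChatCompletionStreamOptionsParam{
		IncludeUsage: openai.F(true),
	}),
}
stream := client.Chat.Completions.NewStreaming(ctx, params)
// After the stream is fully drained, the last chunk (empty Choices) carries Usage.
```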
Thank you for your assistance!