Feature Description
The Google Generative AI inference API allows a user to set the responseLogprobs field to true in a request to gemini-1.5-flash, along with the optional logprobs field, which specifies the number of candidate logprobs to return at each step. When responseLogprobs is set to true, the response includes logprobsCandidates (see the response body schema here). The avgLogprobs field also appears to be returned regardless of whether responseLogprobs is sent, and would be useful to expose as well. I would like these fields to be exposed in some way (perhaps through provider metadata) in the request and response of the AI SDK functions that use the Google Generative AI provider.
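For context, here is a minimal sketch of a direct REST call showing where these fields sit in the underlying API. It assumes the v1beta generateContent endpoint and an API key in a GOOGLE_API_KEY environment variable; the prompt and the value of logprobs are illustrative:

```ts
// Sketch: request per-token logprobs directly from the Gemini REST API.
const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent' +
    `?key=${process.env.GOOGLE_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{ role: 'user', parts: [{ text: 'Say hello.' }] }],
      generationConfig: {
        responseLogprobs: true, // opt in to per-token logprobs
        logprobs: 3, // number of top candidates to return at each step
      },
    }),
  },
);

const data = await res.json();
// avgLogprobs is reported on each candidate; the per-step candidate
// logprobs appear alongside it in the candidate object.
console.log(data.candidates?.[0]?.avgLogprobs);
```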
Use Cases
Logprobs can be useful for filtering and selecting among candidate token responses, providing a means of assigning confidence to different LLM outputs; a sketch of such a confidence gate follows below.
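As a hypothetical illustration of the request: assuming the provider passed responseLogprobs and logprobs through provider options and surfaced avgLogprobs in provider metadata (none of these names are implemented in the SDK today; exposing them is exactly what this issue asks for), a confidence gate could look like this:

```ts
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const { text, providerMetadata } = await generateText({
  model: google('gemini-1.5-flash'),
  prompt: 'Classify the sentiment of: "The update broke my workflow."',
  // Hypothetical passthrough of the Gemini request fields.
  providerOptions: {
    google: { responseLogprobs: true, logprobs: 3 },
  },
});

// Hypothetical: avgLogprobs surfaced in the provider metadata.
const avgLogprobs = providerMetadata?.google?.avgLogprobs as
  | number
  | undefined;

// Accept the output only if the average token log probability clears a
// task-specific threshold (the value here is illustrative).
const CONFIDENCE_THRESHOLD = -0.3;
if (avgLogprobs !== undefined && avgLogprobs >= CONFIDENCE_THRESHOLD) {
  console.log('accepted:', text);
} else {
  console.log('low confidence, retry or fall back:', avgLogprobs);
}
```

Since avgLogprobs is the mean log probability across the generated tokens, values closer to 0 indicate higher model confidence, which makes it a convenient single number for this kind of gating.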
Additional context
No response