Replies: 1 comment
When using text/x.enum, the model performs constrained decoding: it can only choose from a limited set of predefined options. The probability distribution is therefore restricted to those options, and the top choice receives an artificially high confidence (often ~99.999%). This differs from the earlier examples because newer model versions produce much sharper probability distributions under constraints, making logprobs less useful as a measure of true uncertainty. Avoiding text/x.enum and using free-form outputs should give you a more informative logprobs distribution.
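To make the effect concrete, here is a minimal sketch (with hypothetical logprob values, not real API output) of how logprobs convert to probabilities, and why constrained decoding concentrates nearly all probability mass on the surviving top candidate:

```python
import math

def logprobs_to_probs(logprobs):
    """Convert per-candidate logprobs to normalized probabilities."""
    weights = [math.exp(lp) for lp in logprobs]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical logprobs under free-form decoding: probability mass
# is spread across plausible continuations, so the top choice is
# meaningfully uncertain.
free_form = logprobs_to_probs([-0.4, -1.5, -2.2])

# Hypothetical logprobs under constrained (enum) decoding: the grammar
# prunes everything except the allowed option tokens, so the remaining
# top candidate absorbs almost all of the probability mass.
constrained = logprobs_to_probs([-1e-5, -12.0, -14.0])

print(f"free-form top choice:   {free_form[0]:.4f}")
print(f"constrained top choice: {constrained[0]:.6f}")
```

This is why the guide's technique reads as ~99.999% confident under text/x.enum: the renormalized distribution over a tiny option set is far sharper than the full-vocabulary distribution the guide was written against.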
Hi, I have tried implementing the logprobs classification filter system from the guide below, but I have found that in practice the logprobs output for the top choice is always ~99.999% confident when using text/x.enum (very different from the guide), including for the exact example with the exact code provided. What has changed here? Am I doing something wrong? I am finding it very hard to get anything practically useful out of the model.
https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/logprobs/intro_logprobs.ipynb#scrollTo=VyPmicX9RlZX