Should Content property ever come back null with FinishReason=ChatFilter with no exception thrown? #10345
-
I wrote this code to help reproduce the issue as quickly as possible, though it may take a handful of attempts (just press Enter when prompted). I set a breakpoint, inspect the returned message, and find a null Content property. The finish reason is ChatFilter, and there is a positive token count. By digging through the raw data I can even piece together what the message would have been, and interestingly, that data indicates that none of the filters were triggered. Do I need to write code to detect this condition, or is this a bug?
Replies: 3 comments 2 replies
-
Tagging @SergeyMenshykh
-
Hi @pamims, I think the reason you see output tokens consumed is that the LLM reasoned over the prompt and provided a result, but the result was filtered out by the Azure OpenAI Service content filtering system. You can identify which filter was triggered by using the GetResponseContentFilterResult extension method from the Azure.AI.OpenAI.Chat namespace. CC: @RogerBarreto
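A rough sketch of what that lookup might look like from Semantic Kernel, assuming the Azure OpenAI connector exposes the underlying OpenAI.Chat.ChatCompletion through ChatMessageContent.InnerContent and that the experimental AOAI001 content-filter extensions are available in your Azure.AI.OpenAI version (the deployment, endpoint, and key values are placeholders, and the category property names may differ slightly by SDK version):

```csharp
using System;
using Azure.AI.OpenAI.Chat;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OpenAI.Chat;

#pragma warning disable AOAI001 // GetResponseContentFilterResult is marked experimental

// Placeholder configuration values; substitute your own.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "<deployment>",
        endpoint: "https://<your-resource>.openai.azure.com/",
        apiKey: "<api-key>")
    .Build();

var chatService = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddUserMessage("prompt that may trip the output filter");

ChatMessageContent message = await chatService.GetChatMessageContentAsync(history);

// The Azure OpenAI connector surfaces the raw response object via InnerContent.
if (message.InnerContent is ChatCompletion completion)
{
    ResponseContentFilterResult filter = completion.GetResponseContentFilterResult();

    // Each category reports whether it filtered the completion and at what severity.
    Console.WriteLine($"Hate:     filtered={filter.Hate?.Filtered}, severity={filter.Hate?.Severity}");
    Console.WriteLine($"SelfHarm: filtered={filter.SelfHarm?.Filtered}, severity={filter.SelfHarm?.Severity}");
    Console.WriteLine($"Sexual:   filtered={filter.Sexual?.Filtered}, severity={filter.Sexual?.Severity}");
    Console.WriteLine($"Violence: filtered={filter.Violence?.Filtered}, severity={filter.Violence?.Severity}");
}
```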
-
Hi @pamims, The behavior you are experiencing is governed by the Azure OpenAI Service content filtering settings. According to the documentation (https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#scenario-details), an HTTP 400 error is returned when the prompt itself is classified as one that must be blocked at a category and severity level configured in the content filter. If, as far as I understand, your prompt is safe but the completion generated by the model violates the content filtering rules, you will instead receive an HTTP 200 status code with the finish reason "ContentFilter". To change this behavior, modify the content filtering settings, either by creating a new content filter that overrides the default one or by adjusting the settings of an existing filter.
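If adjusting the filter configuration is not an option, the condition described in the question (null Content, filter-related finish reason, no exception) can also be detected in application code. A minimal sketch, continuing the example above and assuming the connector reports the finish reason through ChatMessageContent.Metadata; the "FinishReason" key and the exact value (e.g. "ContentFilter" versus the "ChatFilter" spelling reported above) may vary by connector version:

```csharp
ChatMessageContent result = await chatService.GetChatMessageContentAsync(history);

// Assumed metadata key and value spelling; check your connector version.
bool removedByFilter =
    result.Content is null
    && result.Metadata is not null
    && result.Metadata.TryGetValue("FinishReason", out object? reason)
    && reason?.ToString()?.Contains("Filter", StringComparison.OrdinalIgnoreCase) == true;

if (removedByFilter)
{
    // The model generated output (hence the positive token count), but the
    // service removed it before returning the response; fall back gracefully.
    Console.WriteLine("Completion was removed by the Azure OpenAI content filter.");
}
```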