
jeffreyscarpenter
Contributor

🐛 Problem Description

The content safety actions were failing with the following error:

WARNING: Error while execution 'content_safety_check_input' with parameters '{...}': 
Could not find prompt for task content_safety_check_input $model=content_safety and model nimchat/nvidia/llama-3_3-nemotron-super-49b-v1_5

🔍 Root Cause Analysis

The issue occurred in the get_task_model function in nemoguardrails/llm/prompts.py. When content safety actions were called with task names like content_safety_check_input $model=content_safety, the system was:

  1. Incorrectly using the main model (nvidia/llama-3_3-nemotron-super-49b-v1_5) instead of the specified content safety model
  2. Failing to parse the $model=content_safety part of the task name
  3. Defaulting to the main model for prompt resolution, causing the lookup to fail
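
The failure mode can be sketched as follows. This is a simplified illustration, not the actual data structures in nemoguardrails/llm/prompts.py; modeling the prompt registry as a dict keyed by (task name, model type) is an assumption made for this sketch:

```python
# Simplified illustration of the failing lookup; the real prompt
# resolution in nemoguardrails/llm/prompts.py is more involved.
# Keys here are (task_name, model_type) pairs (an assumption).
prompts = {
    ("content_safety_check_input $model=content_safety", "content_safety"):
        "Check the following user input for safety: {{ user_input }}",
}

task = "content_safety_check_input $model=content_safety"

# Buggy behavior: the main model is used for prompt resolution, so the
# key does not exist and the "Could not find prompt" warning is raised.
main_model = "nvidia/llama-3_3-nemotron-super-49b-v1_5"
print(prompts.get((task, main_model)))        # no matching prompt

# Fixed behavior: the model type parsed from the task name is used.
print(prompts.get((task, "content_safety")))  # prompt found
```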

🛠️ Solution

Updated the get_task_model function to properly handle task names with model specifications:

  • Added parsing for task names containing $model= specifications
  • Extracts model type from task names (e.g., content_safety from content_safety_check_input $model=content_safety)
  • Prioritizes specified models over default fallbacks
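
A minimal sketch of the parsing step described above. The helper name and exact splitting logic are illustrative; the actual fix lives inside get_task_model and may differ in detail:

```python
from typing import Optional

# Illustrative helper; the real fix is inside get_task_model in
# nemoguardrails/llm/prompts.py and may be implemented differently.
def extract_model_type(task_name: str) -> Optional[str]:
    """Return the model type from a task name such as
    'content_safety_check_input $model=content_safety', or None
    when no '$model=' specification is present."""
    marker = "$model="
    if marker not in task_name:
        return None
    # Take everything after '$model=' up to the next whitespace.
    return task_name.split(marker, 1)[1].split()[0]

print(extract_model_type("content_safety_check_input $model=content_safety"))
print(extract_model_type("generate_user_intent"))
```

Task names without a `$model=` part return None, which preserves the existing default-model fallback.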

✅ Changes Made

  1. Enhanced Model Resolution Logic - Added parsing for $model= specifications
  2. Comprehensive Test Coverage - Added new test case with positive and negative scenarios
  3. Backward Compatibility - No breaking changes to existing functionality

🧪 Testing

  • ✅ All existing tests continue to pass
  • ✅ New test case validates the fix works correctly
  • ✅ Content safety actions now work as expected
  • ✅ Error messages no longer appear
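
The positive and negative scenarios mentioned above might look roughly like this. This is a self-contained pytest-style sketch with the parsing logic inlined; it is not the PR's actual test code:

```python
# Self-contained pytest-style sketch of positive and negative scenarios;
# the inlined parse_model_spec helper is an assumption, not the PR's code.
def parse_model_spec(task_name: str):
    marker = "$model="
    if marker not in task_name:
        return None
    return task_name.split(marker, 1)[1].split()[0]

def test_task_with_model_spec_resolves_to_specified_model():
    task = "content_safety_check_input $model=content_safety"
    assert parse_model_spec(task) == "content_safety"

def test_task_without_model_spec_falls_back_to_default():
    # Backward compatibility: plain task names keep the old behavior.
    assert parse_model_spec("generate_bot_message") is None

test_task_with_model_spec_resolves_to_specified_model()
test_task_without_model_spec_falls_back_to_default()
```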

🔧 Configuration Example

This fix enables proper functioning of configurations like:

models:
  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety

rails:
  input:
    flows:
      - content safety check input $model=content_safety

🚀 Impact

Before: Content safety actions failed with confusing error messages
After: Content safety actions work as expected with proper model resolution

This is a non-breaking bug fix that improves the user experience for content safety workflows.

- Fix get_task_model function to properly parse task names with $model= specifications
- Extract model type from task names like 'content_safety_check_input $model=content_safety'
- Use extracted model type to find correct model configuration instead of defaulting to main model
- Add comprehensive test coverage for the new functionality
- Maintain backward compatibility for existing task names

Fixes issue where content safety actions would fail with error:
'Could not find prompt for task content_safety_check_input $model=content_safety
and model [main_model_name]'

This fix ensures that when a task specifies a model type via $model=, the system
correctly uses that model type for prompt resolution rather than falling back
to the main model.
@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 71.63%. Comparing base (960e0b8) to head (03d2bfb).

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1368      +/-   ##
===========================================
+ Coverage    71.62%   71.63%   +0.01%     
===========================================
  Files          171      171              
  Lines        17020    17026       +6     
===========================================
+ Hits         12191    12197       +6     
  Misses        4829     4829              
Flag Coverage Δ
python 71.63% <100.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
nemoguardrails/llm/prompts.py 92.30% <100.00%> (+0.54%) ⬆️

@Pouyanpi
Collaborator

Pouyanpi commented Sep 3, 2025

Thank you @jeffreyscarpenter!

I cannot reproduce this issue; would you please share a minimal config so I can reproduce it? By the way, isn't this related to the microservice?

I don't see any bug related to this PR in the current codebase.

The correct prompt selection path is in nemoguardrails/library/content_safety/actions.py:63-70

  1. Task construction with model parameter: task = f"content_safety_check_input $model={model_name}"
  2. Prompt selection via LLMTaskManager: llm_task_manager.render_task_prompt(task=task, context={...})
  3. Internal flow: render_task_prompt() → get_prompt(config, task) → _get_prompt(task_name, model, mode, prompts)

The task string (including $model={model_name}) is used directly for prompt matching in nemoguardrails/llm/prompts.py. This is how prompt selection actually works for all guardrails in the library, through parameterized task construction and the LLMTaskManager interface.
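
The three-step flow above can be sketched end to end. The prompt entry below is illustrative rather than a shipped prompt, and the matching is a simplification of what _get_prompt does:

```python
# Simplified, self-contained sketch of the prompt-selection flow;
# the prompt entry below is illustrative, not a shipped prompt.
prompts = [
    {"task": "content_safety_check_input $model=content_safety",
     "content": "Evaluate the user input for policy violations."},
]

# 1. Task construction with the model parameter
model_name = "content_safety"
task = f"content_safety_check_input $model={model_name}"

# 2./3. _get_prompt-style matching on the full task string, which is why
# the "$model=..." suffix participates directly in prompt selection.
selected = next(p for p in prompts if p["task"] == task)
print(selected["content"])
```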
