
Conversation

@metavind

@metavind metavind commented Oct 7, 2025

Description

In the current vf-eval implementation, passing sampling parameters such as top_k and min_p to a vLLM backend is not straightforward: they have to be wrapped in a nested JSON payload, e.g. --sampling-args '{"top_p": 0.95, "extra_body": {"top_k": 20, "min_p": 0.05}}'

  • introduce sampling_utils to handle serialization of vLLM-specific sampling parameters
  • expand the vf-eval CLI with dedicated sampling flags (similar to GRPOTrainer), while preserving JSON overrides via --sampling-args
  • refactor GRPOTrainer to build sampling payloads using the new sampling utils
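As a rough illustration of the serialization the sampling_utils module could perform, the sketch below splits a flat set of sampling parameters into top-level OpenAI-compatible keys and vLLM-only keys nested under "extra_body". The function name, the exact key split, and the signature are assumptions for illustration, not the PR's actual code.

```python
# Hypothetical sketch; names and key sets are assumptions, not the real module.

# Sampling keys the OpenAI chat-completions API accepts at the top level.
OPENAI_KEYS = {"temperature", "top_p", "max_tokens", "stop", "seed"}


def build_sampling_args(**params):
    """Split flat sampling params into an OpenAI-compatible payload.

    vLLM-specific keys (e.g. top_k, min_p) are nested under "extra_body",
    which the OpenAI client forwards verbatim to the vLLM server.
    """
    payload, extra_body = {}, {}
    for key, value in params.items():
        if key in OPENAI_KEYS:
            payload[key] = value
        else:
            extra_body[key] = value
    if extra_body:
        payload["extra_body"] = extra_body
    return payload


# Flat CLI flags become the nested JSON the user previously wrote by hand.
args = build_sampling_args(top_p=0.95, top_k=20, min_p=0.05)
# → {"top_p": 0.95, "extra_body": {"top_k": 20, "min_p": 0.05}}
```

With a helper like this, dedicated CLI flags can be collected into keyword arguments and serialized uniformly by both vf-eval and GRPOTrainer.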

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Test improvement

Testing

  • All existing tests pass when running uv run pytest locally.
  • New tests have been added to cover the changes

Checklist

  • My code follows the style guidelines of this project as outlined in AGENTS.md
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

Additional Notes

These changes make it easier to pass sampling parameters such as top_k and min_p to the vf-eval script when using a vLLM backend. The OpenAI backend does not support these arguments and will drop them. For any other backend that supports these arguments but does not follow the vLLM format, they should still be passed directly via --sampling-args.

@CLAassistant

CLAassistant commented Oct 7, 2025

CLA assistant check
All committers have signed the CLA.
