I’m building agents with the Agent API of the Python SDK, using OpenAI models (GPT-5-mini) whose provider rejects the max_tokens parameter and requires max_completion_tokens instead.
The current sampling_params schema for the Agents API only supports max_tokens, so when I attempt a tool-calling turn or agent invocation I get errors like:
```
400 BadRequestError: Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
```
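For reference, the provider-side behaviour can be reproduced with the openai Python client directly (the model name and token limit below are just placeholders taken from my setup): passing max_tokens fails with the error above, while max_completion_tokens is accepted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fails on newer models with:
# 400 BadRequestError: Unsupported parameter: 'max_tokens' is not supported
# with this model. Use 'max_completion_tokens' instead.
# client.chat.completions.create(
#     model="gpt-5-mini",
#     messages=[{"role": "user", "content": "Hello"}],
#     max_tokens=256,
# )

# Works: the newer parameter name is accepted by the provider.
resp = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Hello"}],
    max_completion_tokens=256,
)
print(resp.choices[0].message.content)
```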
To keep the SDK working with newer provider models (OpenAI, Azure, etc.), would it be possible to add support for max_completion_tokens in sampling_params for the Agents API, or alternatively to alias max_tokens to max_completion_tokens for backward compatibility (though aliasing could have its own issues)?
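Purely to illustrate the aliasing idea, here is a minimal sketch; the function and parameter names are hypothetical and not the SDK's actual internals. The request-building step would translate a legacy max_tokens into max_completion_tokens only for providers/models that require it:

```python
# Hypothetical sketch of the aliasing idea; names are illustrative,
# not the SDK's real internals.
def build_provider_params(sampling_params: dict, needs_completion_tokens: bool) -> dict:
    """Translate sampling_params into the payload sent to the provider."""
    params = dict(sampling_params)
    if needs_completion_tokens and "max_tokens" in params:
        # Move the legacy name onto the one newer models expect; an explicit
        # max_completion_tokens, if already present, takes precedence.
        params.setdefault("max_completion_tokens", params.pop("max_tokens"))
    return params


# An agent configured with max_tokens keeps working against a model that
# only accepts max_completion_tokens:
print(build_provider_params({"max_tokens": 512, "temperature": 0.2}, True))
# {'temperature': 0.2, 'max_completion_tokens': 512}
```

This keeps existing configurations working unchanged while sending the parameter name the provider expects.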
Happy to assist with a pull request if helpful.