release: 0.17.0 #176


Merged · 14 commits · Aug 8, 2025
Changes from all commits
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
-  ".": "0.16.0"
+  ".": "0.17.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
- openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-721e6ccaa72205ee14c71f8163129920464fb814b95d3df9567a9476bbd9b7fb.yml
- openapi_spec_hash: 2115413a21df8b5bf9e4552a74df4312
- config_hash: 9606bb315a193bfd8da0459040143242
+ openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6a1bfd4738fff02ef5becc3fdb2bf0cd6c026f2c924d4147a2a515474477dd9a.yml
+ openapi_spec_hash: 3eb8d86c06f0bb5e1190983e5acfc9ba
+ config_hash: a67c5e195a59855fe8a5db0dc61a3e7f
24 changes: 24 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,29 @@
# Changelog

## 0.17.0 (2025-08-08)

Full Changelog: [v0.16.0...v0.17.0](https://github.com/openai/openai-ruby/compare/v0.16.0...v0.17.0)

### Features

* **api:** adds GPT-5 and new API features: platform.openai.com/docs/guides/gpt-5 ([068a381](https://github.com/openai/openai-ruby/commit/068a381a17dd2d60865e67fcd17fa84d919f3f5c))
* **api:** manual updates ([1d79621](https://github.com/openai/openai-ruby/commit/1d79621120fbccc8dd41f5af6df5a9b1a9018e73))


### Bug Fixes

* **client:** don't try to parse if content is missing ([#770](https://github.com/openai/openai-ruby/issues/770)) ([7f8f2d3](https://github.com/openai/openai-ruby/commit/7f8f2d32863fafc39ee4a884937673a2ad9be358))
* **client:** fix verbosity parameter location in Responses ([a6b7ae8](https://github.com/openai/openai-ruby/commit/a6b7ae8b568c2214d4883fad44c9cf2e8a7d53e2))
* **internal:** fix rbi error ([803f20b](https://github.com/openai/openai-ruby/commit/803f20ba0c3751d28175dca99853783f0d851645))
* **responses:** undo accidentally deleted fields ([#177](https://github.com/openai/openai-ruby/issues/177)) ([90a7c3a](https://github.com/openai/openai-ruby/commit/90a7c3ac8d22cc90b8ecaa3b091598ea3bc73029))
* **responses:** remove incorrect verbosity param ([127e2d1](https://github.com/openai/openai-ruby/commit/127e2d1b96b72307178446f0aa8acc1d3ad31367))


### Chores

* **internal:** increase visibility of internal helper method ([eddbcda](https://github.com/openai/openai-ruby/commit/eddbcda189ac0a864fc3dadc5dd3578d730c491f))
* update @stainless-api/prism-cli to v5.15.0 ([aaa7d89](https://github.com/openai/openai-ruby/commit/aaa7d895a3dba31f32cf5f4373a49d1571667fc6))

## 0.16.0 (2025-07-30)

Full Changelog: [v0.15.0...v0.16.0](https://github.com/openai/openai-ruby/compare/v0.15.0...v0.16.0)
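The changelog above includes two fixes for where the new `verbosity` parameter lives in the Responses API. A minimal sketch of the corrected request shape; the `text.verbosity` nesting is an assumption based on the OpenAI API reference, not something shown in this diff:

```ruby
# Hypothetical request parameters illustrating the corrected placement:
# `verbosity` nests under `text` rather than sitting at the top level.
params = {
  model: :"gpt-5",
  input: "Write a haiku about OpenAI.",
  text: {verbosity: "low"}
}

# The pre-fix (incorrect) shape placed it at the top level instead:
# {model: :"gpt-5", input: "...", verbosity: "low"}

puts(params.dig(:text, :verbosity)) # => low
```

Passed as `openai.responses.create(**params)`, the nested form is what the fixed SDK serializes.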
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
- openai (0.16.0)
+ openai (0.17.0)
connection_pool

GEM
34 changes: 14 additions & 20 deletions README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
- gem "openai", "~> 0.16.0"
+ gem "openai", "~> 0.17.0"
```

<!-- x-release-please-end -->
@@ -30,10 +30,7 @@ openai = OpenAI::Client.new(
api_key: ENV["OPENAI_API_KEY"] # This is the default and can be omitted
)

- chat_completion = openai.chat.completions.create(
-   messages: [{role: "user", content: "Say this is a test"}],
-   model: :"gpt-4.1"
- )
+ chat_completion = openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")

puts(chat_completion)
```
@@ -45,7 +42,7 @@ We provide support for streaming responses using Server-Sent Events (SSE).
```ruby
stream = openai.responses.stream(
input: "Write a haiku about OpenAI.",
- model: :"gpt-4.1"
+ model: :"gpt-5"
)

stream.each do |event|
@@ -343,7 +340,7 @@ openai = OpenAI::Client.new(
# Or, configure per-request:
openai.chat.completions.create(
messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
- model: :"gpt-4.1",
+ model: :"gpt-5",
request_options: {max_retries: 5}
)
```
@@ -361,7 +358,7 @@ openai = OpenAI::Client.new(
# Or, configure per-request:
openai.chat.completions.create(
messages: [{role: "user", content: "How can I list all files in a directory using Python?"}],
- model: :"gpt-4.1",
+ model: :"gpt-5",
request_options: {timeout: 5}
)
```
@@ -396,7 +393,7 @@ Note: the `extra_` parameters of the same name override the documented parameters.
chat_completion =
openai.chat.completions.create(
messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
- model: :"gpt-4.1",
+ model: :"gpt-5",
request_options: {
extra_query: {my_query_parameter: value},
extra_body: {my_body_parameter: value},
@@ -444,23 +441,20 @@ You can provide typesafe request parameters like so:
```ruby
openai.chat.completions.create(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
- model: :"gpt-4.1"
+ model: :"gpt-5"
)
```

Or, equivalently:

```ruby
# Hashes work, but are not typesafe:
- openai.chat.completions.create(
-   messages: [{role: "user", content: "Say this is a test"}],
-   model: :"gpt-4.1"
- )
+ openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")

# You can also splat a full Params class:
params = OpenAI::Chat::CompletionCreateParams.new(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
- model: :"gpt-4.1"
+ model: :"gpt-5"
)
openai.chat.completions.create(**params)
```
@@ -470,25 +464,25 @@ openai.chat.completions.create(**params)
Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols", which are always primitives at runtime:

```ruby
- # :low
- puts(OpenAI::ReasoningEffort::LOW)
+ # :minimal
+ puts(OpenAI::ReasoningEffort::MINIMAL)

# Revealed type: `T.all(OpenAI::ReasoningEffort, Symbol)`
- T.reveal_type(OpenAI::ReasoningEffort::LOW)
+ T.reveal_type(OpenAI::ReasoningEffort::MINIMAL)
```

Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:

```ruby
# Using the enum constants preserves the tagged type information:
openai.chat.completions.create(
- reasoning_effort: OpenAI::ReasoningEffort::LOW,
+ reasoning_effort: OpenAI::ReasoningEffort::MINIMAL,
# …
)

# Literal values are also permissible:
openai.chat.completions.create(
- reasoning_effort: :low,
+ reasoning_effort: :minimal,
# …
)
```
@@ -27,6 +27,11 @@ class GetWeather < OpenAI::BaseModel
.reject { _1.message.refusal }
.flat_map { _1.message.tool_calls.to_a }
.each do |tool_call|
- # parsed is an instance of `GetWeather`
- pp(tool_call.function.parsed)
+ case tool_call
+ when OpenAI::Chat::ChatCompletionMessageFunctionToolCall
+   # parsed is an instance of `GetWeather`
+   pp(tool_call.function.parsed)
+ else
+   puts("Unexpected tool call type: #{tool_call.type}")
+ end
end
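The revised example above dispatches on the tool-call class before reading `function.parsed`. A self-contained sketch of that same guard, using `Struct` stand-ins in place of live API objects (with the real SDK, the object comes from `openai.chat.completions.parse` and `parsed` would be a `GetWeather` instance rather than a plain hash):

```ruby
# Struct stand-ins mimicking the shape of a function tool call; the real
# classes live under OpenAI::Chat and are produced by the parse helper.
FunctionToolCall = Struct.new(:type, :function)
ParsedFunction = Struct.new(:parsed)

tool_call = FunctionToolCall.new(:function, ParsedFunction.new({location: "Oakland", unit: :celsius}))

case tool_call
when FunctionToolCall
  # With the SDK, this branch matches ChatCompletionMessageFunctionToolCall.
  pp(tool_call.function.parsed)
else
  # Custom tool calls (new in this release) would fall through to here.
  puts("Unexpected tool call type: #{tool_call.type}")
end
```

The `else` branch matters now that the union also includes custom tool calls, which carry no `parsed` payload.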
21 changes: 19 additions & 2 deletions lib/openai.rb
@@ -183,6 +183,8 @@
require_relative "openai/models/beta/thread_stream_event"
require_relative "openai/models/beta/thread_update_params"
require_relative "openai/models/chat/chat_completion"
+ require_relative "openai/models/chat/chat_completion_allowed_tool_choice"
+ require_relative "openai/models/chat/chat_completion_allowed_tools"
require_relative "openai/models/chat/chat_completion_assistant_message_param"
require_relative "openai/models/chat/chat_completion_audio"
require_relative "openai/models/chat/chat_completion_audio_param"
@@ -192,14 +194,19 @@
require_relative "openai/models/chat/chat_completion_content_part_input_audio"
require_relative "openai/models/chat/chat_completion_content_part_refusal"
require_relative "openai/models/chat/chat_completion_content_part_text"
+ require_relative "openai/models/chat/chat_completion_custom_tool"
require_relative "openai/models/chat/chat_completion_deleted"
require_relative "openai/models/chat/chat_completion_developer_message_param"
require_relative "openai/models/chat/chat_completion_function_call_option"
require_relative "openai/models/chat/chat_completion_function_message_param"
+ require_relative "openai/models/chat/chat_completion_function_tool"
+ require_relative "openai/models/chat/chat_completion_message_custom_tool_call"
+ require_relative "openai/models/chat/chat_completion_message_function_tool_call"
require_relative "openai/models/chat/chat_completion_message_param"
require_relative "openai/models/chat/chat_completion_message_tool_call"
require_relative "openai/models/chat/chat_completion_modality"
require_relative "openai/models/chat/chat_completion_named_tool_choice"
+ require_relative "openai/models/chat/chat_completion_named_tool_choice_custom"
require_relative "openai/models/chat/chat_completion_prediction_content"
require_relative "openai/models/chat/chat_completion_reasoning_effort"
require_relative "openai/models/chat/chat_completion_role"
@@ -240,6 +247,7 @@
require_relative "openai/models/containers/file_retrieve_response"
require_relative "openai/models/containers/files/content_retrieve_params"
require_relative "openai/models/create_embedding_response"
+ require_relative "openai/models/custom_tool_input_format"
require_relative "openai/models/embedding"
require_relative "openai/models/embedding_create_params"
require_relative "openai/models/embedding_model"
@@ -348,7 +356,10 @@
require_relative "openai/models/response_format_json_object"
require_relative "openai/models/response_format_json_schema"
require_relative "openai/models/response_format_text"
+ require_relative "openai/models/response_format_text_grammar"
+ require_relative "openai/models/response_format_text_python"
require_relative "openai/models/responses/computer_tool"
+ require_relative "openai/models/responses/custom_tool"
require_relative "openai/models/responses/easy_input_message"
require_relative "openai/models/responses/file_search_tool"
require_relative "openai/models/responses/function_tool"
@@ -374,6 +385,10 @@
require_relative "openai/models/responses/response_content_part_done_event"
require_relative "openai/models/responses/response_created_event"
require_relative "openai/models/responses/response_create_params"
+ require_relative "openai/models/responses/response_custom_tool_call"
+ require_relative "openai/models/responses/response_custom_tool_call_input_delta_event"
+ require_relative "openai/models/responses/response_custom_tool_call_input_done_event"
+ require_relative "openai/models/responses/response_custom_tool_call_output"
require_relative "openai/models/responses/response_delete_params"
require_relative "openai/models/responses/response_error"
require_relative "openai/models/responses/response_error_event"
@@ -426,12 +441,12 @@
require_relative "openai/models/responses/response_prompt"
require_relative "openai/models/responses/response_queued_event"
require_relative "openai/models/responses/response_reasoning_item"
- require_relative "openai/models/responses/response_reasoning_summary_delta_event"
- require_relative "openai/models/responses/response_reasoning_summary_done_event"
require_relative "openai/models/responses/response_reasoning_summary_part_added_event"
require_relative "openai/models/responses/response_reasoning_summary_part_done_event"
require_relative "openai/models/responses/response_reasoning_summary_text_delta_event"
require_relative "openai/models/responses/response_reasoning_summary_text_done_event"
+ require_relative "openai/models/responses/response_reasoning_text_delta_event"
+ require_relative "openai/models/responses/response_reasoning_text_done_event"
require_relative "openai/models/responses/response_refusal_delta_event"
require_relative "openai/models/responses/response_refusal_done_event"
require_relative "openai/models/responses/response_retrieve_params"
Expand All @@ -445,6 +460,8 @@
require_relative "openai/models/responses/response_web_search_call_in_progress_event"
require_relative "openai/models/responses/response_web_search_call_searching_event"
require_relative "openai/models/responses/tool"
+ require_relative "openai/models/responses/tool_choice_allowed"
+ require_relative "openai/models/responses/tool_choice_custom"
require_relative "openai/models/responses/tool_choice_function"
require_relative "openai/models/responses/tool_choice_mcp"
require_relative "openai/models/responses/tool_choice_options"
2 changes: 1 addition & 1 deletion lib/openai/internal/transport/base_client.rb
@@ -365,7 +365,7 @@ def initialize(
#
# @raise [OpenAI::Errors::APIError]
# @return [Array(Integer, Net::HTTPResponse, Enumerable<String>)]
- private def send_request(request, redirect_count:, retry_count:, send_retry_header:)
+ def send_request(request, redirect_count:, retry_count:, send_retry_header:)
url, headers, max_retries, timeout = request.fetch_values(:url, :headers, :max_retries, :timeout)
input = {**request.except(:timeout), deadline: OpenAI::Internal::Util.monotonic_secs + timeout}

12 changes: 6 additions & 6 deletions lib/openai/internal/type/enum.rb
@@ -19,23 +19,23 @@ module Type
# @example
# # `chat_model` is a `OpenAI::ChatModel`
# case chat_model
- # when OpenAI::ChatModel::GPT_4_1
+ # when OpenAI::ChatModel::GPT_5
# # ...
- # when OpenAI::ChatModel::GPT_4_1_MINI
+ # when OpenAI::ChatModel::GPT_5_MINI
# # ...
- # when OpenAI::ChatModel::GPT_4_1_NANO
+ # when OpenAI::ChatModel::GPT_5_NANO
# # ...
# else
# puts(chat_model)
# end
#
# @example
# case chat_model
- # in :"gpt-4.1"
+ # in :"gpt-5"
# # ...
- # in :"gpt-4.1-mini"
+ # in :"gpt-5-mini"
# # ...
- # in :"gpt-4.1-nano"
+ # in :"gpt-5-nano"
# # ...
# else
# puts(chat_model)
30 changes: 13 additions & 17 deletions lib/openai/internal/type/union.rb
@@ -6,28 +6,24 @@ module Type
# @api private
#
# @example
- # # `chat_completion_content_part` is a `OpenAI::Chat::ChatCompletionContentPart`
- # case chat_completion_content_part
- # when OpenAI::Chat::ChatCompletionContentPartText
- #   puts(chat_completion_content_part.text)
- # when OpenAI::Chat::ChatCompletionContentPartImage
- #   puts(chat_completion_content_part.image_url)
- # when OpenAI::Chat::ChatCompletionContentPartInputAudio
- #   puts(chat_completion_content_part.input_audio)
+ # # `custom_tool_input_format` is a `OpenAI::CustomToolInputFormat`
+ # case custom_tool_input_format
+ # when OpenAI::CustomToolInputFormat::Text
+ #   puts(custom_tool_input_format.type)
+ # when OpenAI::CustomToolInputFormat::Grammar
+ #   puts(custom_tool_input_format.definition)
# else
- #   puts(chat_completion_content_part)
+ #   puts(custom_tool_input_format)
# end
#
# @example
- # case chat_completion_content_part
- # in {type: :text, text: text}
- #   puts(text)
- # in {type: :image_url, image_url: image_url}
- #   puts(image_url)
- # in {type: :input_audio, input_audio: input_audio}
- #   puts(input_audio)
+ # case custom_tool_input_format
+ # in {type: :text}
+ #   # ...
+ # in {type: :grammar, definition: definition, syntax: syntax}
+ #   puts(definition)
# else
- #   puts(chat_completion_content_part)
+ #   puts(custom_tool_input_format)
# end
module Union
include OpenAI::Internal::Type::Converter
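The updated pattern-matching example above relies on Ruby hash deconstruction. A runnable sketch of the same dispatch, using plain hashes in place of `OpenAI::CustomToolInputFormat` instances (the `definition`/`syntax` keys mirror the doc comment, while the `lark` syntax value is an assumption for illustration):

```ruby
# Plain-hash stand-ins for the two variants of a custom tool input format.
text_format = {type: :text}
grammar_format = {type: :grammar, definition: "root ::= \"yes\" | \"no\"", syntax: :lark}

def describe(format)
  case format
  in {type: :text}
    "free-form text input"
  in {type: :grammar, definition: definition, syntax: syntax}
    "#{syntax} grammar: #{definition}"
  else
    "unknown format"
  end
end

puts(describe(text_format))    # => free-form text input
puts(describe(grammar_format)) # => lark grammar: root ::= "yes" | "no"
```

Hash patterns match on the keys named in the pattern and ignore extras, which is why the `{type: :text}` branch works even if a variant carries additional fields.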
6 changes: 6 additions & 0 deletions lib/openai/models.rb
@@ -93,6 +93,8 @@ module OpenAI

CreateEmbeddingResponse = OpenAI::Models::CreateEmbeddingResponse

+ CustomToolInputFormat = OpenAI::Models::CustomToolInputFormat

Embedding = OpenAI::Models::Embedding

EmbeddingCreateParams = OpenAI::Models::EmbeddingCreateParams
@@ -209,6 +211,10 @@

ResponseFormatText = OpenAI::Models::ResponseFormatText

+ ResponseFormatTextGrammar = OpenAI::Models::ResponseFormatTextGrammar

+ ResponseFormatTextPython = OpenAI::Models::ResponseFormatTextPython

Responses = OpenAI::Models::Responses

ResponsesModel = OpenAI::Models::ResponsesModel
9 changes: 4 additions & 5 deletions lib/openai/models/beta/assistant_create_params.rb
@@ -49,12 +49,11 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
optional :name, String, nil?: true

# @!attribute reasoning_effort
- # **o-series models only**
- #
# Constrains effort on reasoning for
# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
- # supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
- # result in faster responses and fewer tokens used on reasoning in a response.
+ # supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+ # effort can result in faster responses and fewer tokens used on reasoning in a
+ # response.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -133,7 +132,7 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
#
# @param name [String, nil] The name of the assistant. The maximum length is 256 characters.
#
- # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] **o-series models only**
+ # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
#
# @param response_format [Symbol, :auto, OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONObject, OpenAI::Models::ResponseFormatJSONSchema, nil] Specifies the format that the model must output. Compatible with [GPT-4o](https:
#