
Commit 068a381

feat(api): adds GPT-5 and new API features: platform.openai.com/docs/guides/gpt-5
1 parent 1d79621 commit 068a381

150 files changed (+4793, -637 lines)


.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 109
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d6a16b25b969c3e5382e7d413de15bf83d5f7534d5c3ecce64d3a7e847418f9e.yml
-openapi_spec_hash: 0c0bcf4aee9ca2a948dd14b890dfe728
-config_hash: aeff9289bd7f8c8482e4d738c3c2fde1
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-f5c45f4ae5c2075cbc603d6910bba3da31c23714c209fbd3fd82a94f634a126b.yml
+openapi_spec_hash: 3eb8d86c06f0bb5e1190983e5acfc9ba
+config_hash: 9a64321968e21ed72f5c0e02164ea00d

README.md

Lines changed: 13 additions & 19 deletions
@@ -30,10 +30,7 @@ openai = OpenAI::Client.new(
   api_key: ENV["OPENAI_API_KEY"] # This is the default and can be omitted
 )
 
-chat_completion = openai.chat.completions.create(
-  messages: [{role: "user", content: "Say this is a test"}],
-  model: :"gpt-4.1"
-)
+chat_completion = openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
 
 puts(chat_completion)
 ```
@@ -45,7 +42,7 @@ We provide support for streaming responses using Server-Sent Events (SSE).
 ```ruby
 stream = openai.responses.stream(
   input: "Write a haiku about OpenAI.",
-  model: :"gpt-4.1"
+  model: :"gpt-5"
 )
 
 stream.each do |event|
@@ -343,7 +340,7 @@ openai = OpenAI::Client.new(
 # Or, configure per-request:
 openai.chat.completions.create(
   messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
-  model: :"gpt-4.1",
+  model: :"gpt-5",
   request_options: {max_retries: 5}
 )
 ```
@@ -361,7 +358,7 @@ openai = OpenAI::Client.new(
 # Or, configure per-request:
 openai.chat.completions.create(
   messages: [{role: "user", content: "How can I list all files in a directory using Python?"}],
-  model: :"gpt-4.1",
+  model: :"gpt-5",
   request_options: {timeout: 5}
 )
 ```
@@ -396,7 +393,7 @@ Note: the `extra_` parameters of the same name overrides the documented parameters
 chat_completion =
   openai.chat.completions.create(
     messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
-    model: :"gpt-4.1",
+    model: :"gpt-5",
     request_options: {
       extra_query: {my_query_parameter: value},
       extra_body: {my_body_parameter: value},
@@ -444,23 +441,20 @@ You can provide typesafe request parameters like so:
 ```ruby
 openai.chat.completions.create(
   messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
-  model: :"gpt-4.1"
+  model: :"gpt-5"
 )
 ```
 
 Or, equivalently:
 
 ```ruby
 # Hashes work, but are not typesafe:
-openai.chat.completions.create(
-  messages: [{role: "user", content: "Say this is a test"}],
-  model: :"gpt-4.1"
-)
+openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
 
 # You can also splat a full Params class:
 params = OpenAI::Chat::CompletionCreateParams.new(
   messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
-  model: :"gpt-4.1"
+  model: :"gpt-5"
 )
 openai.chat.completions.create(**params)
 ```
@@ -470,25 +464,25 @@ openai.chat.completions.create(**params)
 Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols" instead, which is always a primitive at runtime:
 
 ```ruby
-# :low
-puts(OpenAI::ReasoningEffort::LOW)
+# :minimal
+puts(OpenAI::ReasoningEffort::MINIMAL)
 
 # Revealed type: `T.all(OpenAI::ReasoningEffort, Symbol)`
-T.reveal_type(OpenAI::ReasoningEffort::LOW)
+T.reveal_type(OpenAI::ReasoningEffort::MINIMAL)
 ```
 
 Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:
 
 ```ruby
 # Using the enum constants preserves the tagged type information:
 openai.chat.completions.create(
-  reasoning_effort: OpenAI::ReasoningEffort::LOW,
+  reasoning_effort: OpenAI::ReasoningEffort::MINIMAL,
   #
 )
 
 # Literal values are also permissible:
 openai.chat.completions.create(
-  reasoning_effort: :low,
+  reasoning_effort: :minimal,
   #
 )
 ```
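Taken together, the README changes above swap the example model to `gpt-5` and the example reasoning effort to `:minimal`. A minimal sketch combining the two, assuming the same `openai` client configured at the top of the README:

```ruby
# Sketch: gpt-5 with the new :minimal reasoning effort, reusing the README's client.
chat_completion = openai.chat.completions.create(
  messages: [{role: "user", content: "Say this is a test"}],
  model: :"gpt-5",
  reasoning_effort: :minimal
)

puts(chat_completion)
```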

examples/structured_outputs_chat_completions_function_calling.rb

Lines changed: 7 additions & 2 deletions
@@ -27,6 +27,11 @@ class GetWeather < OpenAI::BaseModel
   .reject { _1.message.refusal }
   .flat_map { _1.message.tool_calls.to_a }
   .each do |tool_call|
-    # parsed is an instance of `GetWeather`
-    pp(tool_call.function.parsed)
+    case tool_call
+    when OpenAI::Chat::ChatCompletionMessageFunctionToolCall
+      # parsed is an instance of `GetWeather`
+      pp(tool_call.function.parsed)
+    else
+      puts("Unexpected tool call type: #{tool_call.type}")
+    end
   end
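The example now branches on the tool call class because, with custom tools in the API, a message's tool calls are no longer guaranteed to be function calls. A hedged sketch of handling the other variant explicitly; the `ChatCompletionMessageCustomToolCall` class name is inferred from the new require in `lib/openai.rb`, and the custom branch only inspects the whole object rather than assuming accessor names:

```ruby
case tool_call
when OpenAI::Chat::ChatCompletionMessageFunctionToolCall
  # Structured outputs parse the JSON arguments into the declared GetWeather model.
  pp(tool_call.function.parsed)
when OpenAI::Chat::ChatCompletionMessageCustomToolCall
  # Custom tools carry free-form input rather than parsed JSON arguments;
  # inspect the call object itself instead of assuming field names.
  pp(tool_call)
else
  puts("Unexpected tool call type: #{tool_call.type}")
end
```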

lib/openai.rb

Lines changed: 17 additions & 0 deletions
@@ -183,6 +183,8 @@
 require_relative "openai/models/beta/thread_stream_event"
 require_relative "openai/models/beta/thread_update_params"
 require_relative "openai/models/chat/chat_completion"
+require_relative "openai/models/chat/chat_completion_allowed_tool_choice"
+require_relative "openai/models/chat/chat_completion_allowed_tools"
 require_relative "openai/models/chat/chat_completion_assistant_message_param"
 require_relative "openai/models/chat/chat_completion_audio"
 require_relative "openai/models/chat/chat_completion_audio_param"
@@ -192,14 +194,19 @@
 require_relative "openai/models/chat/chat_completion_content_part_input_audio"
 require_relative "openai/models/chat/chat_completion_content_part_refusal"
 require_relative "openai/models/chat/chat_completion_content_part_text"
+require_relative "openai/models/chat/chat_completion_custom_tool"
 require_relative "openai/models/chat/chat_completion_deleted"
 require_relative "openai/models/chat/chat_completion_developer_message_param"
 require_relative "openai/models/chat/chat_completion_function_call_option"
 require_relative "openai/models/chat/chat_completion_function_message_param"
+require_relative "openai/models/chat/chat_completion_function_tool"
+require_relative "openai/models/chat/chat_completion_message_custom_tool_call"
+require_relative "openai/models/chat/chat_completion_message_function_tool_call"
 require_relative "openai/models/chat/chat_completion_message_param"
 require_relative "openai/models/chat/chat_completion_message_tool_call"
 require_relative "openai/models/chat/chat_completion_modality"
 require_relative "openai/models/chat/chat_completion_named_tool_choice"
+require_relative "openai/models/chat/chat_completion_named_tool_choice_custom"
 require_relative "openai/models/chat/chat_completion_prediction_content"
 require_relative "openai/models/chat/chat_completion_reasoning_effort"
 require_relative "openai/models/chat/chat_completion_role"
@@ -240,6 +247,7 @@
 require_relative "openai/models/containers/file_retrieve_response"
 require_relative "openai/models/containers/files/content_retrieve_params"
 require_relative "openai/models/create_embedding_response"
+require_relative "openai/models/custom_tool_input_format"
 require_relative "openai/models/embedding"
 require_relative "openai/models/embedding_create_params"
 require_relative "openai/models/embedding_model"
@@ -348,7 +356,10 @@
 require_relative "openai/models/response_format_json_object"
 require_relative "openai/models/response_format_json_schema"
 require_relative "openai/models/response_format_text"
+require_relative "openai/models/response_format_text_grammar"
+require_relative "openai/models/response_format_text_python"
 require_relative "openai/models/responses/computer_tool"
+require_relative "openai/models/responses/custom_tool"
 require_relative "openai/models/responses/easy_input_message"
 require_relative "openai/models/responses/file_search_tool"
 require_relative "openai/models/responses/function_tool"
@@ -374,6 +385,10 @@
 require_relative "openai/models/responses/response_content_part_done_event"
 require_relative "openai/models/responses/response_created_event"
 require_relative "openai/models/responses/response_create_params"
+require_relative "openai/models/responses/response_custom_tool_call"
+require_relative "openai/models/responses/response_custom_tool_call_input_delta_event"
+require_relative "openai/models/responses/response_custom_tool_call_input_done_event"
+require_relative "openai/models/responses/response_custom_tool_call_output"
 require_relative "openai/models/responses/response_delete_params"
 require_relative "openai/models/responses/response_error"
 require_relative "openai/models/responses/response_error_event"
@@ -445,6 +460,8 @@
 require_relative "openai/models/responses/response_web_search_call_in_progress_event"
 require_relative "openai/models/responses/response_web_search_call_searching_event"
 require_relative "openai/models/responses/tool"
+require_relative "openai/models/responses/tool_choice_allowed"
+require_relative "openai/models/responses/tool_choice_custom"
 require_relative "openai/models/responses/tool_choice_function"
 require_relative "openai/models/responses/tool_choice_mcp"
 require_relative "openai/models/responses/tool_choice_options"

lib/openai/internal/type/enum.rb

Lines changed: 6 additions & 6 deletions
@@ -19,23 +19,23 @@ module Type
       # @example
       #   # `chat_model` is a `OpenAI::ChatModel`
       #   case chat_model
-      #   when OpenAI::ChatModel::GPT_4_1
+      #   when OpenAI::ChatModel::GPT_5
       #     # ...
-      #   when OpenAI::ChatModel::GPT_4_1_MINI
+      #   when OpenAI::ChatModel::GPT_5_MINI
       #     # ...
-      #   when OpenAI::ChatModel::GPT_4_1_NANO
+      #   when OpenAI::ChatModel::GPT_5_NANO
       #     # ...
       #   else
       #     puts(chat_model)
       #   end
       #
       # @example
       #   case chat_model
-      #   in :"gpt-4.1"
+      #   in :"gpt-5"
       #     # ...
-      #   in :"gpt-4.1-mini"
+      #   in :"gpt-5-mini"
       #     # ...
-      #   in :"gpt-4.1-nano"
+      #   in :"gpt-5-nano"
       #     # ...
       #   else
       #     puts(chat_model)
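The point of these doc examples is that the two styles are interchangeable: per the README, a "tagged symbol" is a plain Symbol at runtime. A small sketch illustrating that with the `OpenAI::ChatModel::GPT_5` constant referenced above:

```ruby
model = OpenAI::ChatModel::GPT_5

puts(model == :"gpt-5")   # => true, the enum constant is just a Symbol
puts(model.is_a?(Symbol)) # => true, no wrapper object exists at runtime

# So `when OpenAI::ChatModel::GPT_5` and `in :"gpt-5"` match the same value.
```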

lib/openai/internal/type/union.rb

Lines changed: 13 additions & 17 deletions
@@ -6,28 +6,24 @@ module Type
       # @api private
       #
       # @example
-      #   # `chat_completion_content_part` is a `OpenAI::Chat::ChatCompletionContentPart`
-      #   case chat_completion_content_part
-      #   when OpenAI::Chat::ChatCompletionContentPartText
-      #     puts(chat_completion_content_part.text)
-      #   when OpenAI::Chat::ChatCompletionContentPartImage
-      #     puts(chat_completion_content_part.image_url)
-      #   when OpenAI::Chat::ChatCompletionContentPartInputAudio
-      #     puts(chat_completion_content_part.input_audio)
+      #   # `custom_tool_input_format` is a `OpenAI::CustomToolInputFormat`
+      #   case custom_tool_input_format
+      #   when OpenAI::CustomToolInputFormat::Text
+      #     puts(custom_tool_input_format.type)
+      #   when OpenAI::CustomToolInputFormat::Grammar
+      #     puts(custom_tool_input_format.definition)
       #   else
-      #     puts(chat_completion_content_part)
+      #     puts(custom_tool_input_format)
       #   end
       #
       # @example
-      #   case chat_completion_content_part
-      #   in {type: :text, text: text}
-      #     puts(text)
-      #   in {type: :image_url, image_url: image_url}
-      #     puts(image_url)
-      #   in {type: :input_audio, input_audio: input_audio}
-      #     puts(input_audio)
+      #   case custom_tool_input_format
+      #   in {type: :text}
+      #     # ...
+      #   in {type: :grammar, definition: definition, syntax: syntax}
+      #     puts(definition)
       #   else
-      #     puts(chat_completion_content_part)
+      #     puts(custom_tool_input_format)
       #   end
       module Union
         include OpenAI::Internal::Type::Converter
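A short sketch of working with the new union via the hash patterns from the example above; the keys (`type`, `definition`, `syntax`) come from that example, while the `:lark` value and the sample grammar string are illustrative assumptions:

```ruby
# Dispatch on a custom tool input format using the documented hash patterns.
def describe_format(custom_tool_input_format)
  case custom_tool_input_format
  in {type: :text}
    "free-form text input"
  in {type: :grammar, definition: definition, syntax: syntax}
    "grammar-constrained input (#{syntax}): #{definition}"
  else
    "unrecognized format: #{custom_tool_input_format.inspect}"
  end
end

puts(describe_format({type: :text}))
puts(describe_format({type: :grammar, syntax: :lark, definition: "start: NUMBER"}))
```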

lib/openai/models.rb

Lines changed: 6 additions & 0 deletions
@@ -93,6 +93,8 @@ module OpenAI
 
   CreateEmbeddingResponse = OpenAI::Models::CreateEmbeddingResponse
 
+  CustomToolInputFormat = OpenAI::Models::CustomToolInputFormat
+
   Embedding = OpenAI::Models::Embedding
 
   EmbeddingCreateParams = OpenAI::Models::EmbeddingCreateParams
@@ -209,6 +211,10 @@ module OpenAI
 
   ResponseFormatText = OpenAI::Models::ResponseFormatText
 
+  ResponseFormatTextGrammar = OpenAI::Models::ResponseFormatTextGrammar
+
+  ResponseFormatTextPython = OpenAI::Models::ResponseFormatTextPython
+
   Responses = OpenAI::Models::Responses
 
   ResponsesModel = OpenAI::Models::ResponsesModel

lib/openai/models/beta/assistant_create_params.rb

Lines changed: 4 additions & 5 deletions
@@ -49,12 +49,11 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
       optional :name, String, nil?: true
 
       # @!attribute reasoning_effort
-      #   **o-series models only**
-      #
       #   Constrains effort on reasoning for
       #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-      #   supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
-      #   result in faster responses and fewer tokens used on reasoning in a response.
+      #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+      #   effort can result in faster responses and fewer tokens used on reasoning in a
+      #   response.
       #
       #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
       optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -133,7 +132,7 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
       #
       # @param name [String, nil] The name of the assistant. The maximum length is 256 characters.
       #
-      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] **o-series models only**
+      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
       #
       # @param response_format [Symbol, :auto, OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONObject, OpenAI::Models::ResponseFormatJSONSchema, nil] Specifies the format that the model must output. Compatible with [GPT-4o](https:
       #
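The doc comment is no longer scoped to "o-series models only" and now lists `minimal` as a supported value. A hedged sketch of passing it when creating an assistant, assuming the usual beta assistants resource exposed by this client:

```ruby
# Sketch only: the resource path and surrounding parameters are assumed, not shown in this diff.
assistant = openai.beta.assistants.create(
  model: :"gpt-5",
  name: "Terse helper",
  reasoning_effort: :minimal # joins :low, :medium, and :high
)

puts(assistant.id)
```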

lib/openai/models/beta/assistant_update_params.rb

Lines changed: 22 additions & 5 deletions
@@ -49,12 +49,11 @@ class AssistantUpdateParams < OpenAI::Internal::Type::BaseModel
       optional :name, String, nil?: true
 
       # @!attribute reasoning_effort
-      #   **o-series models only**
-      #
       #   Constrains effort on reasoning for
       #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-      #   supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
-      #   result in faster responses and fewer tokens used on reasoning in a response.
+      #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+      #   effort can result in faster responses and fewer tokens used on reasoning in a
+      #   response.
       #
       #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
       optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -133,7 +132,7 @@ class AssistantUpdateParams < OpenAI::Internal::Type::BaseModel
       #
       # @param name [String, nil] The name of the assistant. The maximum length is 256 characters.
       #
-      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] **o-series models only**
+      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
      #
       # @param response_format [Symbol, :auto, OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONObject, OpenAI::Models::ResponseFormatJSONSchema, nil] Specifies the format that the model must output. Compatible with [GPT-4o](https:
       #
@@ -157,6 +156,18 @@ module Model
 
         variant String
 
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5 }
+
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_MINI }
+
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_NANO }
+
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_2025_08_07 }
+
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_MINI_2025_08_07 }
+
+        variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_NANO_2025_08_07 }
+
         variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_4_1 }
 
         variant const: -> { OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_4_1_MINI }
@@ -238,6 +249,12 @@ module Model
 
         # @!group
 
+        GPT_5 = :"gpt-5"
+        GPT_5_MINI = :"gpt-5-mini"
+        GPT_5_NANO = :"gpt-5-nano"
+        GPT_5_2025_08_07 = :"gpt-5-2025-08-07"
+        GPT_5_MINI_2025_08_07 = :"gpt-5-mini-2025-08-07"
+        GPT_5_NANO_2025_08_07 = :"gpt-5-nano-2025-08-07"
         GPT_4_1 = :"gpt-4.1"
         GPT_4_1_MINI = :"gpt-4.1-mini"
         GPT_4_1_NANO = :"gpt-4.1-nano"
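Beyond the doc-comment change, the update params' `Model` union gains constants for the dated and undated GPT-5 snapshots. A hedged sketch of moving an existing assistant onto one of them; the assistant id is a placeholder and the resource path is assumed:

```ruby
# Sketch only: "asst_123" is a placeholder id.
openai.beta.assistants.update(
  "asst_123",
  model: OpenAI::Models::Beta::AssistantUpdateParams::Model::GPT_5_2025_08_07
)
```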

lib/openai/models/beta/threads/run_create_params.rb

Lines changed: 4 additions & 5 deletions
@@ -107,12 +107,11 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel
       optional :parallel_tool_calls, OpenAI::Internal::Type::Boolean
 
       # @!attribute reasoning_effort
-      #   **o-series models only**
-      #
       #   Constrains effort on reasoning for
       #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-      #   supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
-      #   result in faster responses and fewer tokens used on reasoning in a response.
+      #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+      #   effort can result in faster responses and fewer tokens used on reasoning in a
+      #   response.
       #
       #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
       optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -216,7 +215,7 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel
       #
       # @param parallel_tool_calls [Boolean] Whether to enable [parallel function calling](https://platform.openai.com/docs/g
       #
-      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] **o-series models only**
+      # @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
       #
       # @param response_format [Symbol, :auto, OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONObject, OpenAI::Models::ResponseFormatJSONSchema, nil] Specifies the format that the model must output. Compatible with [GPT-4o](https:
       #
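Runs get the same relaxed reasoning-effort wording, so the value can be set per run rather than only on the assistant. A hedged sketch with placeholder ids and an assumed resource path:

```ruby
# Sketch only: thread and assistant ids are placeholders.
run = openai.beta.threads.runs.create(
  "thread_123",
  assistant_id: "asst_123",
  reasoning_effort: :minimal
)

puts(run.status)
```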
