
Commit 26ece0e

feat(api): gpt 5.1
1 parent 239feb5 commit 26ece0e

File tree

123 files changed (+5146 −447 lines)


.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 135
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-eeba8addf3a5f412e5ce8d22031e60c61650cee3f5d9e587a2533f6818a249ea.yml
-openapi_spec_hash: 0a4d8ad2469823ce24a3fd94f23f1c2b
-config_hash: 630eea84bb3067d25640419af058ed56
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ca24bc4d8125b5153514ce643c4e3220f25971b7d67ca384d56d493c72c0d977.yml
+openapi_spec_hash: c6f048c7b3d29f4de48fde0e845ba33f
+config_hash: b876221dfb213df9f0a999e75d38a65e

README.md

Lines changed: 20 additions & 14 deletions
@@ -30,7 +30,10 @@ openai = OpenAI::Client.new(
   api_key: ENV["OPENAI_API_KEY"] # This is the default and can be omitted
 )
 
-chat_completion = openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
+chat_completion = openai.chat.completions.create(
+  messages: [{role: "user", content: "Say this is a test"}],
+  model: :"gpt-5.1"
+)
 
 puts(chat_completion)
 ```
@@ -42,7 +45,7 @@ We provide support for streaming responses using Server-Sent Events (SSE).
 ```ruby
 stream = openai.responses.stream(
   input: "Write a haiku about OpenAI.",
-  model: :"gpt-5"
+  model: :"gpt-5.1"
 )
 
 stream.each do |event|
@@ -340,7 +343,7 @@ openai = OpenAI::Client.new(
 # Or, configure per-request:
 openai.chat.completions.create(
   messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
-  model: :"gpt-5",
+  model: :"gpt-5.1",
   request_options: {max_retries: 5}
 )
 ```
@@ -358,7 +361,7 @@ openai = OpenAI::Client.new(
 # Or, configure per-request:
 openai.chat.completions.create(
   messages: [{role: "user", content: "How can I list all files in a directory using Python?"}],
-  model: :"gpt-5",
+  model: :"gpt-5.1",
   request_options: {timeout: 5}
 )
 ```
@@ -393,7 +396,7 @@ Note: the `extra_` parameters of the same name overrides the documented parameters
 chat_completion =
   openai.chat.completions.create(
     messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
-    model: :"gpt-5",
+    model: :"gpt-5.1",
     request_options: {
       extra_query: {my_query_parameter: value},
       extra_body: {my_body_parameter: value},
@@ -441,20 +444,23 @@ You can provide typesafe request parameters like so:
 ```ruby
 openai.chat.completions.create(
   messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
-  model: :"gpt-5"
+  model: :"gpt-5.1"
 )
 ```
 
 Or, equivalently:
 
 ```ruby
 # Hashes work, but are not typesafe:
-openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
+openai.chat.completions.create(
+  messages: [{role: "user", content: "Say this is a test"}],
+  model: :"gpt-5.1"
+)
 
 # You can also splat a full Params class:
 params = OpenAI::Chat::CompletionCreateParams.new(
   messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
-  model: :"gpt-5"
+  model: :"gpt-5.1"
 )
 openai.chat.completions.create(**params)
 ```
@@ -464,25 +470,25 @@ openai.chat.completions.create(**params)
 Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols", which are always primitives at runtime:
 
 ```ruby
-# :minimal
-puts(OpenAI::ReasoningEffort::MINIMAL)
+# :"in-memory"
+puts(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY)
 
-# Revealed type: `T.all(OpenAI::ReasoningEffort, Symbol)`
-T.reveal_type(OpenAI::ReasoningEffort::MINIMAL)
+# Revealed type: `T.all(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention, Symbol)`
+T.reveal_type(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY)
 ```
 
 Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:
 
 ```ruby
 # Using the enum constants preserves the tagged type information:
 openai.chat.completions.create(
-  reasoning_effort: OpenAI::ReasoningEffort::MINIMAL,
+  prompt_cache_retention: OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY,
   #
 )
 
 # Literal values are also permissible:
 openai.chat.completions.create(
-  reasoning_effort: :minimal,
+  prompt_cache_retention: :"in-memory",
   #
 )
 ```

lib/openai.rb

Lines changed: 9 additions & 0 deletions
@@ -528,15 +528,19 @@
 require_relative "openai/models/response_format_text"
 require_relative "openai/models/response_format_text_grammar"
 require_relative "openai/models/response_format_text_python"
+require_relative "openai/models/responses/apply_patch_tool"
 require_relative "openai/models/responses/computer_tool"
 require_relative "openai/models/responses/custom_tool"
 require_relative "openai/models/responses/easy_input_message"
 require_relative "openai/models/responses/file_search_tool"
+require_relative "openai/models/responses/function_shell_tool"
 require_relative "openai/models/responses/function_tool"
 require_relative "openai/models/responses/input_item_list_params"
 require_relative "openai/models/responses/input_token_count_params"
 require_relative "openai/models/responses/input_token_count_response"
 require_relative "openai/models/responses/response"
+require_relative "openai/models/responses/response_apply_patch_tool_call"
+require_relative "openai/models/responses/response_apply_patch_tool_call_output"
 require_relative "openai/models/responses/response_audio_delta_event"
 require_relative "openai/models/responses/response_audio_done_event"
 require_relative "openai/models/responses/response_audio_transcript_delta_event"
@@ -576,6 +580,9 @@
 require_relative "openai/models/responses/response_function_call_arguments_done_event"
 require_relative "openai/models/responses/response_function_call_output_item"
 require_relative "openai/models/responses/response_function_call_output_item_list"
+require_relative "openai/models/responses/response_function_shell_call_output_content"
+require_relative "openai/models/responses/response_function_shell_tool_call"
+require_relative "openai/models/responses/response_function_shell_tool_call_output"
 require_relative "openai/models/responses/response_function_tool_call_item"
 require_relative "openai/models/responses/response_function_tool_call_output_item"
 require_relative "openai/models/responses/response_function_web_search"
@@ -634,10 +641,12 @@
 require_relative "openai/models/responses/response_web_search_call_searching_event"
 require_relative "openai/models/responses/tool"
 require_relative "openai/models/responses/tool_choice_allowed"
+require_relative "openai/models/responses/tool_choice_apply_patch"
 require_relative "openai/models/responses/tool_choice_custom"
 require_relative "openai/models/responses/tool_choice_function"
 require_relative "openai/models/responses/tool_choice_mcp"
 require_relative "openai/models/responses/tool_choice_options"
+require_relative "openai/models/responses/tool_choice_shell"
 require_relative "openai/models/responses/tool_choice_types"
 require_relative "openai/models/responses/web_search_preview_tool"
 require_relative "openai/models/responses/web_search_tool"

lib/openai/internal/type/enum.rb

Lines changed: 6 additions & 6 deletions
@@ -19,23 +19,23 @@ module Type
 # @example
 #   # `chat_model` is a `OpenAI::ChatModel`
 #   case chat_model
-#   when OpenAI::ChatModel::GPT_5
+#   when OpenAI::ChatModel::GPT_5_1
 #     # ...
-#   when OpenAI::ChatModel::GPT_5_MINI
+#   when OpenAI::ChatModel::GPT_5_1_2025_11_13
 #     # ...
-#   when OpenAI::ChatModel::GPT_5_NANO
+#   when OpenAI::ChatModel::GPT_5_1_CODEX
 #     # ...
 #   else
 #     puts(chat_model)
 #   end
 #
 # @example
 #   case chat_model
-#   in :"gpt-5"
+#   in :"gpt-5.1"
 #     # ...
-#   in :"gpt-5-mini"
+#   in :"gpt-5.1-2025-11-13"
 #     # ...
-#   in :"gpt-5-nano"
+#   in :"gpt-5.1-codex"
 #     # ...
 #   else
 #     puts(chat_model)

lib/openai/models/batch_create_params.rb

Lines changed: 9 additions & 6 deletions
@@ -16,9 +16,10 @@ class BatchCreateParams < OpenAI::Internal::Type::BaseModel
 
   # @!attribute endpoint
   #   The endpoint to be used for all requests in the batch. Currently
-  #   `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
-  #   are supported. Note that `/v1/embeddings` batches are also restricted to a
-  #   maximum of 50,000 embedding inputs across all requests in the batch.
+  #   `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
+  #   and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
+  #   restricted to a maximum of 50,000 embedding inputs across all requests in the
+  #   batch.
   #
   #   @return [Symbol, OpenAI::Models::BatchCreateParams::Endpoint]
   required :endpoint, enum: -> { OpenAI::BatchCreateParams::Endpoint }
@@ -83,16 +84,18 @@ module CompletionWindow
   end
 
   # The endpoint to be used for all requests in the batch. Currently
-  # `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
-  # are supported. Note that `/v1/embeddings` batches are also restricted to a
-  # maximum of 50,000 embedding inputs across all requests in the batch.
+  # `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
+  # and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
+  # restricted to a maximum of 50,000 embedding inputs across all requests in the
+  # batch.
  module Endpoint
    extend OpenAI::Internal::Type::Enum
 
    V1_RESPONSES = :"/v1/responses"
    V1_CHAT_COMPLETIONS = :"/v1/chat/completions"
    V1_EMBEDDINGS = :"/v1/embeddings"
    V1_COMPLETIONS = :"/v1/completions"
+   V1_MODERATIONS = :"/v1/moderations"
 
    # @!method self.values
    #   @return [Array<Symbol>]
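The practical effect of the `Endpoint` change above can be sketched with a plain-Ruby membership check; the `batch_endpoint_supported?` helper and the constant name are hypothetical, not part of the SDK:

```ruby
# Endpoints accepted for batch requests after this commit; the list mirrors
# the Endpoint enum above, with /v1/moderations newly added.
SUPPORTED_BATCH_ENDPOINTS = [
  :"/v1/responses",
  :"/v1/chat/completions",
  :"/v1/embeddings",
  :"/v1/completions",
  :"/v1/moderations"
].freeze

# Hypothetical helper: true when the given path may be used as the batch
# endpoint (accepts a String or Symbol).
def batch_endpoint_supported?(endpoint)
  SUPPORTED_BATCH_ENDPOINTS.include?(endpoint.to_sym)
end

puts batch_endpoint_supported?("/v1/moderations") # prints "true"
```

Note that the enum symbols are the literal URL paths, so converting an incoming path string with `to_sym` is enough for the comparison.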

lib/openai/models/beta/assistant_create_params.rb

Lines changed: 9 additions & 5 deletions
@@ -51,12 +51,16 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
   # @!attribute reasoning_effort
   #   Constrains effort on reasoning for
   #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-  #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-  #   effort can result in faster responses and fewer tokens used on reasoning in a
-  #   response.
+  #   supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+  #   reasoning effort can result in faster responses and fewer tokens used on
+  #   reasoning in a response.
   #
-  #   Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-  #   effort.
+  #   - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+  #     reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+  #     calls are supported for all reasoning values in gpt-5.1.
+  #   - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+  #     support `none`.
+  #   - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
   #
   #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
   optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
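The defaults described in the updated doc comment can be condensed into a small lookup. This `default_reasoning_effort` helper is illustrative only and not part of the SDK; it encodes just the three rules stated in the comment above:

```ruby
# Illustrative mapping of the documented reasoning-effort defaults:
#   gpt-5.1 family -> :none, gpt-5-pro -> :high, earlier models -> :medium
def default_reasoning_effort(model)
  case model.to_s
  when /\Agpt-5\.1/ then :none  # gpt-5.1, gpt-5.1-codex, dated snapshots, ...
  when "gpt-5-pro"  then :high  # the only value gpt-5-pro supports
  else :medium                  # models before gpt-5.1 (no :none support)
  end
end

puts default_reasoning_effort(:"gpt-5.1") # prints "none"
```

Passing `reasoning_effort: nil` (the SDK's `nil?: true` attribute) leaves the choice to the server, which applies exactly these per-model defaults.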

lib/openai/models/beta/assistant_update_params.rb

Lines changed: 9 additions & 5 deletions
@@ -51,12 +51,16 @@ class AssistantUpdateParams < OpenAI::Internal::Type::BaseModel
   # @!attribute reasoning_effort
   #   Constrains effort on reasoning for
   #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-  #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-  #   effort can result in faster responses and fewer tokens used on reasoning in a
-  #   response.
+  #   supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+  #   reasoning effort can result in faster responses and fewer tokens used on
+  #   reasoning in a response.
   #
-  #   Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-  #   effort.
+  #   - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+  #     reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+  #     calls are supported for all reasoning values in gpt-5.1.
+  #   - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+  #     support `none`.
+  #   - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
   #
   #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
   optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

lib/openai/models/beta/threads/run_create_params.rb

Lines changed: 10 additions & 6 deletions
@@ -109,12 +109,16 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel
   # @!attribute reasoning_effort
   #   Constrains effort on reasoning for
   #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-  #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-  #   effort can result in faster responses and fewer tokens used on reasoning in a
-  #   response.
-  #
-  #   Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-  #   effort.
+  #   supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+  #   reasoning effort can result in faster responses and fewer tokens used on
+  #   reasoning in a response.
+  #
+  #   - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+  #     reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+  #     calls are supported for all reasoning values in gpt-5.1.
+  #   - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+  #     support `none`.
+  #   - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
   #
   #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
   optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

lib/openai/models/chat/completion_create_params.rb

Lines changed: 37 additions & 6 deletions
@@ -190,15 +190,30 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
   #   @return [String, nil]
   optional :prompt_cache_key, String
 
+  # @!attribute prompt_cache_retention
+  #   The retention policy for the prompt cache. Set to `24h` to enable extended
+  #   prompt caching, which keeps cached prefixes active for longer, up to a maximum
+  #   of 24 hours.
+  #   [Learn more](https://platform.openai.com/docs/guides/prompt-caching#prompt-cache-retention).
+  #
+  #   @return [Symbol, OpenAI::Models::Chat::CompletionCreateParams::PromptCacheRetention, nil]
+  optional :prompt_cache_retention,
+           enum: -> { OpenAI::Chat::CompletionCreateParams::PromptCacheRetention },
+           nil?: true
+
   # @!attribute reasoning_effort
   #   Constrains effort on reasoning for
   #   [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-  #   supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-  #   effort can result in faster responses and fewer tokens used on reasoning in a
-  #   response.
+  #   supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+  #   reasoning effort can result in faster responses and fewer tokens used on
+  #   reasoning in a response.
   #
-  #   Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-  #   effort.
+  #   - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+  #     reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+  #     calls are supported for all reasoning values in gpt-5.1.
+  #   - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+  #     support `none`.
+  #   - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
   #
   #   @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
   optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -368,7 +383,7 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
   #   @return [OpenAI::Models::Chat::CompletionCreateParams::WebSearchOptions, nil]
   optional :web_search_options, -> { OpenAI::Chat::CompletionCreateParams::WebSearchOptions }
 
-  # @!method initialize(messages:, model:, audio: nil, frequency_penalty: nil, function_call: nil, functions: nil, logit_bias: nil, logprobs: nil, max_completion_tokens: nil, max_tokens: nil, metadata: nil, modalities: nil, n: nil, parallel_tool_calls: nil, prediction: nil, presence_penalty: nil, prompt_cache_key: nil, reasoning_effort: nil, response_format: nil, safety_identifier: nil, seed: nil, service_tier: nil, stop: nil, store: nil, stream_options: nil, temperature: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, user: nil, verbosity: nil, web_search_options: nil, request_options: {})
+  # @!method initialize(messages:, model:, audio: nil, frequency_penalty: nil, function_call: nil, functions: nil, logit_bias: nil, logprobs: nil, max_completion_tokens: nil, max_tokens: nil, metadata: nil, modalities: nil, n: nil, parallel_tool_calls: nil, prediction: nil, presence_penalty: nil, prompt_cache_key: nil, prompt_cache_retention: nil, reasoning_effort: nil, response_format: nil, safety_identifier: nil, seed: nil, service_tier: nil, stop: nil, store: nil, stream_options: nil, temperature: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, user: nil, verbosity: nil, web_search_options: nil, request_options: {})
   #   Some parameter documentations has been truncated, see
   #   {OpenAI::Models::Chat::CompletionCreateParams} for more details.
   #
@@ -406,6 +421,8 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
   #
   #   @param prompt_cache_key [String] Used by OpenAI to cache responses for similar requests to optimize your cache hi
   #
+  #   @param prompt_cache_retention [Symbol, OpenAI::Models::Chat::CompletionCreateParams::PromptCacheRetention, nil] The retention policy for the prompt cache. Set to `24h` to enable extended promp
+  #
   #   @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
   #
   #   @param response_format [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::StructuredOutput::JsonSchemaConverter, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
@@ -551,6 +568,20 @@ module Modality
   #   @return [Array<Symbol>]
   end
 
+  # The retention policy for the prompt cache. Set to `24h` to enable extended
+  # prompt caching, which keeps cached prefixes active for longer, up to a maximum
+  # of 24 hours.
+  # [Learn more](https://platform.openai.com/docs/guides/prompt-caching#prompt-cache-retention).
+  module PromptCacheRetention
+    extend OpenAI::Internal::Type::Enum
+
+    IN_MEMORY = :"in-memory"
+    PROMPT_CACHE_RETENTION_24H = :"24h"
+
+    # @!method self.values
+    #   @return [Array<Symbol>]
+  end
+
  # An object specifying the format that the model must output.
  #
  # Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
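Taken together, the new parameter slots into a chat-completion request like any other optional field. A sketch of the resulting payload hash follows; the client call itself is commented out because it needs a live `openai` client and API key:

```ruby
# Request payload using the new prompt_cache_retention parameter: :"24h"
# opts into extended prompt caching, :"in-memory" is the other enum value,
# and omitting the key (or passing nil) keeps the server default.
params = {
  messages: [{role: "user", content: "Say this is a test"}],
  model: :"gpt-5.1",
  prompt_cache_retention: :"24h"
}

# openai.chat.completions.create(**params)
puts params[:prompt_cache_retention] # prints "24h"
```

Because the attribute is declared with `nil?: true`, `prompt_cache_retention: nil` is also a valid, explicitly-unset value.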

lib/openai/models/chat_model.rb

Lines changed: 5 additions & 0 deletions
@@ -5,6 +5,11 @@ module Models
   module ChatModel
     extend OpenAI::Internal::Type::Enum
 
+    GPT_5_1 = :"gpt-5.1"
+    GPT_5_1_2025_11_13 = :"gpt-5.1-2025-11-13"
+    GPT_5_1_CODEX = :"gpt-5.1-codex"
+    GPT_5_1_MINI = :"gpt-5.1-mini"
+    GPT_5_1_CHAT_LATEST = :"gpt-5.1-chat-latest"
     GPT_5 = :"gpt-5"
     GPT_5_MINI = :"gpt-5-mini"
     GPT_5_NANO = :"gpt-5-nano"
0 commit comments
