@vcshih commented Jul 21, 2025

No description provided.

github-actions bot and others added 30 commits July 19, 2025 08:47
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
---
[//]: # (BEGIN SAPLING FOOTER)
* #1198
* __->__ #1185
Made a mistake here - we were ignoring the settings passed into the
Runner init, and only using the override settings passed to run().
This pull request adds a new docs script that generates missing
`ref/**/*.md` files. The script runs as part of the `make build-docs`
command. The script does not:
- overwrite existing files
- create files for `_XXX.py` and `__init__.py`

Note that the title is derived from the module name, e.g. `tool_context` becomes `Tool
Context`. `openai_provider` becomes `Openai Provider`, so some titles
need a little manual adjustment.
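For illustration, here is a rough sketch of the title derivation described above (the script's actual logic may differ):

```python
# A naive snake_case -> Title Case transform cannot know that "openai" should render
# as "OpenAI", which is why some generated titles need manual touch-ups.
def derive_title(module_name: str) -> str:
    return module_name.replace("_", " ").title()

print(derive_title("tool_context"))     # Tool Context
print(derive_title("openai_provider"))  # Openai Provider (manually adjust to "OpenAI Provider")
```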

The direct need is to add `tool_context.md` for
#1043 but it should
be useful for future maintenance.
Used the wrong action 🤦
Action isn't published yet, so gotta do this
1. **Grammar fix**: Remove duplicate "can" in the sentence about
configuring trace names
2. **Correct default value**: Update "Agent trace" to "Agent workflow"
to match the actual default value in the codebase
Automated update of translated documentation
## Summary
- document `agents.realtime.model` so the RealtimeModel link works
- include the new file in the documentation navigation

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`


------
https://chatgpt.com/codex/tasks/task_i_687fadfee88883219240b56e5abba76a
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
## Summary
- Added LiteLLM to the acknowledgements section in README.md
- Recognized LiteLLM as a unified interface for 100+ LLMs, which aligns
with the SDK's provider-agnostic approach

## Test plan
- [x] Verify README renders correctly
- [x] Ensure link to LiteLLM repository is functional
The current script gives

```
Traceback (most recent call last):
  File "/Users/xmxd289/code/openai-agents-python/examples/reasoning_content/main.py", line 19, in <module>
    from agents.types import ResponseOutputRefusal, ResponseOutputText  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'agents.types'
```

because it should be

`from openai.types.responses import ResponseOutputRefusal,
ResponseOutputText`

rather than

`from agents.types import ResponseOutputRefusal, ResponseOutputText`

---------

Co-authored-by: Michelangelo D'Agostino <[email protected]>
Will need this for a followup.

---
[//]: # (BEGIN SAPLING FOOTER)
* #1243
* #1242
* __->__ #1235
…behavior (#1233)

The Trace class was using @abc.abstractmethod decorators without
inheriting
from abc.ABC, which meant the abstract methods weren't enforced.

This change makes the class properly abstract while maintaining all
existing functionality
since no code directly instantiates Trace() - all usage goes through the
concrete implementations NoOpTrace and TraceImpl.
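For context, a minimal standalone illustration (not the SDK's actual code) of the behavior this fixes: `@abc.abstractmethod` only blocks instantiation when the class uses `ABCMeta`, e.g. by inheriting from `abc.ABC`.

```python
import abc


class WithoutABC:
    @abc.abstractmethod
    def export(self) -> None: ...


class WithABC(abc.ABC):
    @abc.abstractmethod
    def export(self) -> None: ...


WithoutABC()  # silently allowed, even though export() is marked abstract
try:
    WithABC()
except TypeError as e:
    print(e)  # Can't instantiate abstract class WithABC with abstract method export
```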
Adds LangDB AI Gateway to the External tracing processors list so
developers can stream Agent‑SDK traces directly into LangDB’s
dashboards.

## Highlights

- End‑to‑end observability of every agent step, tool invocation, and
guardrail.
- Access to 350+ LLM models through a single OpenAI‑compatible endpoint.

- Quick integration: `pip install "pylangdb[openai]"`
```python
import os

from openai import AsyncOpenAI
from pylangdb.openai import init

from agents import set_default_openai_client

# Initialize LangDB tracing before creating the client
init()

client = AsyncOpenAI(
    api_key=os.environ["LANGDB_API_KEY"],
    base_url=os.environ["LANGDB_API_BASE_URL"],
    default_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)
set_default_openai_client(client)
```
- Live demo Thread:
https://app.langdb.ai/sharing/threads/53b87631-de7f-431a-a049-48556f899b4d
![LangDB tracing dashboard screenshot](https://github.com/user-attachments/assets/075538fb-c1af-48e8-95fd-ff3d729ba37d)

Fixes #1222

---------

Co-authored-by: mutahirshah11 <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Updated the error message to replace the placeholder `your_type` with
`YourType`, removing the underscore and adopting PascalCase, which
aligns with standard Python type naming conventions.
The Financial Research Agent example is broken because it uses the model `gpt-4.5-preview-2025-02-27`. This model has been [removed and is no longer available](https://platform.openai.com/docs/deprecations#2025-04-14-gpt-4-5-preview).

This change follows the recommendation to replace gpt-4.5-preview with gpt-4.1.
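A minimal illustration of the swap (the agent name and instructions here are placeholders, not taken from the example):

```python
from agents import Agent

# Previously: model="gpt-4.5-preview-2025-02-27" (removed)
planner_agent = Agent(
    name="FinancialPlannerAgent",
    instructions="Plan the financial research steps.",
    model="gpt-4.1",
)
```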

---------
Co-authored-by: Kazuhiro Sera <[email protected]>
#1098)

### 1. Description

This PR fixes an issue where reasoning content from models accessed via
LiteLLM was not being correctly parsed into the `ChatCompletionMessage`
format. This was particularly noticeable when using reasoning models.

### 2. Context

I am using the `openai-agents-python` library in my project, and it has
been incredibly helpful. Thank you for building such a great tool!

My setup uses `litellm` to interface with `gemini-2.5-pro`. I noticed
that while the agent could receive a response, the reasoning (thinking)
from the Gemini model was lost during the conversion process from the
LiteLLM response format to the OpenAI `ChatCompletionMessage` object.

I saw that PR #871 made progress on a similar issue, but it seems the
specific response structure from LiteLLM still requires a small
adaptation. This fix adds the necessary logic to ensure that these
responses are handled.

**Relates to:** #871

### 3. Key Changes

- `LitellmConverter.convert_message_to_openai`: add `reasoning_content`
- `Converter.items_to_messages`: just pass the reasoning item through
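For context, a minimal illustration of where the reasoning content surfaces on the LiteLLM side (the model string and prompt are assumptions, not taken from this PR):

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-2.5-pro",  # assumed provider/model string
    messages=[{"role": "user", "content": "Briefly: why is the sky blue?"}],
)
message = response.choices[0].message
print(message.content)
# Reasoning models routed through LiteLLM expose their thinking here; this is the
# field the conversion now preserves instead of dropping.
print(getattr(message, "reasoning_content", None))
```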
serialx and others added 30 commits September 23, 2025 07:32
This pull request aims to resolve #556
## Background 

Currently, the `RunHooks` lifecycle (`on_tool_start`, `on_tool_end`)
exposes the `Tool` and `ToolContext`, but does not include the actual
arguments passed to the tool call.

resolves #939

## Solution

This implementation is inspired by [PR
#1598](#1598).

* Add a new `tool_arguments` field to `ToolContext` and populate it via
`from_agent_context` with `tool_call.arguments`.
* Update `lifecycle_example.py` to demonstrate `tool_arguments` in hooks (a rough sketch follows after this list).
* Unlike the proposal in [PR
#253](#253), this
solution is not expected to introduce breaking changes, making it easier
to adopt.
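A rough sketch of reading the new field from a hook, assuming `on_tool_start`/`on_tool_end` receive the `ToolContext` described above (import paths may differ):

```python
from agents import Agent, RunHooks, Tool
from agents.tool_context import ToolContext


class ArgumentLoggingHooks(RunHooks):
    async def on_tool_start(self, context: ToolContext, agent: Agent, tool: Tool) -> None:
        # tool_arguments carries the arguments of the current tool call
        print(f"{tool.name} called with: {context.tool_arguments}")

    async def on_tool_end(self, context: ToolContext, agent: Agent, tool: Tool, result: str) -> None:
        print(f"{tool.name} returned: {result}")
```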
Update the user agent override context var to override headers generally
instead of just the User-Agent header. This allows us to pass in rich header
info from other OpenAI SDKs running alongside the Agents SDK.
- This PR was started from [PR 1606: Tool
Guardrails](#1606)
- It adds input and output guardrails at the tool level, which can
trigger `ToolInputGuardrailTripwireTriggered` and
`ToolOutputGuardrailTripwireTriggered` exceptions (a minimal handling sketch follows below)
- It includes updated documentation, a runnable example, and unit tests
- `make check` and unit tests all pass
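For illustration, a minimal handling sketch (the exception import path is an assumption; adjust to the SDK's actual exports):

```python
from agents import Agent, Runner
from agents.exceptions import (  # assumed location of the new exceptions
    ToolInputGuardrailTripwireTriggered,
    ToolOutputGuardrailTripwireTriggered,
)


async def run_safely(agent: Agent, user_input: str):
    try:
        return await Runner.run(agent, user_input)
    except ToolInputGuardrailTripwireTriggered:
        print("A tool input guardrail tripped before the tool ran.")
    except ToolOutputGuardrailTripwireTriggered:
        print("A tool output guardrail tripped on the tool's result.")
```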

## Edits since last review:
- Extracted nested tool running logic in `_run_impl.py`
- Added rejecting tool call or tool call output and returning a message
to the model (rather than only raising an exception)
- Added the tool guardrail results to the `RunResult`
- Removed docs
Co-authored-by: Kazuhiro Sera <[email protected]>