1 change: 1 addition & 0 deletions docs/install.md
@@ -55,6 +55,7 @@ pip/uv-add "pydantic-ai-slim[openai]"
* `tavily` - installs `tavily-python` [PyPI ↗](https://pypi.org/project/tavily-python){:target="_blank"}
* `cli` - installs `rich` [PyPI ↗](https://pypi.org/project/rich){:target="_blank"}, `prompt-toolkit` [PyPI ↗](https://pypi.org/project/prompt-toolkit){:target="_blank"}, and `argcomplete` [PyPI ↗](https://pypi.org/project/argcomplete){:target="_blank"}
* `mcp` - installs `mcp` [PyPI ↗](https://pypi.org/project/mcp){:target="_blank"}
* `fastmcp` - installs `fastmcp` [PyPI ↗](https://pypi.org/project/fastmcp){:target="_blank"}
* `a2a` - installs `fasta2a` [PyPI ↗](https://pypi.org/project/fasta2a){:target="_blank"}
* `ag-ui` - installs `ag-ui-protocol` [PyPI ↗](https://pypi.org/project/ag-ui-protocol){:target="_blank"} and `starlette` [PyPI ↗](https://pypi.org/project/starlette){:target="_blank"}
* `dbos` - installs [`dbos`](durable_execution/dbos.md) [PyPI ↗](https://pypi.org/project/dbos){:target="_blank"}
66 changes: 66 additions & 0 deletions docs/toolsets.md
@@ -663,6 +663,72 @@ If you want to reuse a network connection or session across tool listings and calls

See the [MCP Client](./mcp/client.md) documentation for how to use MCP servers with Pydantic AI.

### FastMCP Tools {#fastmcp-tools}
Collaborator: Note to self: before merging this, see if it makes sense to move this to a separate doc that can be listed in the "MCP" section in the sidebar.


The [FastMCP](https://fastmcp.dev) client can also be used with Pydantic AI via the provided [`FastMCPToolset`][pydantic_ai.toolsets.fastmcp.FastMCPToolset] [toolset](toolsets.md).

To use the `FastMCPToolset`, you will need to install `pydantic-ai-slim[fastmcp]`.

A `FastMCPToolset` can be created from:

- A FastMCP Client: `FastMCPToolset(client=Client(...))`
- A FastMCP Transport: `FastMCPToolset(StdioTransport(command='uv', args=['run', 'mcp-run-python', 'stdio']))`
- A FastMCP Server: `FastMCPToolset(FastMCP('my_server'))`
- An HTTP URL: `FastMCPToolset('http://localhost:8000/mcp')`
- An SSE URL: `FastMCPToolset('http://localhost:8000/sse')`
- A Python Script: `FastMCPToolset('my_server.py')`
- A Node.js Script: `FastMCPToolset('my_server.js')`
- A JSON MCP Configuration: `FastMCPToolset({'mcpServers': {'my_server': {'command': 'python', 'args': ['-c', 'print("test")']}}})`

Connecting your agent to an HTTP MCP Server is as simple as:

```python {test="skip"}
from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

toolset = FastMCPToolset('http://localhost:8000/mcp')

agent = Agent('openai:gpt-5', toolsets=[toolset])
```
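Once the toolset is attached, the agent can be run as usual. Here is a minimal usage sketch, assuming an MCP server is actually listening at `http://localhost:8000/mcp` and a model provider API key is configured:

```python {test="skip"}
from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

# Assumes an MCP server is listening at this URL.
toolset = FastMCPToolset('http://localhost:8000/mcp')
agent = Agent('openai:gpt-5', toolsets=[toolset])

result = agent.run_sync('Which tools do you have available?')
print(result.output)
```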

You can also create a toolset from a JSON MCP configuration. FastMCP supports additional capabilities on top of the MCP specification, such as tool transformation in the MCP configuration, which you can take advantage of through the `FastMCPToolset`.

```python {test="skip"}
from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

mcp_config = {
    'mcpServers': {
        'time_mcp_server': {
            'command': 'uvx',
            'args': ['mcp-server-time'],
        }
    }
}

toolset = FastMCPToolset(mcp_config)

agent = Agent('openai:gpt-5', toolsets=[toolset])
```

Toolsets can also be created from a FastMCP Server:

```python {test="skip"}
from fastmcp import FastMCP

from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

fastmcp_server = FastMCP('my_server')


@fastmcp_server.tool()
async def my_tool(a: int, b: int) -> int:
    return a + b

toolset = FastMCPToolset(fastmcp_server)

agent = Agent('openai:gpt-5', toolsets=[toolset])
```
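As listed above, an explicit FastMCP transport works as well. The following is a minimal sketch using a `StdioTransport` to launch `mcp-run-python` over stdio; it assumes `uv` and the `mcp-run-python` package are available locally:

```python {test="skip"}
from fastmcp.client.transports import StdioTransport

from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

# Start the MCP server as a subprocess and communicate with it over stdio.
transport = StdioTransport(command='uv', args=['run', 'mcp-run-python', 'stdio'])
toolset = FastMCPToolset(transport)

agent = Agent('openai:gpt-5', toolsets=[toolset])
```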


### LangChain Tools {#langchain-tools}

If you'd like to use tools or a [toolkit](https://python.langchain.com/docs/concepts/tools/#toolkits) from LangChain's [community tool library](https://python.langchain.com/docs/integrations/tools/) with Pydantic AI, you can use the [`LangChainToolset`][pydantic_ai.ext.langchain.LangChainToolset] which takes a list of LangChain tools. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the LangChain tool, and up to the LangChain tool to raise an error if the arguments are invalid.
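For illustration, a minimal sketch wrapping a single community tool in a `LangChainToolset`; it assumes `langchain-community` and `duckduckgo-search` are installed, and the specific tool is only an example:

```python {test="skip"}
from langchain_community.tools import DuckDuckGoSearchRun

from pydantic_ai import Agent
from pydantic_ai.ext.langchain import LangChainToolset

# Wrap a list of LangChain tools; argument validation is left to the model and the tool itself.
toolset = LangChainToolset([DuckDuckGoSearchRun()])

agent = Agent('openai:gpt-5', toolsets=[toolset])
```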
231 changes: 231 additions & 0 deletions pydantic_ai_slim/pydantic_ai/toolsets/fastmcp.py
@@ -0,0 +1,231 @@
from __future__ import annotations

import base64
from asyncio import Lock
from contextlib import AsyncExitStack
from dataclasses import KW_ONLY, dataclass, field
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal, overload

from fastmcp.client.transports import ClientTransport
from fastmcp.mcp_config import MCPConfig
from fastmcp.server import FastMCP
from mcp.server.fastmcp import FastMCP as FastMCP1Server
from pydantic import AnyUrl
from typing_extensions import Self

from pydantic_ai import messages
from pydantic_ai.exceptions import ModelRetry
from pydantic_ai.tools import AgentDepsT, RunContext, ToolDefinition
from pydantic_ai.toolsets import AbstractToolset
from pydantic_ai.toolsets.abstract import ToolsetTool

try:
    from fastmcp.client import Client
    from fastmcp.exceptions import ToolError
    from mcp.types import (
        AudioContent,
        ContentBlock,
        ImageContent,
        TextContent,
        Tool as MCPTool,
    )

    from pydantic_ai.mcp import TOOL_SCHEMA_VALIDATOR

except ImportError as _import_error:
    raise ImportError(
        'Please install the `fastmcp` package to use the FastMCP toolset, '
        'you can use the `fastmcp` optional group — `pip install "pydantic-ai-slim[fastmcp]"`'
    ) from _import_error


if TYPE_CHECKING:
    from fastmcp.client.client import CallToolResult


FastMCPToolResult = messages.BinaryContent | dict[str, Any] | str | None

ToolErrorBehavior = Literal['model_retry', 'error']


@dataclass
class FastMCPToolset(AbstractToolset[AgentDepsT]):
    """A toolset that uses the FastMCP client to call tools on a local or remote MCP server.

    The toolset can be constructed from a FastMCP client, a FastMCP transport, or any other object from which
    a FastMCP transport can be created. See https://gofastmcp.com/clients/transports for a full list of
    available transports.
    """

    client: Client[Any]
    """The FastMCP client to use.

    It can be constructed from a local or remote MCP server configuration, a transport string, or passed in directly.
    """

    _: KW_ONLY

    tool_error_behavior: Literal['model_retry', 'error'] = field(default='error')
    """The behavior to take when a tool error occurs."""

    max_retries: int = field(default=2)
    """The maximum number of retries to attempt if a tool call fails."""

    _id: str | None = field(default=None)

    @overload
    def __init__(
        self,
        *,
        client: Client[Any],
        max_retries: int = 2,
        tool_error_behavior: Literal['model_retry', 'error'] = 'error',
        id: str | None = None,
    ) -> None: ...

    @overload
    def __init__(
        self,
        transport: ClientTransport
        | FastMCP
        | FastMCP1Server
        | AnyUrl
        | Path
        | MCPConfig
        | dict[str, Any]
        | str
        | None = None,
        *,
        max_retries: int = 2,
        tool_error_behavior: Literal['model_retry', 'error'] = 'error',
        id: str | None = None,
    ) -> None: ...

    def __init__(
        self,
        transport: ClientTransport
        | FastMCP
        | FastMCP1Server
        | AnyUrl
        | Path
        | MCPConfig
        | dict[str, Any]
        | str
        | None = None,
        *,
        client: Client[Any] | None = None,
        max_retries: int = 2,
        tool_error_behavior: Literal['model_retry', 'error'] = 'error',
Collaborator: I'd prefer for the default to be model_retry for consistency with MCPServer.
        id: str | None = None,
    ) -> None:
        if not client and not transport:
            raise ValueError('Either client or transport must be provided')

        if client and transport:
            raise ValueError('Either client or transport must be provided, not both')

        if client:
            self.client = client
        else:
            self.client = Client[Any](transport=transport)
Collaborator: I'm guessing this will raise an error if the dict[str, Any] or str doesn't look like it should?

Contributor Author: Yes, I can add tests for this; the most common error raised will be a ValueError(f"Could not infer a valid transport from: {transport}").

        self._id = id
        self.max_retries = max_retries
        self.tool_error_behavior = tool_error_behavior

        self._enter_lock: Lock = Lock()
        self._running_count: int = 0
        self._exit_stack: AsyncExitStack | None = None

    @property
    def id(self) -> str | None:
        return self._id

    async def __aenter__(self) -> Self:
        async with self._enter_lock:
            # Only open the underlying client on the first concurrent entry.
            if self._running_count == 0 and self.client:
                self._exit_stack = AsyncExitStack()
                await self._exit_stack.enter_async_context(self.client)

            self._running_count += 1

        return self

    async def __aexit__(self, *args: Any) -> bool | None:
        async with self._enter_lock:
            self._running_count -= 1
            # Close the client only when the last concurrent user exits.
            if self._running_count == 0 and self._exit_stack:
                await self._exit_stack.aclose()
                self._exit_stack = None

        return None

    async def get_tools(self, ctx: RunContext[AgentDepsT]) -> dict[str, ToolsetTool[AgentDepsT]]:
        async with self:
            mcp_tools: list[MCPTool] = await self.client.list_tools()

            return {
                tool.name: _convert_mcp_tool_to_toolset_tool(toolset=self, mcp_tool=tool, retries=self.max_retries)
                for tool in mcp_tools
            }

    async def call_tool(
        self, name: str, tool_args: dict[str, Any], ctx: RunContext[AgentDepsT], tool: ToolsetTool[AgentDepsT]
    ) -> Any:
        async with self:
            try:
                call_tool_result: CallToolResult = await self.client.call_tool(name=name, arguments=tool_args)
            except ToolError as e:
                if self.tool_error_behavior == 'model_retry':
                    raise ModelRetry(message=str(e)) from e
                else:
                    raise e

            # If we have structured content, return that
            if call_tool_result.structured_content:
                return call_tool_result.structured_content

            # Otherwise, return the content
            return _map_fastmcp_tool_results(parts=call_tool_result.content)


def _convert_mcp_tool_to_toolset_tool(
    toolset: FastMCPToolset[AgentDepsT],
    mcp_tool: MCPTool,
    retries: int,
) -> ToolsetTool[AgentDepsT]:
    """Convert an MCP tool to a toolset tool."""
    return ToolsetTool[AgentDepsT](
        tool_def=ToolDefinition(
            name=mcp_tool.name,
            description=mcp_tool.description,
            parameters_json_schema=mcp_tool.inputSchema,
            metadata={
                'meta': mcp_tool.meta,
                'annotations': mcp_tool.annotations.model_dump() if mcp_tool.annotations else None,
                'output_schema': mcp_tool.outputSchema or None,
            },
        ),
        toolset=toolset,
        max_retries=retries,
        args_validator=TOOL_SCHEMA_VALIDATOR,
    )


def _map_fastmcp_tool_results(parts: list[ContentBlock]) -> list[FastMCPToolResult] | FastMCPToolResult:
    """Map FastMCP tool results to toolset tool results."""
    mapped_results = [_map_fastmcp_tool_result(part) for part in parts]

    if len(mapped_results) == 1:
        return mapped_results[0]

    return mapped_results


def _map_fastmcp_tool_result(part: ContentBlock) -> FastMCPToolResult:
    if isinstance(part, TextContent):
        return part.text

    if isinstance(part, ImageContent | AudioContent):
        return messages.BinaryContent(data=base64.b64decode(part.data), media_type=part.mimeType)

    msg = f'Unsupported/Unknown content block type: {type(part)}'  # pragma: no cover
Collaborator: What are the other types? In mcp.py, we have to specifically account for embedded resources or resource links (which we're possibly doing incorrectly: #2288)

Contributor Author: Probably just those two, will take a look
    raise ValueError(msg)  # pragma: no cover
2 changes: 2 additions & 0 deletions pydantic_ai_slim/pyproject.toml
@@ -88,6 +88,8 @@ cli = [
]
# MCP
mcp = ["mcp>=1.12.3"]
# FastMCP
fastmcp = ["fastmcp>=2.12.0"]
# Evals
evals = ["pydantic-evals=={{ version }}"]
# A2A
3 changes: 2 additions & 1 deletion pyproject.toml
@@ -46,7 +46,7 @@ requires-python = ">=3.10"

[tool.hatch.metadata.hooks.uv-dynamic-versioning]
dependencies = [
"pydantic-ai-slim[openai,vertexai,google,groq,anthropic,mistral,cohere,bedrock,huggingface,cli,mcp,evals,ag-ui,retries,temporal,logfire]=={{ version }}",
"pydantic-ai-slim[openai,vertexai,google,groq,anthropic,mistral,cohere,bedrock,huggingface,cli,mcp,fastmcp,evals,ag-ui,retries,temporal,logfire]=={{ version }}",
]

[tool.hatch.metadata.hooks.uv-dynamic-versioning.optional-dependencies]
@@ -90,6 +90,7 @@ dev = [
"coverage[toml]>=7.10.3",
"dirty-equals>=0.9.0",
"duckduckgo-search>=7.0.0",
"fastmcp>=2.12.0",
Collaborator: I don't think we need it here if it's already coming in through pydantic-ai-slim[...,fastmcp,...] above
"inline-snapshot>=0.19.3",
"pytest>=8.3.3",
"pytest-examples>=0.0.18",