
Commit 471eb67

Slim initial python API docs for tools and chains, closes #1022
1 parent 92b0c6b

File tree: docs/python-api.md, docs/tools.md

2 files changed: +47 -1 lines changed

docs/python-api.md

Lines changed: 44 additions & 0 deletions
@@ -45,6 +45,50 @@ If you have set an `OPENAI_API_KEY` environment variable you can omit the `model.

Calling `llm.get_model()` with an invalid model ID will raise a `llm.UnknownModelError` exception.
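
A minimal sketch of handling that error, assuming only the `llm.get_model()` and `llm.UnknownModelError` names shown above:

```python
import llm

try:
    model = llm.get_model("not-a-real-model")  # hypothetical invalid model ID
except llm.UnknownModelError as ex:
    # Report the problem (or fall back to a default model) instead of crashing
    print(f"Unknown model: {ex}")
```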

(python-api-tools)=

### Tools

{ref}`Tools <tools>` are functions that can be executed by the model as part of a chain of responses.

You can define tools in Python code - with a docstring to describe what they do - and then pass them to the `model.prompt()` method using the `tools=` keyword argument. If the model decides to request a tool call, the `response.tool_calls()` method shows what the model wants to execute:

```python
import llm

def upper(text: str) -> str:
    """Convert text to uppercase."""
    return text.upper()

model = llm.get_model("gpt-4.1-mini")
response = model.prompt("Convert panda to upper", tools=[upper])
tool_calls = response.tool_calls()
# [ToolCall(name='upper', arguments={'text': 'panda'}, tool_call_id='...')]
```

You can call `response.execute_tool_calls()` to execute those calls and get back the results:
```python
tool_results = response.execute_tool_calls()
# [ToolResult(name='upper', output='PANDA', tool_call_id='...')]
```
To pass the results of the tool calls back to the model, use the `model.chain()` utility method:
```python
chain_response = model.chain(
    "Convert panda to upper",
    tools=[upper],
)
print(chain_response.text())
# The word "panda" converted to uppercase is "PANDA".
```
You can also loop through the `model.chain()` response to get a stream of tokens, like this:
```python
for chunk in model.chain(
    "Convert panda to upper",
    tools=[upper],
):
    print(chunk, end="", flush=True)
```
This will stream each response in the chain in turn as it is generated.

(python-api-system-prompts)=

### System prompts

docs/tools.md

Lines changed: 3 additions & 1 deletion
@@ -20,4 +20,6 @@ In LLM every tool is defined as a Python function. The function can take any n

Tool functions should include a docstring that describes what the function does. This docstring will become the description that is passed to the model.

The Python API can accept functions directly. The command-line interface has two ways for tools to be defined: via plugins that implement the {ref}`register_tools() plugin hook <plugin-hooks-register-tools>`, or directly on the command-line using the `--functions` argument to specify a block of Python code defining one or more functions - or a path to a Python file containing the same.

You can use tools {ref}`with the LLM command-line tool <usage-tools>` or {ref}`with the Python API <python-api-tools>`.
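
As a minimal sketch of the Python side of this, here is a hypothetical `count_vowels` tool whose docstring supplies the description the model sees, passed directly via the `tools=` argument described above:

```python
import llm

def count_vowels(text: str) -> int:
    """Count the number of vowels in the supplied text."""
    # The docstring above becomes the tool description passed to the model
    return sum(1 for character in text.lower() if character in "aeiou")

model = llm.get_model("gpt-4.1-mini")
response = model.chain("How many vowels are in 'pelican'?", tools=[count_vowels])
print(response.text())
```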
