Commit 829d8c0

Merge pull request #1207 from open-webui/dev

2 parents b564b3e + 6df2053 commit 829d8c0

24 files changed

Lines changed: 759 additions & 153 deletions


docs/faq.mdx

Lines changed: 10 additions & 0 deletions
@@ -42,6 +42,16 @@ For more details on enterprise solutions and branding customizations, [click her

**A:** You can access the **File Manager** by going to **Settings > Data Controls > Manage Files > Manage**. This dashboard allows you to search through all your uploaded documents, view their details, and delete them. Deleting a file here also automatically cleans up any associated Knowledge Base entries and vector embeddings.

+### Q: I get "The prompt is too long" / "context length exceeded" after a while in a chat. How do I fix it?
+
+**A:** This error comes from the **model provider**, not from Open WebUI — the provider counts the tokens of everything you sent (system prompt + the *entire* chat history + attached files + tool calls + your new message) and rejects the request once it exceeds the model's context window. The "prompt" the model sees is the whole conversation, not just your latest message.
+
+Open WebUI intentionally does **not** ship a built-in context trimmer. Every model has a different tokenizer and a different context window, and every deployment wants a different truncation policy (by tokens, by turns, by message count, file-attachments-first, summarize-and-replace, per-model budgets, and so on). There is no single policy that is correct for every user, so we expose the hook instead of choosing one for you.
+
+Context management is done with [filter Functions](/features/extensibility/plugin/functions/filter): `inlet()` receives the full `body["messages"]` on every request and can modify it freely (drop old turns, enforce a turn limit, summarize, trim attachments, etc.). Many community-maintained context filters are available for one-click install on [openwebui.com](https://openwebui.com/) — browse, install, and tune the valves. If none fits, copy the closest one into **Admin Panel → Functions** and edit it.
+
+For the full write-up with examples, see [Context Window / Prompt Too Long](/troubleshooting/context-window).
+
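Editor's note: to make the `inlet()` hook concrete, here is a minimal sketch of a turn-limit context filter. The `Filter`/`Valves`/`inlet` shape follows the documented filter Function structure; the `max_turns` valve and the keep-last-N policy are illustrative choices, not the recommended policy.

```python
from pydantic import BaseModel, Field


class Filter:
    class Valves(BaseModel):
        # Illustrative valve: how many non-system messages to keep
        max_turns: int = Field(default=8, description="Keep only the last N non-system messages")

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict) -> dict:
        messages = body.get("messages", [])
        # Preserve system messages; keep only the newest turns of the rest
        system = [m for m in messages if m.get("role") == "system"]
        rest = [m for m in messages if m.get("role") != "system"]
        body["messages"] = system + rest[-self.valves.max_turns :]
        return body
```

Installed as a filter, this runs on every request before the payload reaches the provider, so the conversation sent upstream never grows past the configured budget.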
### Q: Can I use Open WebUI offline, in air-gapped networks, or in extreme environments like outer space?

**A:** **Yes.** Open WebUI is a self-hosted, **internet-independent AI platform** designed to work in **air-gapped networks**, **remote deployments**, and any environment where cloud-based systems are impractical or impossible. Whether you need to **run an LLM without internet**, deploy a **private AI with no cloud dependency**, or operate a **local AI chatbot offline**, Open WebUI supports all of these out of the box. It runs entirely on local hardware and does not make external calls by default.

docs/features/channels/index.md

Lines changed: 4 additions & 0 deletions
@@ -94,6 +94,10 @@ With [**native function calling**](/features/extensibility/plugin/tools#tool-cal

This removes the need to manually bridge information between private chats and shared channels. The AI does it for you.

+:::tip Community action: Forward to Channel
+If you want a one-click path from a chat message into a channel, the community **[Forward to Channel](https://openwebui.com/posts/b60c1f03-e29c-47c0-862c-3741a382616e)** action adds a button to each assistant message that posts the reply (or a selection) into a channel of your choice. Useful for promoting good answers from private chats into team-visible spaces without copy-paste.
+:::
+
---

## Getting Started

docs/features/chat-conversations/chat-features/chatshare.md

Lines changed: 0 additions & 2 deletions
@@ -46,8 +46,6 @@ Note: You can change the permission level of your shared chats on the community

:::

-Example of a shared chat to the community platform website: https://openwebui.com/c/iamg30/5e3c569f-905e-4d68-a96d-8a99cc65c90f
-
#### Copying a Share Link

When you select `Copy Link`, a unique share link is generated that can be shared with others.

docs/features/chat-conversations/chat-features/code-execution/index.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Open WebUI offers powerful code execution capabilities directly within your chat

## Key Features

-- **Code Interpreter Capability**: Enable models to autonomously write and execute Python code as part of their responses. Works with both Default Mode (XML-based) and Native Mode (tool calling via `execute_code`).
+- **Code Interpreter Capability**: Enable models to autonomously write and execute Python code as part of their responses. Runs via the `execute_code` tool in Native (Agentic) Mode — the only supported tool-calling mode. An older XML-based integration exists for legacy Default Mode but is unsupported; new deployments should use Native Mode.

- **Python Code Execution**: Run Python scripts directly in your browser using Pyodide, or on a server using Jupyter. Supports popular libraries like pandas and matplotlib with no setup required.

docs/features/chat-conversations/chat-features/follow-up-prompts.md

Lines changed: 1 addition & 1 deletion
@@ -44,4 +44,4 @@ Controls what happens when you click a follow-up prompt.

## Regenerating Follow-Ups

-If you want to regenerate follow-up suggestions for a specific response, you can use the [Regenerate Followups](https://openwebui.com/f/silentoplayz/regenerate_followups) action button from the community.
+If you want to regenerate follow-up suggestions for a specific response, you can use the [Regenerate Follow-ups](https://openwebui.com/posts/9b5ac6d6-dfd6-4cad-bc1d-5518b138f22d) action button from the community.

docs/features/chat-conversations/web-search/agentic-search.mdx

Lines changed: 2 additions & 2 deletions
@@ -37,9 +37,9 @@ To unlock these features, your model must support native tool calling and have s
5. **Use a Quality Model**: Ensure you're using a frontier model with strong reasoning capabilities for best results.

:::tip Model Capability, Default Features, and Chat Toggle
-In **Native Mode**, the `search_web` and `fetch_url` tools require both the **Web Search** capability to be enabled *and* **Web Search** to be checked under **Default Features** in the model settings (or toggled on in the chat). If either is missing, the tools will not be injected — even though other builtin tools may still appear.
+In **Native Mode** (the supported mode), the `search_web` and `fetch_url` tools require both the **Web Search** capability to be enabled *and* **Web Search** to be checked under **Default Features** in the model settings (or toggled on in the chat). If either is missing, the tools will not be injected — even though other builtin tools may still appear.

-In **Default Mode** (non-native), the chat toggle controls whether web search is performed via RAG-style injection.
+Default Mode's RAG-style injection behavior is documented here only for legacy deployments. Default Mode is no longer supported; all models should be configured for Native Mode.

**Important**: If you disable the `web_search` capability on a model but use Native Mode, the tools won't be available even if you manually toggle Web Search on in the chat.
:::

docs/features/extensibility/plugin/development/events.mdx

Lines changed: 1 addition & 1 deletion
@@ -357,7 +357,7 @@ While this event can technically be emitted from any plugin type (tools, pipes,
* **Chat Overview**: Favorited messages (pins) are highlighted in the conversation overview, making it easier for users to locate key information later.

#### Example: "Pin Message" Action
-For a practical implementation of this event in a real-world plugin, see the **[Pin Message Action on Open WebUI Community](https://openwebui.com/posts/pin_message_action_143594d1)**. This action demonstrates how to toggle the favorite status in the database and immediately sync the UI using the `chat:message:favorite` event.
+For a practical implementation of this event in a real-world plugin, see the **[Pin Message Action on Open WebUI Community](https://openwebui.com/posts/143594d1-0838-4f9a-9af2-b94d2952f7ba)**. This action demonstrates how to toggle the favorite status in the database and immediately sync the UI using the `chat:message:favorite` event.

---

docs/features/extensibility/plugin/development/rich-ui.mdx

Lines changed: 13 additions & 0 deletions
@@ -247,6 +247,19 @@ If your Rich UI embed needs to trigger downloads, interact with Open WebUI's fro
As an alternative for ephemeral interactions that need full page access, consider using the [`execute` event](/features/extensibility/plugin/development/events#execute-works-with-both-__event_call__-and-__event_emitter__) instead, which runs unsandboxed in the main page context.
:::

+:::tip Community Showcase: Streaming Rich UI with same-origin
+If you want to see how far Rich UI can go when same-origin is enabled, take a look at the community **[Inline Visualizer v2](https://github.com/Classic298/open-webui-plugins)** tool (also on the community site via the [Show-and-tell discussion](https://github.com/open-webui/open-webui/discussions/23901)).
+
+It demonstrates patterns that aren't in the basic docs:
+
+- **Live streaming HTML/SVG.** The tool returns an empty wrapper; the model then emits markup inline between plain-text `@@@VIZ-START / @@@VIZ-END` markers in its normal response. A same-origin observer inside the iframe tails the parent chat's DOM, extracts the growing block, and reconciles new nodes into the iframe as tokens arrive — so dashboards and diagrams paint live, token-by-token, instead of popping in at the end of the stream.
+- **Bidirectional bridges.** `sendPrompt(text)` turns any clickable node into a follow-up user message. `saveState(k, v)` / `loadState(k, fallback)` proxies parent `localStorage` scoped per-message so sliders and toggles survive reloads. `copyText`, `toast(msg, kind)`, and `openLink` round it out.
+- **A shipped design system.** Theme-aware CSS variables, a 9-ramp color palette, SVG utility classes, auto light/dark adaptation, and 230 localized strings across 46 languages — all delivered from a single tool with no core changes.
+- **Incremental DOM reconciliation.** A safe-cut HTML parser flushes the longest valid prefix on every tick; the reconciler only appends new nodes so existing elements never re-mount and animations never re-trigger during the stream.
+
+This is a useful reference when you're trying to decide whether a generative-UI / streaming-UI feature needs a core change or can live purely in plugin-land. (Spoiler: almost always the latter.)
+:::
+
## Rendering Position
- **Tool embeds** inside a tool call result render **inline** at the tool call indicator (the "View Result from..." line)

docs/features/extensibility/plugin/functions/action.mdx

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ Action functions should always be defined as `async`. The backend is progressive

Actions are admin-managed functions that extend the chat interface with custom interactive capabilities. When a message is generated by a model that has actions configured, these actions appear as clickable buttons above the message.

-A scaffold of Action code can be found [in the community section](https://openwebui.com/f/hub/custom_action/). For more Action Function examples built by the community, visit [https://openwebui.com/search](https://openwebui.com/search).
+A minimal scaffold is shown in the [Function Structure](#function-structure) section below. For real-world Action examples built by the community, browse [openwebui.com](https://openwebui.com/).

An example of a graph visualization Action can be seen in the video below.

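Editor's note: for orientation, a minimal Action scaffold might look like the sketch below. The `async def action(...)` signature with `__event_emitter__` / `__event_call__` follows the documented Action shape; the empty `Valves` and the emitted status payload are illustrative.

```python
from pydantic import BaseModel


class Action:
    class Valves(BaseModel):
        # Add admin-configurable settings here as pydantic fields
        pass

    def __init__(self):
        self.valves = self.Valves()

    async def action(self, body: dict, __user__=None, __event_emitter__=None, __event_call__=None):
        # Runs when the user clicks the action button under a message.
        # `body` carries the message context; emit events to update the UI.
        if __event_emitter__:
            await __event_emitter__(
                {"type": "status", "data": {"description": "Action clicked!", "done": True}}
            )
```

Installed via **Admin Panel → Functions**, the button appears under assistant messages for any model the action is assigned to.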
docs/features/extensibility/plugin/functions/pipe.mdx

Lines changed: 50 additions & 41 deletions
@@ -137,7 +137,8 @@ Let's dive into a practical example where we'll create a Pipe that proxies reque

```python
from pydantic import BaseModel, Field
-import requests
+import httpx
+

class Pipe:
    class Valves(BaseModel):
@@ -157,40 +158,37 @@
    def __init__(self):
        self.valves = self.Valves()

-    def pipes(self):
-        if self.valves.OPENAI_API_KEY:
-            try:
-                headers = {
-                    "Authorization": f"Bearer {self.valves.OPENAI_API_KEY}",
-                    "Content-Type": "application/json",
-                }
+    async def pipes(self):
+        if not self.valves.OPENAI_API_KEY:
+            return [{"id": "error", "name": "API Key not provided."}]

-                r = requests.get(
+        headers = {
+            "Authorization": f"Bearer {self.valves.OPENAI_API_KEY}",
+            "Content-Type": "application/json",
+        }
+
+        try:
+            async with httpx.AsyncClient() as client:
+                r = await client.get(
                    f"{self.valves.OPENAI_API_BASE_URL}/models", headers=headers
                )
+                r.raise_for_status()
                models = r.json()
-                return [
-                    {
-                        "id": model["id"],
-                        "name": f'{self.valves.NAME_PREFIX}{model.get("name", model["id"])}',
-                    }
-                    for model in models["data"]
-                    if "gpt" in model["id"]
-                ]
-
-            except Exception as e:
-                return [
-                    {
-                        "id": "error",
-                        "name": "Error fetching models. Please check your API Key.",
-                    },
-                ]
-        else:
+
+            return [
+                {
+                    "id": model["id"],
+                    "name": f'{self.valves.NAME_PREFIX}{model.get("name", model["id"])}',
+                }
+                for model in models["data"]
+                if "gpt" in model["id"]
+            ]
+        except Exception:
            return [
                {
                    "id": "error",
-                    "name": "API Key not provided.",
-                },
+                    "name": "Error fetching models. Please check your API Key."
+                }
            ]

    async def pipe(self, body: dict, __user__: dict):
@@ -205,24 +203,35 @@

        # Update the model id in the body
        payload = {**body, "model": model_id}
-        try:
-            r = requests.post(
-                url=f"{self.valves.OPENAI_API_BASE_URL}/chat/completions",
-                json=payload,
-                headers=headers,
-                stream=True,
-            )
-
-            r.raise_for_status()
+        url = f"{self.valves.OPENAI_API_BASE_URL}/chat/completions"

+        try:
            if body.get("stream", False):
-                return r.iter_lines()
-            else:
+                async def event_stream():
+                    async with httpx.AsyncClient(timeout=None) as client:
+                        async with client.stream(
+                            "POST", url, json=payload, headers=headers
+                        ) as r:
+                            r.raise_for_status()
+                            async for line in r.aiter_lines():
+                                yield line
+
+                return event_stream()
+
+            async with httpx.AsyncClient(timeout=None) as client:
+                r = await client.post(url, json=payload, headers=headers)
+                r.raise_for_status()
                return r.json()
        except Exception as e:
            return f"Error: {e}"
```

+:::tip Use an async HTTP client
+This example uses [`httpx.AsyncClient`](https://www.python-httpx.org/async/) instead of `requests` because both `pipes()` and `pipe()` run inside Open WebUI's async event loop. Calling the synchronous `requests` library from an `async def` method blocks the loop for the full duration of the HTTP request (and, for streaming, the entire stream), which starves every other concurrent request on the instance. `httpx` is async-native, already a dependency, and a drop-in replacement for the common patterns.
+
+If you must use a synchronous third-party library in an async handler, wrap the blocking call with `await anyio.to_thread.run_sync(...)` so it runs on a worker thread instead of the event loop.
+:::
+
### Detailed Breakdown

#### Valves Configuration

@@ -261,8 +270,8 @@
1. **Prepare Headers**: Sets up the headers with the API key and content type.
2. **Extract Model ID**: Extracts the actual model ID from the selected model name.
3. **Prepare Payload**: Updates the body with the correct model ID.
-4. **Make API Request**: Sends a POST request to the OpenAI API's chat completions endpoint.
-5. **Handle Streaming**: If `stream` is `True`, returns an iterable of lines.
+4. **Make API Request**: Sends a POST request to the OpenAI API's chat completions endpoint via an `httpx.AsyncClient`.
+5. **Handle Streaming**: If `stream` is `True`, returns an async generator that yields SSE lines from the upstream response.
6. **Error Handling**: Catches exceptions and returns an error message.

### Extending the Proxy Pipe
