**docs/faq.mdx** (10 additions, 0 deletions)
**A:** You can access the **File Manager** by going to **Settings > Data Controls > Manage Files > Manage**. This dashboard allows you to search through all your uploaded documents, view their details, and delete them. Deleting a file here also automatically cleans up any associated Knowledge Base entries and vector embeddings.
### Q: I get "The prompt is too long" / "context length exceeded" after a while in a chat. How do I fix it?
**A:** This error comes from the **model provider**, not from Open WebUI — the provider counts the tokens of everything you sent (system prompt + the *entire* chat history + attached files + tool calls + your new message) and rejects the request once it exceeds the model's context window. The "prompt" the model sees is the whole conversation, not just your latest message.
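As a rough illustration of why long chats eventually hit the limit (a sketch using the common but crude ~4 characters per token heuristic; real tokenizers vary by model, and `estimate_prompt_tokens` is a hypothetical helper, not an Open WebUI API):

```python
def estimate_prompt_tokens(messages: list[dict]) -> int:
    # The provider tokenizes the ENTIRE request: system prompt, every past
    # turn, attachments, tool calls, and the new message -- not just the
    # latest user input. ~4 chars/token is a crude but common estimate.
    text = "".join(m.get("content") or "" for m in messages)
    return len(text) // 4

history = [
    {"role": "system", "content": "x" * 400},
    {"role": "user", "content": "y" * 4000},
    {"role": "assistant", "content": "z" * 4000},
]
print(estimate_prompt_tokens(history))  # 2100 -- the whole history counts
```

Every additional turn grows this total, so a chat that worked fine yesterday can start failing today without anything else changing.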
Open WebUI intentionally does **not** ship a built-in context trimmer. Every model has a different tokenizer and a different context window, and every deployment wants a different truncation policy (by tokens, by turns, by message count, file-attachments-first, summarize-and-replace, per-model budgets, and so on). There is no single policy that is correct for every user, so we expose the hook instead of choosing one for you.
Context management is done with [filter Functions](/features/extensibility/plugin/functions/filter): `inlet()` receives the full `body["messages"]` on every request and can modify it freely (drop old turns, enforce a turn limit, summarize, trim attachments, etc.). Many community-maintained context filters are already available one-click on [openwebui.com](https://openwebui.com/) — browse, install, and tune the valves. If none fits, copy the closest one into **Admin Panel → Functions** and edit it.
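As a minimal sketch of the `inlet()` hook (a hypothetical turn-limit policy; a production filter would normally expose the limit as a configurable valve rather than a class constant):

```python
class Filter:
    # Hypothetical fixed limit; a real filter would expose this via Valves.
    MAX_TURNS = 20

    def inlet(self, body: dict) -> dict:
        messages = body.get("messages", [])
        # Always preserve the system prompt; trim only conversation turns,
        # keeping the most recent ones so the model sees current context.
        system = [m for m in messages if m.get("role") == "system"]
        turns = [m for m in messages if m.get("role") != "system"]
        body["messages"] = system + turns[-self.MAX_TURNS:]
        return body
```

The same shape works for any other policy: summarize old turns instead of dropping them, strip file attachments first, or budget by estimated tokens per model.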
For the full write-up with examples, see [Context Window / Prompt Too Long](/troubleshooting/context-window).
### Q: Can I use Open WebUI offline, in air-gapped networks, or in extreme environments like outer space?
**A:** **Yes.** Open WebUI is a self-hosted, **internet-independent AI platform** designed to work in **air-gapped networks**, **remote deployments**, and any environment where cloud-based systems are impractical or impossible. Whether you need to **run an LLM without internet**, deploy a **private AI with no cloud dependency**, or operate a **local AI chatbot offline**, Open WebUI supports all of these out of the box. It runs entirely on local hardware and does not make external calls by default.
**docs/features/channels/index.md** (4 additions, 0 deletions)
This removes the need to manually bridge information between private chats and shared channels. The AI does it for you.
:::tip Community action: Forward to Channel
If you want a one-click path from a chat message into a channel, the community **[Forward to Channel](https://openwebui.com/posts/b60c1f03-e29c-47c0-862c-3741a382616e)** action adds a button to each assistant message that posts the reply (or a selection) into a channel of your choice. Useful for promoting good answers from private chats into team-visible spaces without copy-paste.
:::
**docs/features/chat-conversations/chat-features/code-execution/index.md** (1 addition, 1 deletion)
## Key Features
- **Code Interpreter Capability**: Enable models to autonomously write and execute Python code as part of their responses. Runs via the `execute_code` tool in Native (Agentic) Mode — the only supported tool-calling mode. An older XML-based integration exists for legacy Default Mode but is unsupported; new deployments should use Native Mode.
- **Python Code Execution**: Run Python scripts directly in your browser using Pyodide, or on a server using Jupyter. Supports popular libraries like pandas and matplotlib with no setup required.
**docs/features/chat-conversations/chat-features/follow-up-prompts.md** (1 addition, 1 deletion)
## Regenerating Follow-Ups
If you want to regenerate follow-up suggestions for a specific response, you can use the [Regenerate Follow-ups](https://openwebui.com/posts/9b5ac6d6-dfd6-4cad-bc1d-5518b138f22d) action button from the community.
**docs/features/chat-conversations/web-search/agentic-search.mdx** (2 additions, 2 deletions)
5. **Use a Quality Model**: Ensure you're using a frontier model with strong reasoning capabilities for best results.
:::tip Model Capability, Default Features, and Chat Toggle
In **Native Mode** (the supported mode), the `search_web` and `fetch_url` tools require both the **Web Search** capability to be enabled *and* **Web Search** to be checked under **Default Features** in the model settings (or toggled on in the chat). If either is missing, the tools will not be injected — even though other built-in tools may still appear.
Default Mode's RAG-style injection behavior is documented here only for legacy deployments. Default Mode is no longer supported; all models should be configured for Native Mode.
**Important**: If you disable the `web_search` capability on a model but use Native Mode, the tools won't be available even if you manually toggle Web Search on in the chat.
:::
**docs/features/extensibility/plugin/development/events.mdx** (1 addition, 1 deletion)
- **Chat Overview**: Favorited messages (pins) are highlighted in the conversation overview, making it easier for users to locate key information later.
#### Example: "Pin Message" Action
For a practical implementation of this event in a real-world plugin, see the **[Pin Message Action on Open WebUI Community](https://openwebui.com/posts/143594d1-0838-4f9a-9af2-b94d2952f7ba)**. This action demonstrates how to toggle the favorite status in the database and immediately sync the UI using the `chat:message:favorite` event.
**docs/features/extensibility/plugin/development/rich-ui.mdx** (13 additions, 0 deletions)
As an alternative for ephemeral interactions that need full page access, consider using the [`execute` event](/features/extensibility/plugin/development/events#execute-works-with-both-__event_call__-and-__event_emitter__) instead, which runs unsandboxed in the main page context.
:::
:::tip Community Showcase: Streaming Rich UI with same-origin
If you want to see how far Rich UI can go when same-origin is enabled, take a look at the community **[Inline Visualizer v2](https://github.com/Classic298/open-webui-plugins)** tool (also on the community site via the [Show-and-tell discussion](https://github.com/open-webui/open-webui/discussions/23901)).
It demonstrates patterns that aren't in the basic docs:
- **Live streaming HTML/SVG.** The tool returns an empty wrapper; the model then emits markup inline between plain-text `@@@VIZ-START / @@@VIZ-END` markers in its normal response. A same-origin observer inside the iframe tails the parent chat's DOM, extracts the growing block, and reconciles new nodes into the iframe as tokens arrive — so dashboards and diagrams paint live, token-by-token, instead of popping in at the end of the stream.
- **Bidirectional bridges.** `sendPrompt(text)` turns any clickable node into a follow-up user message. `saveState(k, v)` / `loadState(k, fallback)` proxies parent `localStorage` scoped per-message so sliders and toggles survive reloads. `copyText`, `toast(msg, kind)`, and `openLink` round it out.
- **A shipped design system.** Theme-aware CSS variables, a 9-ramp color palette, SVG utility classes, auto light/dark adaptation, and 230 localized strings across 46 languages — all delivered from a single tool with no core changes.
- **Incremental DOM reconciliation.** A safe-cut HTML parser flushes the longest valid prefix on every tick; the reconciler only appends new nodes so existing elements never re-mount and animations never re-trigger during the stream.
This is a useful reference when you're trying to decide whether a generative-UI / streaming-UI feature needs a core change or can live purely in plugin-land. (Spoiler: almost always the latter.)
:::
## Rendering Position
- **Tool embeds** inside a tool call result render **inline** at the tool call indicator (the "View Result from..." line)
**docs/features/extensibility/plugin/functions/action.mdx** (1 addition, 1 deletion)
Actions are admin-managed functions that extend the chat interface with custom interactive capabilities. When a message is generated by a model that has actions configured, these actions appear as clickable buttons above the message.
A minimal scaffold is shown in the [Function Structure](#function-structure) section below. For real-world Action examples built by the community, browse [openwebui.com](https://openwebui.com/).
An example of a graph visualization Action can be seen in the video below.
```python
            # Streaming branch (body of the event_stream() generator):
            # yield SSE lines from the upstream response as they arrive.
            async with httpx.AsyncClient(timeout=None) as client:
                async with client.stream(
                    "POST", url, json=payload, headers=headers
                ) as r:
                    r.raise_for_status()
                    async for line in r.aiter_lines():
                        yield line

            return event_stream()

            # Non-streaming branch: await the full response and return JSON.
            async with httpx.AsyncClient(timeout=None) as client:
                r = await client.post(url, json=payload, headers=headers)
                r.raise_for_status()
                return r.json()
        except Exception as e:
            return f"Error: {e}"
```
:::tip Use an async HTTP client
This example uses [`httpx.AsyncClient`](https://www.python-httpx.org/async/) instead of `requests` because both `pipes()` and `pipe()` run inside Open WebUI's async event loop. Calling the synchronous `requests` library from an `async def` method blocks the loop for the full duration of the HTTP request (and, for streaming, the entire stream), which starves every other concurrent request on the instance. `httpx` is async-native, already a dependency, and a drop-in replacement for the common patterns.
If you must use a synchronous third-party library in an async handler, wrap the blocking call with `await anyio.to_thread.run_sync(...)` so it runs on a worker thread instead of the event loop.
:::
### Detailed Breakdown
#### Valves Configuration
1. **Prepare Headers**: Sets up the headers with the API key and content type.
2. **Extract Model ID**: Extracts the actual model ID from the selected model name.
3. **Prepare Payload**: Updates the body with the correct model ID.
4. **Make API Request**: Sends a POST request to the OpenAI API's chat completions endpoint via an `httpx.AsyncClient`.
5. **Handle Streaming**: If `stream` is `True`, returns an async generator that yields SSE lines from the upstream response.
6. **Error Handling**: Catches exceptions and returns an error message.