There are two easy ways to install Tools in Open WebUI:
### Option 1: Manual Download & Import
1. Visit the [Community Tool Library](https://openwebui.com/tools)
2. Click on a Tool you like.
3. Click the blue Get button > “Download as JSON export.”
4. Go to Workspace ➡️ Tools in Open WebUI.
5. Click “Import Tools” and upload the downloaded file.
### Option 2: One-Click Import from the Web
1. Go to [Community Tool Library](https://openwebui.com/tools)
2. Choose a Tool, then click the Get button.
3. Enter your Open WebUI instance’s IP address or URL.
4. Click “Import to WebUI” — done!
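Whichever route you take, what gets imported is just a small Python module. As a rough sketch (the method name and body here are illustrative assumptions, not taken from this guide), a Tool file defines a `Tools` class whose type-hinted, docstring-documented methods become the callable Tools:

```python
# Sketch of a minimal Open WebUI Tool file. The method name, docstring,
# and behavior are illustrative assumptions for this example.
import datetime


class Tools:
    def get_current_time(self) -> str:
        """Return the current UTC time as an ISO 8601 string."""
        return datetime.datetime.now(datetime.timezone.utc).isoformat()
```

The type hints and docstring matter: they are what describes the Tool to the model.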
You can also let your LLM auto-select the right Tools using the AutoTool Filter.
✅ And that’s it — your LLM is now Tool-powered! You're ready to supercharge your chats with web search, image generation, voice output, and more.
---
# 🧠 Choosing How Tools Are Used: Default vs Native
Once Tools are enabled for your model, Open WebUI gives you two different ways to let your LLM use them in conversations.
You can decide how the model should call Tools by choosing between:
- 🟡 Default Mode (Prompt-based)
- 🟢 Native Mode (Built-in function calling)
Let’s break it down:
### 🟡 Default Mode (Prompt-based Tool Triggering)
This is the default setting in Open WebUI.
Here, your LLM doesn’t need to natively support function calling. Instead, we guide the model using smart prompts (ReACT-style — Reasoning + Acting) to select and use a Tool.
✅ Works with almost any model
✅ Great way to unlock Tools with basic or local models
❗ Not as reliable or flexible as Native Mode when chaining tools
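To make the idea concrete, here is a rough sketch of the kind of ReACT-style prompt scaffolding involved. The exact wording Open WebUI uses internally is not shown in this guide; this template is an illustrative assumption about the general technique:

```python
# Illustrative ReACT-style scaffold: the model is *prompted* to emit a
# structured "Action" line, which the host app parses and executes.
# This is a sketch of the technique, not Open WebUI's actual prompt.
REACT_TEMPLATE = """You have access to these tools:
{tool_list}

Reason step by step. When you need a tool, reply with:
Thought: <why you need it>
Action: <tool name>
Action Input: <arguments as JSON>
When you are done, reply with:
Final Answer: <answer>"""


def build_prompt(tools: dict[str, str]) -> str:
    """Render the scaffold with one 'name: description' line per tool."""
    tool_list = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return REACT_TEMPLATE.format(tool_list=tool_list)
```

Because the contract lives entirely in the prompt, any model that follows instructions reasonably well can participate, which is why this mode works with basic and local models.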
### 🟢 Native Mode (Function Calling Built-In)
If your model does support “native” function calling (like GPT-4o or GPT-3.5-turbo-1106), you can use this powerful mode to let the LLM decide — in real time — when and how to call multiple Tools during a single chat message.
✅ Fast, accurate, and can chain multiple Tools in one response
✅ The most natural and advanced experience
❗ Requires a model that actually supports native function calling
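In Native Mode, a Tool's signature is passed to the model as a structured schema rather than as prose in the prompt. With an OpenAI-compatible backend, a single Tool surfaces roughly like this (the `get_weather` function and its parameters are illustrative, not a Tool from this guide):

```python
# OpenAI-style function-calling schema for one Tool. The model returns a
# structured tool call against this schema instead of free-form text.
# "get_weather" and its parameter are illustrative assumptions.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

Because the model emits structured calls against schemas like this, it can reliably chain several Tool calls inside a single response.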
### ✳️ How to Switch Between Modes
Want to enable native function calling in your chats? Here's how:
1. Open the chat window with your model.
2. Click ⚙️ Chat Controls > Advanced Params.
3. Look for the Function Calling setting and switch it from Default → Native.
That’s it! Your chat is now using true native Tool support (as long as the model supports it).
➡️ We recommend using GPT-4o or another OpenAI model for the best native function-calling experience.
🔎 Some local models may claim support, but often struggle with accurate or complex Tool usage.