Project OpenWebUI_De
The goal is a simple installation and startup of OpenWebUI in German: one script performs the installation, followed by registration and the settings for German speech output.
The documentation is still in progress.
Requirements:
- Linux (tested with Ubuntu 24 LTS Server)
- At least 16 GB of RAM for the standard installation of llama3.1 (runs on the CPU; adaptation for AMD or NVIDIA GPUs is also possible, see Ollama)
- At least 40 GB of hard-drive/SSD storage space
- root rights (the root password is only required once)
Download install.sh, make it executable, and run it:

wget https://raw.githubusercontent.com/baefthde/OpenWebUI_De/refs/heads/main/install.sh && chmod +x install.sh && ./install.sh
The install.sh:
- installs git
- clones the current OpenWebUI_De repository via git
- sets execute rights and runs OpenWebUI_De.sh
git clone https://github.com/baefthde/OpenWebUI_De
cd OpenWebUI_De
chmod +x OpenWebUI_De.sh
./OpenWebUI_De.sh
OpenWebUI_De.sh installs:
- Docker via Snap
- Ollama as the API interface (internal only, on 127.0.0.1:11434)
- llama3.1, a Large Language Model (LLM) from Meta AI
- OpenWebUI in Docker (externally accessible on port 8080)
- the OpenWebUI database at ~/open-webui/webui.db, accessible outside of Docker for backup and restore
- Watchtower in Docker, so that OpenWebUI updates itself automatically every hour
- the openedai-speech API interface (externally accessible on port 8000)
- German voice output using the Thorsten-Voice voice
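Since the webui.db database listed above lives outside the container, it can be backed up with plain shell tools. A minimal sketch, assuming a helper function and backup directory of your choosing (neither is part of the repo's scripts):

```shell
#!/bin/sh
# Sketch: back up the OpenWebUI database with a timestamp suffix.
# backup_webui DB_FILE BACKUP_DIR  (names are illustrative, not from the repo)
backup_webui() {
  db="$1"
  dir="$2"
  mkdir -p "$dir"
  cp "$db" "$dir/$(basename "$db").$(date +%Y%m%d-%H%M%S).bak"
}

# On the installed system:
# backup_webui "$HOME/open-webui/webui.db" "$HOME/webui-backups"
```

For a consistent copy of a live SQLite file it is safer to stop the OpenWebUI container first, or use sqlite3's `.backup` command instead of `cp`.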
Open OpenWebUI at http://IP:8080.
The first account registered receives admin rights for configuration (name, email, password); no email is sent!
- Settings -> General:
  - Set the language to "German (Deutsch)".
- Settings -> Audio -> TTS Settings:
  - Turn on autoplay of replies.
- Admin Panel -> Settings -> Connections:
  - The OpenAI API should be disabled.
  - The Ollama API is already set to http://localhost:11434 by the Docker start.
- Admin Panel -> Settings -> Audio -> TTS Settings:
  - Text-to-Speech Engine: OpenAI, endpoint http://IP:8000/v1, API key sk-111111111
  - TTS Voice: thorsten, TTS Model: tts-1
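The openedai-speech endpoint can also be tested from the command line before wiring it into OpenWebUI. A sketch using the OpenAI-compatible /v1/audio/speech route (replace IP with your server's address; the dummy key matches the setting above; the service must already be running):

```shell
# Sketch: request German speech directly from openedai-speech.
curl -s http://IP:8000/v1/audio/speech \
  -H "Authorization: Bearer sk-111111111" \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "voice": "thorsten", "input": "Hallo, das ist ein Test."}' \
  -o test.mp3
```

If test.mp3 plays German speech, the engine settings in OpenWebUI should work as well.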
The AI chat with voice output can then be tested.
Additional LLMs can be installed either directly via ollama or via the OpenWebUI web interface.
The available LLM models are listed at https://ollama.com/library, e.g.:
ollama pull mistral
- The voice files must be named language.onnx and language.onnx.json and placed in the folder ~/openedai-speech/voices.
- The configuration file ~/openedai-speech/config/voice_to_speaker.yaml must be adjusted accordingly; more detailed information can be found at https://github.com/matatonic/openedai-speech
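Because every .onnx voice file needs its matching .onnx.json, a quick check of the voices folder can catch a missing config before restarting the service. A sketch (the function name is an assumption, not part of the repo):

```shell
#!/bin/sh
# Sketch: verify every .onnx voice has a matching .onnx.json config file.
check_voices() {
  dir="$1"
  missing=0
  for onnx in "$dir"/*.onnx; do
    [ -e "$onnx" ] || continue          # no .onnx files at all
    if [ ! -f "$onnx.json" ]; then
      echo "missing config: $(basename "$onnx").json"
      missing=1
    fi
  done
  return "$missing"
}

# On the installed system:
# check_voices "$HOME/openedai-speech/voices"
```

The function returns a nonzero exit status if any config is missing, so it can also be used in a script before restarting openedai-speech.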
https://github.com/ollama/ollama
https://github.com/open-webui/open-webui
https://github.com/matatonic/openedai-speech
https://www.thorsten-voice.de/
https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/thorsten/high