Services:
- Ollama Local LLM Host + API
- Open WebUI Chat Interface
- ComfyUI Stable Diffusion Image/Video/Audio Generation Host + API
- Grafana Metrics and Monitoring Dashboards
- Watchtower for Automatic Docker Image Updates (Dev-Stack only)
- VRAM Manager for effective resource sharing between Ollama and ComfyUI on a single-GPU setup (see the sketch below)
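
The idea behind the VRAM Manager can be illustrated with a minimal sketch: before one service needs the GPU, the other is asked to release its models. This is not the repo's actual implementation; it assumes default ports (11434 for Ollama, 8188 for ComfyUI), Ollama's documented `/api/ps` endpoint and `keep_alive: 0` unload behavior, and ComfyUI's `/free` endpoint.

```python
"""Minimal sketch of a VRAM arbiter between Ollama and ComfyUI on one GPU.

Assumptions (not taken from this repo's actual VRAM Manager):
- Ollama listens on its default port and honors keep_alive=0 as an
  immediate-unload request, per the Ollama FAQ.
- ComfyUI listens on its default port and exposes a /free endpoint.
"""
import requests

OLLAMA = "http://localhost:11434"   # assumed default Ollama address
COMFYUI = "http://localhost:8188"   # assumed default ComfyUI address


def unload_ollama_models() -> None:
    """Ask Ollama to evict every loaded model so ComfyUI can claim VRAM."""
    running = requests.get(f"{OLLAMA}/api/ps", timeout=5).json().get("models", [])
    for model in running:
        # keep_alive=0 tells Ollama to unload the model immediately
        requests.post(
            f"{OLLAMA}/api/generate",
            json={"model": model["name"], "keep_alive": 0},
            timeout=30,
        )


def unload_comfyui_models() -> None:
    """Ask ComfyUI to drop cached models and free VRAM before an Ollama call."""
    requests.post(
        f"{COMFYUI}/free",
        json={"unload_models": True, "free_memory": True},
        timeout=30,
    )


if __name__ == "__main__":
    # Example: make room on the GPU for an image-generation job.
    unload_ollama_models()
```

In practice such an arbiter would sit in front of both APIs and run the appropriate unload step automatically before dispatching each request, rather than being invoked by hand.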
Hardware:
- Palit GeForce RTX 5080
- AMD Ryzen 7 9800X3D
- 64GB DDR5-6000
- 2TB M.2 SSD PCIe 4.0
- MSI X870E Motherboard
- 1000W maximum power consumption
Development is in progress; production files will be published in this repo once ready.
All content has been written with the help of various AI assistants. Model selection, prompting, content supervision, review, testing, and refactoring are done by hand.