Hi, I have installed Open WebUI and it runs successfully with local Llama models.
However, with larger models, where responses take longer, a timeout error occurs; the result still arrives if you wait long enough.
I have read that when Open WebUI is deployed normally via Docker Compose, an environment variable can be set to raise the timeout:
```
AIOHTTP_CLIENT_TIMEOUT=7200
```
In this example that would be 2 h (7200 s).
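For reference, this is roughly how I understand the variable would be set in a plain Docker Compose deployment (a minimal sketch; the service name, image tag, and port mapping are assumptions, not taken from my install):

```yaml
# Minimal docker-compose.yml sketch with the timeout raised.
# Service name, image, and ports are assumptions for illustration.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Timeout for requests to the model backend, in seconds (here 2 h)
      - AIOHTTP_CLIENT_TIMEOUT=7200
    ports:
      - "3000:8080"
```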
I then looked at how it is installed in the LXC container; it also appears to run via Docker Compose. So I opened
```
nano /opt/open-webui/.env
```
and inserted the variable above. Then I restarted the container - but the WebUI no longer starts.
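In case it helps with diagnosing, these are the commands I would expect to reveal the startup error (the container name `open-webui` is an assumption; it may differ in the LXC install):

```sh
# Check whether the container is running or exiting immediately
docker ps -a

# Show the startup logs of the Open WebUI container
# (container name is an assumption; adjust to what `docker ps -a` shows)
docker logs open-webui --tail 50

# If the install uses Docker Compose, this works from the project directory
cd /opt/open-webui && docker compose logs --tail 50
```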
Does anyone have an idea how I can extend the timeout in the container?
Thanks!