For a long while I have been running llama-server on this system, serving gpt-oss 120B with SSL and API keys. The built-in web frontend is good for reaching llama-server from other machines on the home LAN, but locally on this system I would like a frontend with more features for accessing llama-server. LlamaBarn might be the frontend I am looking for, but it seems it cannot use another llama-server instance running independently of itself.
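For context, llama-server already exposes an OpenAI-compatible HTTP API, so any frontend only needs the base URL and the API key to talk to it. A minimal sketch of what that access looks like (the host, port, model alias, and key below are placeholders for my setup, not real values):

```python
# Minimal sketch: query an existing llama-server over HTTPS with an API key.
# Host, port, model alias, and key are placeholders, not real values.
import json
import urllib.request

BASE_URL = "https://my-llm-host.lan:8443"  # wherever llama-server listens
API_KEY = "my-secret-key"                  # the value passed via --api-key

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",     # llama-server's OpenAI-compatible endpoint
    data=json.dumps({
        "model": "gpt-oss-120b",           # placeholder alias for the loaded model
        "messages": [{"role": "user", "content": "Hello from the LAN!"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# Note: a self-signed certificate would additionally need a custom ssl context.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```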
So my feature request would be to make LlamaBarn configurable so that it does not depend on running its own llama-server instance.
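Purely to illustrate the idea (none of these option names exist in LlamaBarn today; this is a hypothetical sketch, not a proposal for its actual internals):

```python
# Hypothetical sketch only: LlamaBarn has no such options today.
# The idea: if an external endpoint is configured, act as a frontend
# for it instead of spawning and managing a local llama-server.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BackendConfig:
    external_url: Optional[str] = None      # hypothetical option, e.g. "https://my-llm-host.lan:8443"
    external_api_key: Optional[str] = None  # hypothetical option, key for the existing server

def resolve_backend(cfg: BackendConfig) -> str:
    if cfg.external_url:
        # Requested feature: attach to an already-running llama-server.
        return cfg.external_url
    # Current behavior: LlamaBarn starts its own managed llama-server.
    return start_managed_llama_server()

def start_managed_llama_server() -> str:
    # Stand-in for whatever LlamaBarn does internally today.
    return "http://127.0.0.1:8080"
```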
Thanks for all the very useful open-source software from the ggml cosmos!

Replies: 1 comment
Thanks for the feature request! I want to make sure I understand what you're looking for. LlamaBarn is primarily a model manager that handles downloading and running local models. It doesn't have its own chat interface; the web UI you see when a model is running is actually the one built into llama-server (from llama.cpp). Could you clarify what you'd like LlamaBarn to do for your existing server? Knowing more about your workflow would help us understand whether this fits within LlamaBarn's scope as a model manager.