Status: Open
Labels: enhancement (New feature or request)
Description
Problem or use case
I don't run Ollama on my weak local laptop, but I do have a beefy GPU server that runs an Ollama server, a llama.cpp server, and a vLLM server.
It would be great if a self-hosted, OpenAI-compatible API could be used for the post-processing.
Proposed solution
Expand the options on the configuration page (both the CLI and the desktop app) to allow either Ollama or OpenAI-compatible API access, pointed at a server IP and port of my choosing.
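For context: llama.cpp's server and vLLM both expose an OpenAI-compatible `/v1/chat/completions` endpoint (and Ollama also offers one under `/v1`), so the post-processing call mostly needs a configurable base URL rather than a new backend. A minimal sketch of what such a request could look like, with hypothetical host, port, model name, and prompt values chosen for illustration:

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, model: str, prompt: str) -> Request:
    # Standard OpenAI-style chat completion payload; the same shape is
    # accepted by llama.cpp server, vLLM, and Ollama's /v1 endpoint.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: point at a self-hosted server IP and port of the user's choosing.
req = build_chat_request(
    "http://192.168.1.50:8000",  # hypothetical GPU server address
    "llama3",                    # hypothetical model name
    "Fix punctuation: hello world",
)
```

In other words, the only new configuration the app would need is the base URL (and perhaps an optional API key header); the request format itself stays the same across all three servers.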
Alternatives considered
None.
Additional context
None.