
Self-hosted Ollama server access #17

@Construkted-Reality

Description

Problem or use case
I don't run Ollama on my weak local laptop, but I do have a beefy GPU server that runs an Ollama server, a llama.cpp server, and a vLLM server.

It would be great if a self-hosted, OpenAI-compatible API could be used for the post-processing.

Proposed solution
Expand the options on the configuration page (both CLI and desktop app) to allow either Ollama access or OpenAI-compatible API access against a server IP and port of my choosing, along the lines of the sketch below.
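
For context, Ollama, llama.cpp server, and vLLM all expose OpenAI-compatible chat endpoints, so the post-processing call mainly needs a configurable base URL. Here is a minimal sketch using the standard `openai` Python client; the server address, port, API key, and model name are placeholders, not anything defined by this project:

```python
# Minimal sketch: point an OpenAI-compatible client at a self-hosted server.
# All addresses, ports, and model names below are hypothetical examples.
from openai import OpenAI

client = OpenAI(
    # Ollama typically serves an OpenAI-compatible API at /v1 on port 11434;
    # llama.cpp server and vLLM expose the same style of endpoint on their own ports.
    base_url="http://192.168.1.50:11434/v1",  # hypothetical GPU server address
    api_key="not-needed",  # most self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="llama3.1",  # whatever model is loaded on the server
    messages=[{"role": "user", "content": "Summarize this transcript: ..."}],
)
print(response.choices[0].message.content)
```

A single setting that maps onto the base URL (plus an optional model name and API key) would cover Ollama, llama.cpp, and vLLM through one code path.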

Alternatives considered
None.

Additional context
None.

Labels: enhancement (New feature or request)
