ollama provider #7
Conversation
src/exchange/providers/ollama.py
Outdated
# NOTE: this is experimental, best used with 70B model or larger if you can
Since it's experimental, we should flag it as such and raise a warning when it's instantiated, with a message like "This is an experimental provider and support may be dropped in the future" or something along those lines.
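One way to do that (just a sketch, assuming the provider keeps the `__init__(client: httpx.Client)` signature shown in the diff; using `warnings.warn` and this exact message is my suggestion, not what the PR currently does):

```python
import warnings

import httpx


class OllamaProvider:
    """Provides chat completions for models running locally via ollama (experimental)."""

    def __init__(self, client: httpx.Client) -> None:
        # Surface the experimental status as a real warning rather than a print,
        # so callers can filter, log, or escalate it as they see fit.
        warnings.warn(
            "OllamaProvider is an experimental provider and support may be dropped in the future",
            UserWarning,
            stacklevel=2,
        )
        self.client = client
```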
yep please take a look again
src/exchange/providers/ollama.py
Outdated
""" | ||
|
||
def __init__(self, client: httpx.Client) -> None: | ||
print('PLEASE NOTE: this is an experimental provider, use with care') |
maybe just mention here that OllamaProvider is the provider referenced (since this happens under the hood in goose, and might not be obvious)
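If the print stays, a minimal tweak along those lines could be (just a sketch of the wording, not a required phrasing):

```python
# Hypothetical wording: name the class so goose users can tell where the notice comes from
print('PLEASE NOTE: OllamaProvider is an experimental provider, use with care')
```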
* main:
  fix typos found by PyCharm (#21)
  added retry when sending httpx request to LLM provider apis (#20)
  chore: version bump to `0.8.2` (#19)
  fix: don't always use ollama provider (#18)
  fix: export `metadata.plugins` export should have a valid value (#17)
  Create an entry-point for `ai-exchange` (#16)
  chore: Run tests for python >=3.10 (#14)
  Update pypi_release.yaml (#13)
  ollama provider (#7)
  chore: gitignore generated lockfile (#8)
In theory ollama can work, for example:
However, it works better with the larger models, if you have a beefy enough machine to run them locally.
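For reference, a rough local-usage sketch (the module path matches the file touched in this PR, but the `base_url` wiring and ollama's default port of 11434 are assumptions about a typical local install, not something the diff shows):

```python
import httpx

from exchange.providers.ollama import OllamaProvider

# ollama usually serves its HTTP API on localhost:11434 by default;
# adjust the base_url if your install differs.
client = httpx.Client(base_url="http://localhost:11434")
provider = OllamaProvider(client)  # emits the experimental-provider notice
```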