
ollama provider #7

Merged
michaelneale merged 4 commits into main from ollama-provider on Aug 28, 2024
Conversation

michaelneale (Collaborator)
In theory Ollama can work, for example:

ollama:
  provider: ollama
  processor: llama3.1
  accelerator: llama3.1
  moderator: passive
  toolkits:
  - name: developer
    requires: {}

However, it works better with the larger models, if you have a beefy enough machine to run them locally.
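For context (this is not the PR's code): Ollama exposes an OpenAI-compatible chat endpoint on its default port, which is what makes a drop-in provider like this feasible. A minimal standalone sketch, assuming a local `ollama serve` is running and `llama3.1` has been pulled:

```python
import httpx

# Ollama serves an OpenAI-compatible API under /v1 on its default port 11434.
client = httpx.Client(base_url="http://localhost:11434")

response = client.post(
    "/v1/chat/completions",
    json={
        "model": "llama3.1",  # assumes `ollama pull llama3.1` has been run
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=60.0,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```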



#
# NOTE: this is experimental, best used with 70B model or larger if you can
Collaborator
Since this is experimental, we should flag it as such and raise a warning when it's instantiated, with something like "This is an experimental provider and support may be dropped in the future".
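A minimal sketch of that suggestion (not the PR's code; the class name `OllamaProvider` and the use of the `warnings` module are assumptions here — the merged change uses a plain print instead):

```python
import warnings

import httpx


class OllamaProvider:
    """Provider for a locally running Ollama server.

    NOTE: this is experimental, best used with a 70B model or larger if you can.
    """

    def __init__(self, client: httpx.Client) -> None:
        # Assumed sketch: surface the experimental status as a real warning
        # rather than a bare print, so callers can filter or escalate it.
        warnings.warn(
            "OllamaProvider is an experimental provider and support may be "
            "dropped in the future",
            UserWarning,
            stacklevel=2,
        )
        self.client = client
```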

Collaborator (Author)

Yep, please take a look again.

"""

def __init__(self, client: httpx.Client) -> None:
print('PLEASE NOTE: this is an experimental provider, use with care')
Collaborator

Maybe just mention here that OllamaProvider is the provider referenced (since this happens under the hood in goose, and it might not be obvious).
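A sketch of what that message might look like (assumed wording, not the merged text):

```python
print('PLEASE NOTE: the OllamaProvider is experimental, use with care')
```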

michaelneale merged commit a629b3d into main on Aug 28, 2024
1 check passed
michaelneale deleted the ollama-provider branch on August 28, 2024 at 01:59
lukealvoeiro pushed a commit that referenced this pull request Sep 2, 2024
lukealvoeiro added a commit that referenced this pull request Sep 2, 2024
* main:
  fix typos found by PyCharm (#21)
  added retry when sending httpx request to LLM provider apis (#20)
  chore: version bump to `0.8.2` (#19)
  fix: don't always use ollama provider (#18)
  fix: export `metadata.plugins` export should have a valid value (#17)
  Create an entry-point for `ai-exchange` (#16)
  chore: Run tests for python >=3.10 (#14)
  Update pypi_release.yaml (#13)
  ollama provider (#7)
  chore: gitignore generated lockfile (#8)
codefromthecrypt pushed a commit to codefromthecrypt/exchange that referenced this pull request Oct 13, 2024