
[Bug]: agent provider overriding not working #1097

Open
dmatora opened this issue Sep 25, 2024 · 7 comments
Labels
bug Something isn't working

Comments

@dmatora

dmatora commented Sep 25, 2024

Version

Command-line (Python) version

Operating System

MacOS

What happened?

  1. Pointed the default agent to groq.
  2. Tried setting Importer and CodeMonkey to ollama (tried different spellings).
  3. Tried to import a project.

ollama didn't receive a single request.
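For reference, an override along these lines fails silently; the exact keys and models dmatora used aren't shown in the report, so the spellings below are assumptions, but per the resolution later in this thread, lowercase or hyphenated keys like these never match an agent class name:

```json
  "agent": {
    "default": {
      "provider": "groq",
      "model": "llama-3.1-70b-versatile",
      "temperature": 0.5
    },
    "importer": {
      "provider": "ollama",
      "model": "codellama",
      "temperature": 0.5
    },
    "code-monkey": {
      "provider": "ollama",
      "model": "llama3.3",
      "temperature": 0.5
    }
  },
```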

@dmatora dmatora added the bug Something isn't working label Sep 25, 2024
@hqnicolas

Same here. I have 30 servers, and this project only ever uses the one configured under:

"agent": {
"default": {
"provider":

@mercury64

Is this still a problem?
I run Ollama locally on a Mac, works as expected.

@hqnicolas

> I run Ollama locally on a Mac, works as expected.

Did you use different images, from different servers?

@mercury64

> did you use different images, from different servers?

Images?

Locally served Ollama, internal network, many different models.

@hqnicolas

hqnicolas commented Jan 15, 2025

> many different models.

Set a 1B model as the default under `"agent": { "default": { "provider": ... } }`, and also set a 14B+ model as CodeMonkey. You will understand: CodeMonkey's requests still go to the 1B model.
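The test hqnicolas describes could look like this (the model names here are placeholders, not taken from the thread):

```json
  "agent": {
    "default": {
      "provider": "ollama",
      "model": "llama3.2:1b",
      "temperature": 0.5
    },
    "CodeMonkey": {
      "provider": "ollama",
      "model": "qwen2.5-coder:14b",
      "temperature": 0.5
    }
  },
```

If CodeMonkey's requests still hit the 1B model, the override is being ignored.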

@mercury64

mercury64 commented Jan 15, 2025

Yes, it works as expected.
Log file:

2025-01-15 12:58:43,123 DEBUG [core.db.setup] Running database migrations for sqlite:///pythagora.db (config: ~/Programming/gpt-pilot/core/db/alembic.ini)
2025-01-15 12:58:43,130 DEBUG [core.ui.console] Starting console UI
2025-01-15 12:58:43,419 DEBUG [core.llm.base] Calling ollama model hf.co/LoupGarou/deepseek-coder-6.7b-instruct-pythagora-v3-gguf:Q6_K (temp=0.5), prompt length: 0.1 KB
2025-01-15 12:58:43,419 DEBUG [core.llm.ollama_client] ** MESSAGES SENT **
2025-01-15 12:58:43,419 DEBUG [core.llm.ollama_client] [{'role': 'user', 'content': "This is a connection test. If you can see this, please respond only with 'START' and nothing else."}]
2025-01-15 12:58:43,429 DEBUG [core.llm.base] Calling ollama model llama3.3 (temp=0.5), prompt length: 0.1 KB
2025-01-15 12:58:43,429 DEBUG [core.llm.ollama_client] ** MESSAGES SENT **
2025-01-15 12:58:43,429 DEBUG [core.llm.ollama_client] [{'role': 'user', 'content': "This is a connection test. If you can see this, please respond only with 'START' and nothing else."}]
2025-01-15 12:58:43,436 DEBUG [core.llm.base] Calling ollama model codellama (temp=0.5), prompt length: 0.1 KB
2025-01-15 12:58:43,436 DEBUG [core.llm.ollama_client] ** MESSAGES SENT **
2025-01-15 12:58:43,436 DEBUG [core.llm.ollama_client] [{'role': 'user', 'content': "This is a connection test. If you can see this, please respond only with 'START' and nothing else."}]
2025-01-15 12:58:43,735 DEBUG [core.llm.ollama_client] *** OLLAMA RESPONSE ***
2025-01-15 12:58:43,735 DEBUG [core.llm.ollama_client] START
2025-01-15 12:58:43,735 DEBUG [core.llm.base] Total ollama response time 0.30s, 43 prompt tokens, 3 completion tokens used
2025-01-15 12:58:43,735 INFO [core.cli.main] API check for ollama codellama succeeded.
2025-01-15 12:58:54,434 DEBUG [core.llm.ollama_client] *** OLLAMA RESPONSE ***
2025-01-15 12:58:54,434 DEBUG [core.llm.ollama_client] START
2025-01-15 12:58:54,434 DEBUG [core.llm.base] Total ollama response time 11.02s, 36 prompt tokens, 2 completion tokens used
2025-01-15 12:58:54,434 INFO [core.cli.main] API check for ollama hf.co/LoupGarou/deepseek-coder-6.7b-instruct-pythagora-v3-gguf:Q6_K succeeded.
2025-01-15 12:59:09,853 DEBUG [core.llm.ollama_client] *** OLLAMA RESPONSE ***
2025-01-15 12:59:09,854 DEBUG [core.llm.ollama_client] START
2025-01-15 12:59:09,854 DEBUG [core.llm.base] Total ollama response time 26.42s, 33 prompt tokens, 2 completion tokens used
2025-01-15 12:59:09,854 INFO [core.cli.main] API check for ollama llama3.3 succeeded.

With Default, Importer, and CodeMonkey model overrides set, it sent 3 START messages and received 3 back.
The API check for each model succeeded.

@mercury64

To override agents in the config, you must use the class name of the agent that is to be overridden.

class CodeMonkey(BaseAgent):
    agent_type = "code-monkey"
    display_name = "Code Monkey"

...
class Importer(BaseAgent):
    agent_type = "importer"
    display_name = "Project Analyst"

Config:

  "agent": {
    "default": {
      "provider": "ollama",
      "model": "hf.co/LoupGarou/deepseek-coder-6.7b-instruct-pythagora-v3-gguf:Q6_K",
      "temperature": 0.5
    },
    "CodeMonkey": {
      "provider": "ollama",
      "model": "llama3.3",
      "temperature": 0.5
    },
    "Importer": {
      "provider": "ollama",
      "model": "codellama",
      "temperature": 0.5
    }
  },

That is how BaseAgent loads the config, from what I can tell in the code.
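mercury64's explanation can be sketched as a minimal lookup (the names `LLMConfig` and `config_for_agent` are illustrative, not the actual gpt-pilot API): the override key must match the agent's class name exactly, otherwise the lookup silently falls back to `default`.

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str
    model: str
    temperature: float = 0.5

def config_for_agent(agent_configs: dict, agent_class_name: str) -> LLMConfig:
    # Overrides are keyed by the agent *class name* (e.g. "CodeMonkey");
    # a key that matches no class name is never consulted, so the agent
    # silently falls back to the "default" entry.
    return agent_configs.get(agent_class_name, agent_configs["default"])

configs = {
    "default": LLMConfig("groq", "llama-3.1-70b-versatile"),
    "CodeMonkey": LLMConfig("ollama", "llama3.3"),
}

print(config_for_agent(configs, "CodeMonkey").provider)   # exact class name: ollama
print(config_for_agent(configs, "code-monkey").provider)  # wrong key: falls back to groq
```

This would explain the original report: keys like `"code-monkey"` or `"importer"` produce no error, they are simply never looked up.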
