
Granite 3.2 Thinking #30122

Open
4 tasks done
lemassykoi opened this issue Mar 5, 2025 · 8 comments · May be fixed by #30191

Comments


lemassykoi commented Mar 5, 2025

Checked other resources

  • This is a bug, not a usage question. For questions, please use GitHub Discussions.
  • I added a clear and detailed title that summarizes the issue.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.

Example Code

from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

# OLLAMA_* constants, TOOLS_LIST, memory, logger, config and question
# are defined elsewhere in the application

def build_ollama_llm(model:str = OLLAMA_CHAT_MODEL) -> ChatOllama:
    return ChatOllama(
        base_url    = OLLAMA_BASE_URL,
        model       = model,
        temperature = OLLAMA_TEMPERATURE,
        seed        = OLLAMA_SEED,
        format      = "",
        verbose     = OLLAMA_CHAT_FUNCTION_VERBOSE,
        disable_streaming = True,
        keep_alive  = 0
    )

def build_react_agent(model):
    logger.info('Creating ReAct Agent')
    return create_react_agent(
        model        = model,
        tools        = TOOLS_LIST,
        checkpointer = memory,
        name         = "ReAct_Agent_WIP",
        debug        = REACT_AGENT_VERBOSE
    )

llm = build_ollama_llm()
agent_supervisor = build_react_agent(llm)

messages_with_think = {
    "messages": [
        {"role": "control", "content": "thinking"},
        ("user", question),
    ]
}

if "granite3.2" in OLLAMA_CHAT_MODEL:
    messages = messages_with_think
else:
    messages = {"messages": [("user", question)]}

async for event in agent_supervisor.astream(
    input = messages,
    config = config,
    stream_mode = "values",
    debug = OPENAI_SUPERVISOR_VERBOSE
):
    logger.debug(event)

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "c:\react_WIP.py", line 945, in astream_interactive
    async for event in agent_supervisor.astream(
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 2304, in astream  
    while loop.tick(input_keys=self.input_channels):
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\loop.py", line 419, in tick
    mv_writes = apply_writes(
                ^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\algo.py", line 305, in apply_writes
    if channels[chan].update(vals) and get_next_version is not None:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\channels\binop.py", line 88, in update
    self.value = self.operator(self.value, value)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\graph\message.py", line 36, in _add_messages
    return func(left, right, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\graph\message.py", line 173, in add_messages
    for m in convert_to_messages(right)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\messages\utils.py", line 364, in convert_to_messages
    return [_convert_to_message(m) for m in messages]
            ^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\messages\utils.py", line 337, in _convert_to_message
    _message = _create_message_from_message_type(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\messages\utils.py", line 289, in _create_message_from_message_type
    raise ValueError(msg)
ValueError: Unexpected message type: 'control'. Use one of 'human', 'user', 'ai', 'assistant', 'function', 'tool', 'system', or 'developer'.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE

Description

I'm trying to use create_react_agent with Granite 3.2 with thinking enabled.

System Info

langchain_ollama version: 0.2.3
langchain_core version: 0.3.41
langchain_community version: 0.3.19
langchain_openai version: 0.2.12
langgraph version: 0.3.5
langsmith version: 0.3.11

@vbarda vbarda transferred this issue from langchain-ai/langgraph Mar 5, 2025
@vbarda
Contributor

vbarda commented Mar 5, 2025

Does Ollama actually support a "control" message? I believe you need to use a "system" message instead, or perhaps there is some other way to supply this param via a config. Do you have any docs referencing this "control" message?


@hinthornw
Collaborator

Did you try a ChatMessage with the control type?

@lemassykoi
Author

Did you try a ChatMessage with the control type?

Hmmm, OK, you are right: I tried with ollama directly, without langchain.

import ollama

# OLLAMA_CHAT_MODEL and query are defined elsewhere
ollama_client = ollama.Client(host="http://192.168.10.59:11434")

response = ollama_client.chat(
    model=OLLAMA_CHAT_MODEL,
    messages=[
        {
            "role": "control",
            "content": "thinking",
        },
        {
            "role": "user",
            "content": query,
        }
    ],
)
print(response)

Exception:

1 validation error for Message
role
  Input should be 'user', 'assistant', 'system' or 'tool' [type=literal_error, input_value='control', input_type=str]
    For further information visit https://errors.pydantic.dev/2.10/v/literal_error

I don't understand, as ollama 0.5.13 supports all Granite 3.2 models.
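
The client-side failure above is just a literal check on the role field; the gist of it, sketched in plain Python (hypothetical names, not the actual ollama-python code, which uses a pydantic Literal):

```python
# Hypothetical sketch of the role validation that the Python client
# performs before the request ever reaches the Ollama server;
# "control" is simply not in the accepted set.
ACCEPTED_ROLES = {"user", "assistant", "system", "tool"}

def validate_message(message: dict) -> dict:
    role = message.get("role")
    if role not in ACCEPTED_ROLES:
        raise ValueError(
            f"Input should be 'user', 'assistant', 'system' or 'tool', got {role!r}"
        )
    return message

validate_message({"role": "user", "content": "hello"})  # passes
try:
    validate_message({"role": "control", "content": "thinking"})
except ValueError as e:
    print(e)
```

This is why the server accepts the raw HTTP request (next comment) while the client library rejects the same payload.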

@lemassykoi
Author

here is a reference message : ollama/ollama#8955 (comment)

@lemassykoi
Author

This is working:

echo '{"model": "granite3.2:8b-instruct-q8_0",
         "messages":[
           {"role":"control","content":"thinking"},
           {"role":"user","content":"how many times does the letter `r` occur in the word `strawberry`?"}
         ],
         "stream":false}' | curl -s http://localhost:11434/api/chat -d @- | jq -r .message.content

Answer:

Here is my thought process:

The user is asking for a simple count of a specific letter in a given word. The word is "strawberry", and they want to know how many times the letter 'r' appears. This requires a straightforward character check within the word.

Here is my response:

The letter 'r' occurs twice in the word 'strawberry'.

Here's the breakdown: s-t-r-a-w-b-e-r-y. As you can see, 'r' shows up at the 3rd and 8th positions.
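
The same request body can be assembled from Python's standard library, bypassing any client-side role check. A sketch that only builds the payload; actually sending it assumes a running Ollama server, so the network call is left commented out:

```python
import json
# from urllib.request import Request, urlopen  # uncomment to actually send

payload = {
    "model": "granite3.2:8b-instruct-q8_0",
    "messages": [
        {"role": "control", "content": "thinking"},
        {"role": "user",
         "content": "how many times does the letter `r` occur in the word `strawberry`?"},
    ],
    "stream": False,
}
body = json.dumps(payload).encode()

# req = Request("http://localhost:11434/api/chat", data=body,
#               headers={"Content-Type": "application/json"})
# print(json.load(urlopen(req))["message"]["content"])
```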

@lemassykoi
Author

lemassykoi commented Mar 6, 2025

With a patch for ollama-python, the error now comes from langchain_core.

see https://github.com/ollama/ollama-python/pull/462#issuecomment-2702926254

I updated the traceback in the first post.

@rylativity

rylativity commented Mar 6, 2025

The Ollama server has no problem processing messages with the "control" role. However, both ollama-python (the Ollama Python client) and langchain (specifically langchain-core) raise an error if a message's role field is not one of a set of accepted literals.

The PR linked by @lemassykoi resolves the error raised by ollama-python, but it looks like similar changes would need to be made in both langchain-core and langchain-ollama.
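
In langchain-core the failure comes from a role-to-class dispatch on the role string (the `_create_message_from_message_type` frame in the traceback). A plain-Python sketch of the shape of that dispatch, with hypothetical names, and of the kind of relaxation a fix might add:

```python
class ChatMsg:
    """Generic message keeping an arbitrary role, like ChatMessage."""
    def __init__(self, role: str, content: str):
        self.role, self.content = role, content

# Roles the current coercion accepts (from the ValueError in the traceback)
KNOWN_ROLES = {"human", "user", "ai", "assistant",
               "function", "tool", "system", "developer"}

def coerce(role: str, content: str, allow_unknown: bool = False) -> ChatMsg:
    if role in KNOWN_ROLES:
        return ChatMsg(role, content)
    if allow_unknown:
        # The kind of fallback a fix could add: keep the raw role
        # instead of raising, and let the provider decide.
        return ChatMsg(role, content)
    raise ValueError(f"Unexpected message type: {role!r}")

msg = coerce("control", "thinking", allow_unknown=True)
print(msg.role)  # control
```

The real fix would of course live in langchain-core's coercion and in langchain-ollama's request serialization, not in user code; this only illustrates where the role string is currently dropped.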
