Print reasoning with DeepSeek model #30067
Comments
It is already mentioned in #29513. As an example:

```python
messages = [
    {"role": "system", "content": "You are an assistant."},
    {"role": "user", "content": "Explain the reasoning behind this decision."},
]
response = llm.invoke(messages)
print(response.content)
print(response.additional_kwargs.get("reasoning", ""))
```

This setup will allow you to capture both the content and reasoning from DeepSeek's responses.
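Note that the key under which the reasoning lands in `additional_kwargs` varies by integration (`langchain_deepseek` uses `"reasoning_content"`, while some OpenAI-compatible gateways use `"reasoning"`). A defensive helper — hypothetical, illustrating the lookup pattern rather than any official LangChain API, and tested here against a stand-in message object so it runs offline — could try the common key names:

```python
def extract_reasoning(message):
    """Return the reasoning text from a chat message, if present.

    Tries the key names seen in the wild: langchain_deepseek stores
    the trace under "reasoning_content"; some OpenAI-compatible
    gateways use "reasoning".
    """
    kwargs = getattr(message, "additional_kwargs", {}) or {}
    for key in ("reasoning_content", "reasoning"):
        if kwargs.get(key):
            return kwargs[key]
    return ""


class FakeMessage:
    """Stand-in for an AIMessage, so this sketch runs without an API key."""

    def __init__(self, content, additional_kwargs):
        self.content = content
        self.additional_kwargs = additional_kwargs


msg = FakeMessage("9.8 is greater.", {"reasoning_content": "Compare tenths: 8 > 1."})
print(extract_reasoning(msg))  # Compare tenths: 8 > 1.
```

With a real `AIMessage` from `llm.invoke(...)`, the same helper would be called identically; an empty string means the integration did not surface the reasoning at all.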
Thanks @YassinNouh21! But it does not work for me with the ChatOpenAI class. This is the code I'm using:
The same code works fine for me:

```
content='To determine whether 9.11 or 9.8 is greater, follow these steps:\n\n1. Align the decimal places: \n Write both numbers with the same number of decimal places: \n - \( 9.11 \) remains \( 9.11 \). \n - \( 9.8 \) becomes \( 9.80 \) (adding a trailing zero for clarity).\n\n2. Compare digit by digit (left to right): \n - Units place: Both have \( 9 \), so they are equal here. \n - Tenths place: Compare \( 1 \) (from \( 9.11 \)) vs. \( 8 \) (from \( 9.80 \)). Since \( 8 > 1 \), \( 9.80 \) is already larger at this stage. \n - Note: The hundredths place (\( 1 \) vs. \( 0 \)) is irrelevant once the tenths place determines the result.\n\n3. Conclusion: \n \( 9.8 \) (or \( 9.80 \)) is greater than \( 9.11 \).\n\nFinal Answer: \n\( 9.8 \) is greater than \( 9.11 \). The key factor is the tenths place: \( 8 > 1 \), which outweighs any smaller digits in subsequent places.' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 1067, 'prompt_tokens': 30, 'total_tokens': 1097, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek/deepseek-r1', 'system_fingerprint': 'fastcoe', 'finish_reason': 'stop', 'logprobs': None} id='run-d10b4520-d61b-4460-8f54-360c1fb92799-0' usage_metadata={'input_tokens': 30, 'output_tokens': 1067, 'total_tokens': 1097, 'input_token_details': {}, 'output_token_details': {}}
```
Thanks a lot @andrasfe! Yes, the code works, but I'm not able to access the reasoning, which should look something like this:
@laurafbec, you are right. I validated that the reasoning tokens are correctly returned by the openai module.
Thanks a lot @andrasfe!
@andrasfe I think the problem is that LangChain currently doesn't save the reasoning content when using DeepSeek. Here's a captured output when I print the LLMResult object inside chat_models.py:

```
generations=[[ChatGeneration(text='The translation of "I love programming." into German is:\n\n**"Ich liebe das Programmieren."**\n\nThis translation uses the nominalized verb "Programmieren" with the article "das" to accurately reflect the gerund form in English. Alternatively, if using the infinitive construction, it can also be:\n\n**"Ich liebe es, zu programmieren."**\n\nBoth are correct, but the first option is more direct for the given sentence.', generation_info={'finish_reason': 'stop', 'logprobs': None}, message=AIMessage(content='The translation of "I love programming." into German is:\n\n**"Ich liebe das Programmieren."**\n\nThis translation uses the nominalized verb "Programmieren" with the article "das" to accurately reflect the gerund form in English. Alternatively, if using the infinitive construction, it can also be:\n\n**"Ich liebe es, zu programmieren."**\n\nBoth are correct, but the first option is more direct for the given sentence.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 1229, 'prompt_tokens': 20, 'total_tokens': 1249, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 1134, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 20}, 'model_name': 'deepseek-reasoner', 'system_fingerprint': 'fp_5417b77867_prod0225', 'finish_reason': 'stop', 'logprobs': None}, id='run-69543cf6-8c13-4f78-8a36-d87024b90f74-0', usage_metadata={'input_tokens': 20, 'output_tokens': 1229, 'total_tokens': 1249, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 1134}}))]] llm_output={'token_usage': {'completion_tokens': 1229, 'prompt_tokens': 20, 'total_tokens': 1249, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 1134, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}, 'prompt_cache_hit_tokens': 0, 'prompt_cache_miss_tokens': 20}, 'model_name': 'deepseek-reasoner', 'system_fingerprint': 'fp_5417b77867_prod0225'} run=None type='LLMResult'
```

Notice there is no reasoning content. Here's how the DeepSeek API uses the OpenAI client to get the reasoning content:

```python
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
)
reasoning_content = response.choices[0].message.reasoning_content
content = response.choices[0].message.content
```

I guess some API has to be updated. Hope this helps you a bit.
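The shape of that raw OpenAI-client response can be mimicked offline with stub objects; the sketch below (stand-in data via `SimpleNamespace`, no API call) only illustrates where the two fields live — `reasoning_content` is a DeepSeek extension on the message, alongside the standard `content`:

```python
from types import SimpleNamespace

# Stub mirroring the shape of a chat.completions.create() response
# from the DeepSeek API: the message carries both `content` and the
# DeepSeek-specific `reasoning_content` field.
response = SimpleNamespace(
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(
                content="9.8 is greater than 9.11.",
                reasoning_content="Align decimals: 9.80 vs 9.11; tenths 8 > 1.",
            )
        )
    ]
)

reasoning_content = response.choices[0].message.reasoning_content
content = response.choices[0].message.content
print(reasoning_content)  # Align decimals: 9.80 vs 9.11; tenths 8 > 1.
print(content)            # 9.8 is greater than 9.11.
```

The integration's job is essentially to copy `reasoning_content` from this message object into the returned AIMessage's `additional_kwargs`, which is the piece the thread reports as missing.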
@laurafbec I have tried to solve the issue. Please check if it solves your case. Have a nice day.
What is the motivation to use ChatOpenAI instead of ChatDeepSeek? ChatDeepSeek extends BaseChatOpenAI; its intent is to support DeepSeek. ChatOpenAI is intended to support the OpenAI API. It's not tenable to modify this class to support all usage of the
@JasonHonKL, I got the fix working but have not yet initiated a PR. It's a very simple enhancement (4 lines of code) to BaseChatOpenAI to add the reasoning key to response.additional_kwargs if not already there. This is in line with @YassinNouh21's suggestion. But I see how adding this to the community module is a better choice.
Glad it works for you. Can you mark it as solved?
Thanks @andrasfe and @YassinNouh21!! Should I update langchain to test it?
@laurafbec I guess they won't update it, because they intend for you to use the DeepSeek chat class. You can check my pull request #30186; it actually just modifies a few lines of code. See if it is useful for you.
@laurafbec can you use OpenRouter with ChatDeepSeek? You can set the base_url in api_base.
This works for me, indeed. The key is not to use base_url.
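A minimal sketch of that suggestion follows. The OpenRouter model name (`deepseek/deepseek-r1`), the `api_base` value, and the `OPENROUTER_API_KEY` variable are assumptions taken from this thread rather than official documentation, and the live call is guarded so the snippet runs without a key:

```python
import os

# Constructor arguments for pointing ChatDeepSeek at OpenRouter.
# `api_base` overrides the default DeepSeek endpoint; the model name
# follows OpenRouter's provider/model convention (assumed here).
deepseek_kwargs = {
    "model": "deepseek/deepseek-r1",
    "api_base": "https://openrouter.ai/api/v1",
    "api_key": os.getenv("OPENROUTER_API_KEY", ""),
    "temperature": 0,
}


def print_reasoning():
    """Live call; needs langchain_deepseek installed and a valid key."""
    from langchain_deepseek import ChatDeepSeek

    llm = ChatDeepSeek(**deepseek_kwargs)
    msg = llm.invoke("9.11 and 9.8, which is greater?")
    print(msg.content)
    # langchain_deepseek stores the trace under "reasoning_content"
    print(msg.additional_kwargs.get("reasoning_content", ""))


if deepseek_kwargs["api_key"]:
    print_reasoning()
else:
    print("Set OPENROUTER_API_KEY to run the live call.")
```

The point of routing through ChatDeepSeek instead of ChatOpenAI is that the DeepSeek integration already knows to surface the reasoning field, so no `extra_body` workaround is needed.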
@andrasfe Same, it works for me.
But I'm not able to print the reasoning... are you?
@laurafbec I think you filled in the wrong kwargs.
Example Code
The following code does not return DeepSeek reasoning:
```python
from langchain_core.prompts import ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()
import os

from langchain_deepseek import ChatDeepSeek
from langchain_openai import ChatOpenAI

# os.environ["DEEPSEEK_API_KEY"] = os.getenv("DEEPSEEK_API_KEY")
llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)
llm = ChatOpenAI(
    # model='deepseek/deepseek-chat:free',
    # model='deepseek/deepseek-chat',
    model="deepseek/deepseek-r1:nitro",
    # model='deepseek/deepseek-r1-distill-qwen-32b',
    # model='deepseek/deepseek-r1-distill-llama-8b',
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
    model_kwargs={"extra_body": {"include_reasoning": True}},
)
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
result = chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
print(result.content)
print(result)
```
Error Message and Stack Trace (if applicable)
I'm not able to print the reasoning content of deepseek model. Find attached the code I'm using.
Thanks in advance!
Description
(Same code as in the Example Code section above.)