max_tokens is replaced with max_completion_tokens in the HTTP request
#30113
Labels
🤖:bug
Related to a bug, vulnerability, unexpected error with an existing feature
Checked other resources
Example Code
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

API_KEY = 'sk-key'
BASE_URL = 'https://api.deepseek.com'
MODEL = 'deepseek-chat'
MAX_TOKENS = 8888

chat = ChatOpenAI(
    base_url=BASE_URL,
    model=MODEL,
    api_key=API_KEY,
    max_tokens=MAX_TOKENS,  # has no effect: sent as max_completion_tokens
    temperature=0.664
)

messages = [
    SystemMessage(content="You are a helpful AI."),
    HumanMessage(content="Please write a poem about spring.")
]

response = chat.invoke(messages)
print(response.content)
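As a hedged illustration of the behavior reported in this issue (not the actual langchain-openai source), the sketch below mimics a wrapper that renames its typed max_tokens field to max_completion_tokens when building the request payload, while entries in a pass-through dict are copied verbatim. ChatOpenAI does accept a model_kwargs constructor parameter whose entries are forwarded to the request body, so passing max_tokens there may be a workaround for backends that only honor max_tokens; whether this avoids the rename, or triggers a warning, depends on the installed langchain-openai version. The build_payload function here is a hypothetical stand-in, not the library's code.

```python
def build_payload(max_tokens=None, model_kwargs=None):
    """Mimic a client wrapper that serializes its typed `max_tokens`
    field under the newer OpenAI name `max_completion_tokens`, while
    copying `model_kwargs` entries into the payload unchanged."""
    payload = {}
    if max_tokens is not None:
        # The typed field is renamed for the newer OpenAI API.
        payload["max_completion_tokens"] = max_tokens
    # Untyped extras are passed through verbatim.
    payload.update(model_kwargs or {})
    return payload

# Setting the typed field: the request carries max_completion_tokens.
print(build_payload(max_tokens=8888))
# Passing it via the pass-through dict keeps the max_tokens name.
print(build_payload(model_kwargs={"max_tokens": 8888}))
```

With ChatOpenAI, the analogous workaround would be `ChatOpenAI(..., model_kwargs={"max_tokens": 8888})` instead of `max_tokens=8888`, under the assumptions stated above.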
Error Message and Stack Trace (if applicable)
max_tokens is replaced with max_completion_tokens in the HTTP request, but the parameter our models actually honor is max_tokens.

Description
When we use ChatOpenAI to call our self-deployed model on an internal network, I tried to set the max_tokens parameter of this class and found that it is replaced by max_completion_tokens in the request. I cannot understand the reason for this. Was the max_completion_tokens parameter used before? The fact is that our model relies on max_tokens to set the actual token limit. So, does this mean that ChatOpenAI is deprecated?

System Info
System Information
Package Information
Optional packages not installed
Other Dependencies