How are you running AnythingLLM?
Docker (local)
What happened?
The LM Studio provider doesn't present a reasoning model's thoughts when they are output. Models such as deepseek-r1 and qwen3 emit reasoning content.
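To illustrate what I mean by "thoughts", here is a minimal sketch of the kind of parsing that seems to be missing for LM Studio. This is hypothetical code, not AnythingLLM's actual handler, and it assumes the model wraps its reasoning in `<think>...</think>` tags, as deepseek-r1 and qwen3 typically do:

```ts
// Hypothetical sketch only -- not AnythingLLM's actual provider code.
// Assumes the model wraps its reasoning in <think>...</think> tags.
interface SplitResponse {
  thought: string; // text a UI could render as the model's "thoughts"
  answer: string;  // the visible reply
}

function splitReasoning(raw: string): SplitResponse {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thought: "", answer: raw };
  return {
    thought: match[1].trim(),
    answer: raw.replace(match[0], "").trim(),
  };
}

const raw = "<think>The user greeted me, so I should greet back.</think>Hello!";
console.log(splitReasoning(raw));
// -> { thought: "The user greeted me, so I should greet back.", answer: "Hello!" }
```

With Ollama as the provider, this reasoning block shows up as a collapsible "thoughts" section in the chat; with LM Studio it is simply not displayed.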
Are there known steps to reproduce?
- Set up LM Studio as the provider and choose a reasoning model that outputs its reasoning.
- Send a basic prompt (not "@agent").
- Notice the thoughts are not presented as they are with another provider such as Ollama (a quick way to confirm the model itself emits reasoning is sketched below).
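As a sanity check, you can query LM Studio's OpenAI-compatible endpoint directly and see that the model does return reasoning content, so the gap appears to be in how the provider presents it. This is a sketch; the base URL is LM Studio's default local server and the model name is whatever reasoning model you have loaded:

```ts
// Sketch: confirm the model emits reasoning when asked through LM Studio directly.
// Base URL and model name are assumptions -- adjust to your local setup.
const BASE_URL = "http://localhost:1234/v1"; // LM Studio's default local server

async function main(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1", // reasoning model currently loaded in LM Studio
      messages: [{ role: "user", content: "What is 17 * 23?" }],
      stream: false,
    }),
  });
  const data = await res.json();
  // With a reasoning model, the content typically starts with a <think>...</think>
  // block -- the part that never gets rendered when LM Studio is the provider.
  console.log(data.choices[0].message.content);
}

main();
```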
LLM Provider & Model (if applicable)
No response
Embedder Provider & Model (if applicable)
No response