Model loading problem with Ollama #218
Comments
Same exact issue here, with a pre-existing Ollama install.
Hello, inside the frontend the model name field is used in several places.
It can be fixed by updating the frontend's Settings.tsx; a PR has been created for the settings fix.
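For context, a minimal sketch of the underlying mismatch (assuming the backend lists models with the official ollama Python client >= 0.4, which this thread suggests): the typed list response exposes a `model` field but no `name`, so any code that still reads `name` ends up with empty values.

```python
# Sketch: inspect what the ollama Python client (>= 0.4) returns.
# Assumes a local Ollama server running on the default port.
import ollama

resp = ollama.list()              # typed ListResponse, not a plain dict
for m in resp.models:             # entries are ListResponse.Model objects
    print(m.model)                # e.g. "llama3:latest" -- populated
    print(m.model_dump().keys())  # typically model, modified_at, digest, size, details -- no "name"
```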
Nice, thanks for the fix @alexandregodard. Hope your fix finds its way into the app soon, so I can update the tool and try again :)
Hello, I have made the same changes on my side, but I still can't see the name of the Ollama model after running it.
Also, after I changed the frontend code, running python -m openui doesn't seem to pick up the change.
If you have any usage problems, feel free to reach out to me, guys~
In the backend's server.py, refactor the @router.get("/v1/models") route as below:

```python
from datetime import datetime
from typing import Optional

from ollama._types import ModelDetails, SubscriptableBaseModel
from pydantic import ByteSize


# The typed Model returned by the ollama client (>= 0.4) only carries a
# "model" field, while the frontend expects "name", so expose both.
class Model_new(SubscriptableBaseModel):
    model: Optional[str] = None
    name: Optional[str] = None
    modified_at: Optional[datetime] = None
    digest: Optional[str] = None
    size: Optional[ByteSize] = None
    details: Optional[ModelDetails] = None


@router.get("/v1/models", tags=["openui/models"])
async def models():
    tasks = [
        get_openai_models(),
        get_groq_models(),
        get_ollama_models(),
        get_litellm_models(),
    ]
    openai_models, groq_models, ollama_models, litellm_models = await asyncio.gather(
        *tasks
    )
    # Copy each Ollama entry into Model_new and mirror "model" into "name"
    # so the frontend settings dialog can display it.
    final_ollama = []
    for i in ollama_models:
        data = i.model_dump()
        new_model = Model_new(**data)
        new_model.name = new_model.model
        final_ollama.append(new_model)
    return {
        "models": {
            "openai": openai_models,
            "groq": groq_models,
            "ollama": final_ollama,
            "litellm": litellm_models,
        }
    }
```
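If someone wants to sanity-check a patched backend, a quick sketch (assuming openui is listening on http://localhost:7878; adjust the base URL to your setup):

```python
# Query the patched /v1/models route and print the Ollama entries.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:7878/v1/models") as resp:
    payload = json.load(resp)

for entry in payload["models"]["ollama"]:
    # After the patch both keys should be present and non-empty.
    print(entry.get("name"), entry.get("model"))
```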
Checked this refactor. It does not work; the items are still of type <class 'ollama._types.ListResponse.Model'>.
Hi there,
I am using openui via Pinokio (https://pinokio.computer/item?uri=https://github.com/pinokiofactory/openui).
As an LLM backend, I'm using Ollama in its current version, 0.4.6.
When I try your tool, I get the error message "Error! 404 Error code: 404 - {'error': {'message': 'model "undefined" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}" after sending a prompt.
When I try to select a different model, I noticed that the select box in the settings window does not show any model names, only empty entries:

It does not matter which of those entries I choose; the error persists.
If I quit Ollama and try to resolve the installed models, the selection is empty:

So the model resolution from Ollama seems to work at least partially (the 6 entries match the 6 currently installed models).
My guess is that openui is not able to parse the information from the Ollama model list response correctly, and that this is what leads to the error message above.
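(One way to narrow this down, as a sketch: compare the raw model list from Ollama's native API with what openui shows, assuming Ollama listens on its default port 11434.)

```python
# List the models directly from Ollama's /api/tags endpoint.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for m in tags["models"]:
    print(sorted(m.keys()))  # shows which keys (e.g. "name", "model") each entry carries
```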
Do you have any ideas how to solve this problem?
Thx :)