Description
Environment
- ollama: 0.11.4
- ollama-python: 0.5.3
- Python: 3.10
- OS: Windows 10
Background
I downloaded a GGUF file locally and created a model with a custom Modelfile:
FROM ./qwen3-embedding-0.6b-q8_0.gguf
After running:
ollama create Qwen3-Embedding:0.6B -f Modelfile
the CLI confirms the model is correctly registered:
ollama show Qwen3-Embedding:0.6B
Model
architecture qwen3
parameters 595.78M
context length 32768
embedding length 1024
quantization Q8_0
Capabilities
embedding
The REST endpoint /api/tags returns the expected JSON, including the name field:
{
"models": [
{
"name": "Qwen3-Embedding:0.6B",
"model": "Qwen3-Embedding:0.6B",
"modified_at": "2025-08-13T21:02:21.918215+08:00",
"size": 639150858,
"digest": "43f15..........0112",
"details": {
"parent_model": "",
"format": "gguf",
"family": "qwen3",
"families": ["qwen3"],
"parameter_size": "595.78M",
"quantization_level": "Q8_0"
}
}
]
}
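For reference, the presence of both keys can be checked directly against the JSON above; a minimal sketch using an abridged copy of the sample payload from this report:

```python
import json

# Abridged /api/tags payload copied from the response above.
payload = """
{
  "models": [
    {
      "name": "Qwen3-Embedding:0.6B",
      "model": "Qwen3-Embedding:0.6B",
      "size": 639150858
    }
  ]
}
"""

models = json.loads(payload)["models"]
# The raw REST response carries both identifiers.
print(sorted(models[0].keys()))  # ['model', 'name', 'size']
```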
Problem
When I use the official ollama-python client:
from ollama import Client
client = Client(host="http://localhost:11434")
models = client.list()["models"]
print(models)
the returned objects do not contain the name attribute, only:
[Model(
model='Qwen3-Embedding:0.6B',
modified_at=datetime.datetime(...),
digest='43f15..........0112',
size=639150858,
details=ModelDetails(...)
)]
This breaks downstream libraries (e.g. mem0) that rely on:
# site-packages\mem0\embeddings\ollama.py
local_models = self.client.list()["models"]
if not any(model.get("model") == self.config.model for model in local_models):
self.client.pull(self.config.model)
Because model.get("model") is None, the condition always evaluates to True, leading to unnecessary pulls or runtime errors.
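Until the SDK and downstream libraries agree on the field, the check could be made robust against both shapes; a hedged sketch of what the mem0 snippet above might do instead (the `ensure_model` helper and the fake client are mine, not an official patch):

```python
def ensure_model(client, wanted):
    """Pull `wanted` only if no local entry matches, tolerating dicts and typed objects."""
    local_models = client.list()["models"]
    names = {
        m.get("model") if isinstance(m, dict) else getattr(m, "model", None)
        for m in local_models
    }
    if wanted not in names:
        client.pull(wanted)

# Demonstration with a stand-in client that records pull calls.
class FakeClient:
    def __init__(self, models):
        self._models = models
        self.pulled = []

    def list(self):
        return {"models": self._models}

    def pull(self, name):
        self.pulled.append(name)

class TypedModel:
    def __init__(self, model):
        self.model = model

client = FakeClient([TypedModel("Qwen3-Embedding:0.6B")])
ensure_model(client, "Qwen3-Embedding:0.6B")
print(client.pulled)  # [] -- no spurious pull for an already-local model
```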
Question
Is this discrepancy intentional, or am I missing a configuration step when importing a local GGUF?
Could the ollama-python SDK expose the same name field that the REST API already provides?
Thank you for your time and for maintaining this great project!