Replies: 6 comments 8 replies
-
We're facing the same issue; any feedback here would be appreciated. Running vLLM online is currently not possible for us.
-
Same issue; I'm offline 99% of the time and vLLM is not working offline :(.
-
Using an absolute path to load the model, rather than the name on Hugging Face, works for me. For example:
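A minimal sketch of that approach, assuming the weights have already been downloaded to a local directory (the path and model name here are hypothetical):

```python
from vllm import LLM

# Point vLLM at a local directory instead of a Hugging Face repo id,
# so no hub lookup is attempted. Replace the path with your own.
llm = LLM(model="/models/Meta-Llama-3-8B-Instruct")

outputs = llm.generate(["Hello, my name is"])
print(outputs[0].outputs[0].text)
```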
-
It works, but the generated responses are wrong most of the time.
-
Setting the Hugging Face Transformers module to offline mode (via a global environment variable) worked for me:
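Presumably something along these lines, with the flags set before vLLM is imported (the model id is only an example):

```python
import os

# Force the Hugging Face libraries to use only the local cache.
# These must be set before transformers/vllm are imported.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from vllm import LLM

# With the flags set, a repo id resolves from ~/.cache/huggingface/hub
# instead of the network (the weights must already be cached there).
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
```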
-
Hi, I'm trying to run my DeepSeek distilled model (whose weights I saved in /home/users/ntu/hong0259/.cache/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-14B) offline, but it doesn't work. I ran: But was met with this error; how can I fix it?
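A hedged sketch of how weights already sitting in the hub cache are typically loaded offline; the snapshot hash below is a placeholder, not necessarily the poster's actual setup:

```python
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before importing vllm

from vllm import LLM

# Either let the repo id resolve from the local hub cache...
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")

# ...or point directly at the snapshot directory inside the cache.
# Note the models--... folder itself is not loadable; use snapshots/<hash>:
# llm = LLM(model="/home/users/ntu/hong0259/.cache/huggingface/hub/"
#                 "models--deepseek-ai--DeepSeek-R1-Distill-Qwen-14B/snapshots/<hash>")
```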
-
I cloned the model repository from Hugging Face to my local machine and used the `--download-dir` parameter to specify the directory. However, when running vLLM, it still tries to connect to Hugging Face, which doesn't work without an internet connection. Even after setting `export HF_HUB_OFFLINE=1`, offline mode doesn't seem to be working. Is it possible to run vLLM offline, and if so, how can I achieve this?
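A sketch combining the earlier replies, under two assumptions: the offline flag has to be set inside the launching process before vLLM is imported, and `--download-dir` only controls where weights are stored, so passing the cloned repo's local path directly is what avoids the hub lookup (the path is hypothetical):

```python
import os

# Set offline mode inside the process, before vllm is imported; an
# `export` in a different shell session won't reach the server process.
os.environ["HF_HUB_OFFLINE"] = "1"

from vllm import LLM

# --download-dir only controls where weights are downloaded; a repo id
# can still trigger a hub lookup. Passing the cloned repository's local
# path sidesteps that entirely. The path below is hypothetical.
llm = LLM(model="/data/clones/Meta-Llama-3-8B-Instruct")
```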