2 changes: 1 addition & 1 deletion Makefile
@@ -12,7 +12,7 @@ install:
 	. openr1/bin/activate && uv pip install --upgrade pip && \
 	uv pip install vllm==0.8.5.post1 && \
 	uv pip install setuptools && \
-	uv pip install flash-attn --no-build-isolation && \
+	uv pip install flash-attn==2.7.4.post1 --no-build-isolation && \
 	GIT_LFS_SKIP_SMUDGE=1 uv pip install -e ".[dev]"

 style:
4 changes: 2 additions & 2 deletions README.md
@@ -66,11 +66,11 @@ uv venv openr1 --python 3.11 && source openr1/bin/activate && uv pip install --u
 > [!TIP]
 > For Hugging Face cluster users, add `export UV_LINK_MODE=copy` to your `.bashrc` to suppress cache warnings from `uv`

-Next, install vLLM and FlashAttention:
+Next, install vLLM and FlashAttention (use Flash Attention `v2.7.4.post1` to avoid ABI mismatches):

 ```shell
 uv pip install vllm==0.8.5.post1
-uv pip install setuptools && uv pip install flash-attn --no-build-isolation
+uv pip install setuptools && uv pip install flash-attn==2.7.4.post1 --no-build-isolation
 ```

 This will also install PyTorch `v2.6.0` and it is **very important** to use this version since the vLLM binaries are compiled for it. You can then install the remaining dependencies for your specific use case via `pip install -e .[LIST OF MODES]`. For most contributors, we recommend:
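Since this PR exists precisely because an unpinned `flash-attn` can drift out of sync with the pinned `vllm`/`torch` binaries, a quick post-install check can confirm the pins actually took effect. This is a minimal sketch, not part of the repo: the `check_pins` helper and the pin dictionary are illustrative, using only the standard library's `importlib.metadata`.

```python
# Hypothetical sanity check (not part of this repo): verify that the versions
# pinned in the Makefile are what actually ended up in the environment.
from importlib.metadata import version, PackageNotFoundError

# Pins taken from this PR's Makefile/README; torch comes in via the vllm wheel.
PINS = {"vllm": "0.8.5.post1", "flash-attn": "2.7.4.post1", "torch": "2.6.0"}

def check_pins(pins=PINS):
    """Return {package: (expected_version, installed_version_or_None)}."""
    report = {}
    for pkg, expected in pins.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None  # package is missing from the environment
        report[pkg] = (expected, installed)
    return report

if __name__ == "__main__":
    for pkg, (expected, installed) in check_pins().items():
        status = "OK" if installed == expected else "MISMATCH"
        print(f"{pkg}: expected {expected}, found {installed} [{status}]")
```

Run it inside the `openr1` virtualenv after `make install`; any `MISMATCH` line means a wheel was resolved to a different version than the one the vLLM binaries were built against.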