[worker, training_utils] fix: Engine Metric Aggregation #4778
wuxibin89 merged 10 commits into verl-project:main
Conversation
Code Review
This pull request introduces a MetricList class to provide more flexible aggregation of metrics, which is a good improvement. The changes correctly adapt the ppo_loss function to use this new abstraction.
However, I've found a few critical issues that will cause runtime errors:
- In `verl/utils/metric/utils.py`, there's a bug in type checking: using `isinstance` with a `typing.Union` will raise a `TypeError`.
- In `verl/workers/engine_workers.py`, the logic for aggregating metrics from distributed processes doesn't correctly handle the new `MetricValue` objects, which will also lead to a `TypeError`.
I've also included some high-severity suggestions to improve code quality and maintainability, such as avoiding wildcard imports and correcting inconsistent type hints. Please review the detailed comments for suggestions on how to fix these issues.
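For illustration, a minimal sketch of the `isinstance` pitfall the review describes, assuming the alias is a `typing.Union` (support for `isinstance` with `typing.Union` only landed in Python 3.10; a plain tuple of types is portable across versions):

```python
from typing import Union

import torch

Numeric = Union[int, float, torch.Tensor]

x = 1.0
# isinstance(x, Numeric)  # TypeError on Python <= 3.9:
#                         # "Subscripted generics cannot be used with class and instance checks"
isinstance(x, (int, float, torch.Tensor))  # True -- works on every version
```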
verl/utils/metric/__init__.py
Outdated
```diff
  # limitations under the License.


- from .utils import reduce_metrics
+ from .utils import *
```
Using wildcard imports (import *) is generally discouraged as it can pollute the namespace and make it unclear which names are being imported. It's better to explicitly import the names you need from the utils module. This improves code readability and maintainability.
```diff
- from .utils import *
+ from .utils import AggregationType, MetricList, MetricValue, reduce_metrics
```
verl/utils/metric/utils.py
Outdated
```python
            raise ValueError(f"Unsupported aggregation type: {aggregation}")
        self.values = values if values is not None else []

    def append(self, value: float | MetricValue) -> None:
```
The type hint value: float | MetricValue is inconsistent with the method's implementation, which also handles torch.Tensor and int values. To improve type safety and clarity, the hint should be expanded to include all supported types. Using the Numeric alias, which is Union[int, float, torch.Tensor], would be appropriate here.
```diff
- def append(self, value: float | MetricValue) -> None:
+ def append(self, value: Union[Numeric, MetricValue]) -> None:
```
/gemini |
Hello @JacobHelwig! It looks like you've invoked me, but you need to specify a command after `/gemini`. For example, you can use `/gemini review`.
/gemini review |
Code Review
This pull request introduces a flexible metric aggregation system using a MetricList class, which is a valuable improvement for handling different metric scaling behaviors. The implementation is mostly solid, but I've identified two significant issues that could lead to runtime errors. One is a critical IndexError from accessing an empty list, and the other is a TypeError due to improper handling of mixed list and MetricList types for the same metric. My review includes specific code suggestions to resolve these bugs.
verl/utils/py_functional.py
Outdated
```python
if isinstance(val, (list, MetricList)):
    data[new_key].extend(val)
else:
    data[new_key].append(val)
```
There's a potential TypeError here. If data[new_key] is a standard Python list (which can happen if an empty list [] was appended for this key previously) and val is a MetricList, the call data[new_key].extend(val) will fail because MetricList is not iterable by default for list.extend. You should handle this case by 'upgrading' the list to a MetricList before extending to ensure type consistency.
```python
if isinstance(data[new_key], list) and isinstance(val, (MetricValue, MetricList)):
    # Upgrade list to MetricList if it's not already one
    new_list = val.init_list()
    new_list.extend(data[new_key])
    data[new_key] = new_list
if isinstance(val, (list, MetricList)):
    data[new_key].extend(val)
else:
    data[new_key].append(val)
```
/gemini review
Code Review
This pull request introduces a Metric class to allow for flexible aggregation of metrics, which is a solid approach to address the incorrect scaling issue. The implementation is largely well-executed. However, I've identified a few critical issues that could lead to runtime crashes. These include potential IndexError and AttributeError exceptions from improper handling of empty lists and incorrect method invocation, respectively. Furthermore, the Metric.aggregate method is susceptible to a ValueError when dealing with empty metrics using MIN or MAX aggregation. My specific comments provide detailed suggestions to resolve these critical problems.
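For context, a plain-Python illustration of why empty MIN/MAX aggregation is hazardous while SUM is not:

```python
sum([])  # 0 -- the empty sum is well-defined
max([])  # ValueError: max() arg is an empty sequence
min([])  # ValueError: min() arg is an empty sequence
```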
verl/utils/py_functional.py
Outdated
| new_key = f"{prefix}{key}" if not key.startswith(prefix) else key | ||
| if new_key not in data: | ||
| data[new_key] = [] | ||
| data[new_key] = Metric.init_list(val) if isinstance(val, Metric) else [] |
The init_list method is an instance method of the Metric class, not a class method. Calling Metric.init_list(val) will raise an AttributeError because the Metric class itself does not have this method. You should call it on the Metric instance val, as in val.init_list().
```diff
- data[new_key] = Metric.init_list(val) if isinstance(val, Metric) else []
+ data[new_key] = val.init_list() if isinstance(val, Metric) else []
```
@JacobHelwig Please address Gemini's review: `init_list` is not a classmethod.
@JacobHelwig Please format code according to: https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting
/gemini review |
Code Review
This pull request introduces a Metric class to provide flexible aggregation strategies for different metrics, addressing an issue where metrics were being incorrectly scaled. The implementation is mostly solid and includes comprehensive tests. However, I've found a critical issue that could lead to a crash due to an IndexError when handling an empty list of metrics. I've also identified a couple of high-severity bugs related to the new Metric class logic that could lead to incorrect behavior or unexpected errors. Please address these points to ensure the stability and correctness of the new metric management system.
```diff
  if isinstance(val, list):
-     output[key] = list(chain.from_iterable(val))
+     output[key] = (
+         Metric.chain(val) if isinstance(val[0], Metric) else list(chain.from_iterable(val))
+     )
```
The expression isinstance(val[0], Metric) will raise an IndexError if val is an empty list. This will crash the worker. You should first check if the list is not empty before accessing its first element.
```diff
- Metric.chain(val) if isinstance(val[0], Metric) else list(chain.from_iterable(val))
+ Metric.chain(val) if val and isinstance(val[0], Metric) else list(chain.from_iterable(val))
```
```python
if value is not None:
    self.append(value)
```
The __init__ method does not correctly handle initialization with a list of values. The type hint for value is Optional[Numeric | list[Numeric]], but if a list is passed, self.append(value) is called. This will raise a ValueError because append is designed for single numeric values or Metric objects, not lists. You should check if value is a list and call self.extend(value) in that case.
```diff
  if value is not None:
-     self.append(value)
+     if isinstance(value, list):
+         self.extend(value)
+     else:
+         self.append(value)
```
```python
if len(metric_lists) == 0:
    return cls(aggregation=AggregationType.MEAN)
```
When chain is called with an empty list of metrics, it returns a new Metric with AggregationType.MEAN by default. This arbitrary default can lead to incorrect behavior. For example, if a list of metrics intended for summation is empty, chaining them should result in a metric that aggregates to 0 (the sum of an empty set), not NaN (the mean of an empty set). It would be safer to raise a ValueError if the list is empty, forcing the caller to handle this case explicitly.
```diff
- if len(metric_lists) == 0:
-     return cls(aggregation=AggregationType.MEAN)
+ if not metric_lists:
+     raise ValueError("Cannot chain an empty list of metrics.")
```
```python
Numeric = int | float | torch.Tensor


class Metric:
```
@JacobHelwig The Docstring Coverage CI failed, please add a docstring for `Metric`.
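A possible docstring sketch, inferred from this review thread (the exact wording is the author's call):

```python
class Metric:
    """List-like container of metric values with an attached aggregation strategy.

    Unlike a plain list, a Metric remembers how its values should be reduced
    (e.g. MEAN, SUM, MIN, MAX), so reduce_metrics and cross-worker chaining do
    not need to hard-code per-key aggregation rules.
    """
```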
* [recipe] feat: migrate `recipe` to the dedicated repo `verl-recipe` as a submodule (#4795)
### What does this PR do?
This PR
1. migrates most recipes from the `recipe` directory to the dedicated
repo [`verl-recipe`](https://github.com/verl-project/verl-recipe),
2. adds `verl-recipe` as a submodule,
3. adds instruction to update the submodule reference in the PR
template,
4. migrates [`transfer_queue`](verl/experimental/transfer_queue),
[`fully_async_policy`](verl/experimental/fully_async_policy),
[`one_step_off_policy`](verl/experimental/one_step_off_policy) and
[`vla`](verl/experimental/vla) to
[`verl/experimental`](verl/experimental) since they are planned to be
merged into the main library,
5. updates related CI and paths accordingly,
6. updates the README news and awesome projects about this migration,
7. squashes everything into a single commit and tries its best to recognize `rename`s to keep the commit history trackable.
See the "conjugate" PR at
https://github.com/verl-project/verl-recipe/pull/7.
### Test
See the CI.
### Todo
- [ ] Ignore the final PR commit in git blame if it shows up too
frequently.
* [model] fix: fix temp dtype (#4813)
### What does this PR do?
- As title. Prevent temperature from being an int.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* [vllm, sglang, rollout] fix: Fix a mistake when running run_qwen3_vl-30b-megatron.sh with latest verl and vllm0.12 (#4810)
* [ckpt] feat: add checkpoint-engine abstraction (#4775)
### What does this PR do?
Add Checkpoint Engine abstraction.
#### Overview
Checkpoint Engine is a unified abstraction layer for synchronizing weights between various training backends and inference backends. It provides three unified APIs:
- send_weights: get named tensors from a generator and send them in a streaming manner.
- receive_weights: return a tensor generator that yields named tensors in a streaming manner.
- get_weights: return a tensor generator that yields named tensors in a streaming manner; used by each inference instance to update weights independently from a local cache (e.g. shared memory, disk).
For more detail, see `verl/checkpoint_engine/README.md`.
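A hedged sketch of the streaming-generator pattern the three APIs share; `engine.send_weights`/`engine.receive_weights` are named in this PR, but the signatures below are assumptions (see `verl/checkpoint_engine/README.md` for the real interface):

```python
from collections.abc import Iterator

import torch

def named_tensors() -> Iterator[tuple[str, torch.Tensor]]:
    # Training side: yield (name, tensor) pairs one at a time so the full
    # state dict never has to be materialized in memory.
    yield "layer.weight", torch.zeros(4, 4)
    yield "layer.bias", torch.zeros(4)

# engine.send_weights(named_tensors())           # training backend pushes weights
# for name, tensor in engine.receive_weights():  # inference backend consumes them
#     model_state[name] = tensor                 # hypothetical consumer
```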
#### verl core
<img width="640" height="167" alt="image"
src="https://github.com/user-attachments/assets/fbd125d7-b461-4c89-9678-b95a2ef89c33"
/>
#### checkpoint engine
<img width="1004" height="409" alt="checkpoint-engine"
src="https://github.com/user-attachments/assets/fc263c1f-17b2-4579-9842-87b24e12abc7"
/>
* [doc, ci] fix: Update Ascend doc and fix e2e_ascend CI (#4816)
### What does this PR do?
As title.
* [trainer] feat: VeOmniEngine supports qwen3_vl ulysses (#4806)
### What does this PR do?
as title.
* [doc] chore: fix checkpoint engine image link (#4821)
### What does this PR do?
As title
* [hardware] fix: automatically set device for SFT case (#4828)
### What does this PR do?
`auto_set_device` does not cover the SFT case.
* [data] feat: TransferQueue - Update TransferQueue version and docs (#4829)
### What does this PR do?
- Update TQ to formal release version.
- Fix the shallow copy bug for chunking `BatchMeta`
https://gitcode.com/Ascend/TransferQueue/pull/2
- Fix race condition for modifying torch num_threads
https://gitcode.com/Ascend/TransferQueue/pull/5
- More robust port binding
https://gitcode.com/Ascend/TransferQueue/pull/3
- Optimize memory usage for zero-copy transfer
https://github.com/TransferQueue/TransferQueue/pull/163
- add check_data_production_status and check_consumption_status and
support polling get metadata
https://github.com/TransferQueue/TransferQueue/pull/157 @NINGBENZHE
- (alpha) Support Mooncake Store backend
https://github.com/TransferQueue/TransferQueue/pull/162 @zhaohaidao
- (alpha) Support Ray RDT backend
https://github.com/TransferQueue/TransferQueue/pull/167
- Update docs.
---------
Signed-off-by: 0oshowero0 <o0shower0o@outlook.com>
* [doc] Update docs about fully_async_policy (#4826)
### What does this PR do?
Update documentation about fully_async_policy and adjust the formatting
of the table.
---------
Co-authored-by: jsfanfanfan <2981866535@qq.com>
* [ckpt] fix: FSDP save ckpt after validation (#4799)
### What does this PR do?
This PR fixes a bug in the `save_checkpoint` function for FSDPEngine.
In the original logic, if the model engine is used
(`use_legacy_worker_impl=disable`), the `wake_up` function in
`verl/workers/engine_workers.py` will be invoked during the rollout
phase of each step, which will offload the model to CPU.
Under normal circumstances, the `compute_log_prob` function called
during the training phase can load the model back to GPU. However, the
training process is not executed during the validation phase, leaving
the model on the CPU. If a checkpoint is saved immediately after
validation, it will trigger the following error: `AssertionError:
Expects tensor to be on the compute device cuda:0, was on cpu.`
<details>
<summary>Details</summary>
Script:
```
set -x
python examples/data_preprocess/geo3k.py --local_dir ~/data/geo3k
python -m verl.trainer.main_ppo \
algorithm.adv_estimator=grpo \
data.train_files=$HOME/data/geo3k/train.parquet \
data.val_files=$HOME/data/geo3k/test.parquet \
data.train_batch_size=512 \
data.max_prompt_length=1024 \
data.max_response_length=2048 \
data.filter_overlong_prompts=True \
data.truncation='error' \
data.image_key=images \
actor_rollout_ref.model.path=Qwen/Qwen2.5-VL-3B-Instruct \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=128 \
actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.01 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.actor.entropy_coeff=0 \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 \
actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
actor_rollout_ref.rollout.enable_chunked_prefill=False \
actor_rollout_ref.rollout.enforce_eager=False \
actor_rollout_ref.rollout.free_cache_engine=False \
actor_rollout_ref.rollout.n=5 \
actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 \
actor_rollout_ref.ref.fsdp_config.param_offload=False \
algorithm.use_kl_in_reward=False \
trainer.use_legacy_worker_impl=disable \
trainer.critic_warmup=0 \
trainer.logger=['console','wandb'] \
trainer.project_name='verl_ci_grpo_example_geo3k' \
trainer.experiment_name='qwen2_5_vl_3b_function_rm' \
trainer.n_gpus_per_node=8 \
trainer.nnodes=1 \
trainer.log_val_generations=20 \
trainer.save_freq=5 \
trainer.test_freq=5 \
trainer.total_epochs=15
```
Error:
```
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) ERROR:2026-01-05 07:35:49,128:Got error when executing task.
Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 1890, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1998, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1897, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1825, in ray._raylet.execute_task.function_executor
  File "python/ray/_raylet.pyx", line 4651, in ray._raylet.CoreWorker.run_async_func_or_coro_in_event_loop
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "python/ray/_raylet.pyx", line 4638, in async_func
  File "/usr/local/lib/python3.12/dist-packages/ray/_private/async_compat.py", line 50, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/ray/_private/function_manager.py", line 691, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/ray/util/tracing/tracing_helper.py", line 463, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/opt/tiger/open_verl/verl/single_controller/ray/base.py", line 841, in func
    return getattr(self.worker_dict[key], name)(*args, **kwargs)
  File "/opt/tiger/open_verl/verl/single_controller/base/decorator.py", line 456, in inner
    return func(*args, **kwargs)
  File "/opt/tiger/open_verl/verl/utils/transferqueue_utils.py", line 314, in dummy_inner
    output = func(*args, **kwargs)
  File "/opt/tiger/open_verl/verl/workers/engine_workers.py", line 541, in save_checkpoint
    self.actor.save_checkpoint(local_path, hdfs_path, global_step, max_ckpt_to_keep)
  File "/opt/tiger/open_verl/verl/single_controller/base/decorator.py", line 456, in inner
    return func(*args, **kwargs)
  File "/opt/tiger/open_verl/verl/utils/transferqueue_utils.py", line 314, in dummy_inner
    output = func(*args, **kwargs)
  File "/opt/tiger/open_verl/verl/workers/engine_workers.py", line 343, in save_checkpoint
    return self.engine.save_checkpoint(local_path, hdfs_path, global_step, max_ckpt_to_keep)
  File "/opt/tiger/open_verl/verl/workers/engine/fsdp/transformer_impl.py", line 607, in save_checkpoint
    self.checkpoint_manager.save_checkpoint(
  File "/opt/tiger/open_verl/verl/utils/checkpoint/fsdp_checkpoint_manager.py", line 238, in save_checkpoint
    model_state_dict = self.model.state_dict()
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 2256, in state_dict
    hook(self, prefix, keep_vars)
  File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 777, in _pre_state_dict_hook
    _pre_state_dict_hook_fn[fsdp_state._state_dict_type](
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 517, in _sharded_pre_state_dict_hook
    _common_unshard_pre_state_dict_hook(
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 161, in _common_unshard_pre_state_dict_hook
    _enter_unshard_params_ctx(
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 125, in _enter_unshard_params_ctx
    fsdp_state._unshard_params_ctx[module].__enter__()
  File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_unshard_param_utils.py", line 199, in _unshard_fsdp_state_params
    _unshard(state, handle, computation_stream, computation_stream)
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 290, in _unshard
    ran_pre_unshard = handle.pre_unshard()
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_flat_param.py", line 1303, in pre_unshard
    self._check_on_compute_device(self.flat_param)
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_flat_param.py", line 2582, in _check_on_compute_device
    _p_assert(
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/utils.py", line 159, in _p_assert
    raise AssertionError(s)
AssertionError: Expects tensor to be on the compute device cuda:0, was on cpu
```
</details>
To fix this bug, this PR checks whether the model is located on the CPU
before saving the checkpoint and loads it onto the GPU if that is the
case. The same bug also exists in Megatron, which requires further
fixes.
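A minimal sketch of the fix described above, assuming a hypothetical `self.module` handle on the engine (not the PR's exact code):

```python
import torch

def save_checkpoint(self, local_path, hdfs_path, global_step, max_ckpt_to_keep):
    # Validation can leave the FSDP-wrapped model offloaded to CPU; move it
    # back to the compute device before state_dict() tries to unshard.
    if next(self.module.parameters()).device.type == "cpu":
        self.module.to(torch.cuda.current_device())
    self.checkpoint_manager.save_checkpoint(
        local_path=local_path,
        hdfs_path=hdfs_path,
        global_step=global_step,
        max_ckpt_to_keep=max_ckpt_to_keep,
    )
```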
---------
Co-authored-by: weidongliang.339 <weidongliang.339@bytedance.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* [perf] feat: Add MFU for Qwen3-VL dense (#4753)
### What does this PR do?
Add the `_estimate_qwen3_vit_flop` and `_estimate_qwen3_vl_flops` functions to calculate the FLOPs of Qwen3-VL dense models, and update the test cases to verify the calculation accuracy for Qwen3-VL models. A baseline rule of thumb is sketched below.
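For orientation, the usual dense-transformer baseline such estimators start from; the PR's `_estimate_qwen3_vl_flops` additionally accounts for the ViT and attention terms, so treat this only as the rule of thumb:

```python
def dense_flops(num_params: float, num_tokens: float) -> float:
    # ~6 FLOPs per parameter per token for forward + backward
    # (2 forward, 4 backward) of a dense transformer.
    return 6.0 * num_params * num_tokens

# Example: a 30B-parameter model on a 4096-token batch
print(dense_flops(30e9, 4096))  # ~7.4e14 FLOPs
```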
### Test
The following is the output result of running the test file.
<img width="1271" height="152" alt="image"
src="https://github.com/user-attachments/assets/2a3d426c-bd32-4369-9c07-c8a17c60e98b"
/>
* [tool] fix: avoid nested ToolResponse in SandboxFusionTool (#4833)
### What does this PR do?
Fix an incorrect double-wrapping of `ToolResponse` in
`SandboxFusionTool.execute()`.
- `execute_code()` already returns a `ToolResponse`, but `execute()`
previously wrapped it again as `ToolResponse(text=result)`.
- Since `ToolResponse.text` expects `str | None`, the old behavior could
produce an invalid/nested response (or confusing stringified output).
- This PR makes `execute()` return the `ToolResponse` directly when appropriate, and only wraps once when the worker returns a non-`ToolResponse` result (see the sketch below).
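A hedged sketch of the single-wrap rule (the execution-pool call is a stand-in, not the file's exact code):

```python
result = await self.execution_pool.execute.remote(code)  # hypothetical call
if isinstance(result, ToolResponse):
    tool_response = result  # execute_code() already wrapped it: pass through
else:
    # wrap exactly once; ToolResponse.text expects str | None
    tool_response = ToolResponse(text=str(result) if result is not None else None)
```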
### Test
- pre-commit:
- `pre-commit install`
- `pre-commit run --all-files --show-diff-on-failure --color=always`
- Result: **Passed**
(ruff/format/mypy/autogen-trainer-cfg/docstring/license/compileall)
### API and Usage Example
No API changes. `SandboxFusionTool.execute()` still returns
`tuple[ToolResponse, float, dict]`.
### Design & Code Changes
- `verl/tools/sandbox_fusion_tools.py`
- If the execution worker returns a `ToolResponse`, return it directly.
- Otherwise, convert the result to `str` (or `None`) and wrap once as
`ToolResponse(text=...)`.
Co-authored-by: winston <email@example.com>
* [vllm] fix: fix error in vllm patch for diff vllm version and add ci for moe with fp8 rollout (#4824)
### What does this PR do?
fix error in vllm patch for diff vllm version and add ci for moe with
fp8 rollout
---------
Co-authored-by: Xue Huang <xueh@nvidia.com>
* [algo] feat: add optimal token baseline and variance proxy (#4678)
# Optimal Token Baseline
## Main feature
- Register `AdvantageEstimator.OPTIMAL_TOKEN_BASELINE`.
- Extend the DP actor to emit `sum_pi_squared`, expose `calculate_sum_pi_squared` and checkpointing toggles across configs, and add a reusable `calculate_sum_pi_squared_from_logits` function (sketched after this list).
- Introduce `compute_variance_proxy_metrics` to surface signal/total
power/noise diagnostics during training.
- Document the method in `docs/algo/otb.md` and ship an executable
example at `examples/otb_trainer/run_qwen2_5-7b.sh`.
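A hedged sketch of what `calculate_sum_pi_squared_from_logits` plausibly computes, based only on its name and the variance-proxy use above; the PR's actual implementation may differ:

```python
import torch

def calculate_sum_pi_squared_from_logits(logits: torch.Tensor) -> torch.Tensor:
    """Per-position sum over the vocabulary of squared token probabilities."""
    log_probs = torch.log_softmax(logits, dim=-1)
    # sum_v p_v^2 = sum_v exp(2 * log p_v), computed from log-probs for stability
    return torch.exp(2.0 * log_probs).sum(dim=-1)
```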
## Usage
- Enable OTB by overriding config keys (OmegaConf overlay):
```yaml
algorithm.adv_estimator: optimal_token_baseline
actor_rollout_ref:
actor:
calculate_sum_pi_squared: true
sum_pi_squared_checkpointing: false # optional for long contexts
rollout:
n: 8
```
- Run the example script (adjust dataset paths & WandB project as
needed):
```bash
bash examples/otb_trainer/run_qwen2_5-7b.sh
```
- Monitor the new variance proxies in trainer logs:
`variance_proxy/proxy1_signal_strength`, `proxy2_total_power`,
`proxy3_pure_noise`.
## Key notes
- `actor.calculate_sum_pi_squared` requires
`actor_rollout_ref.model.use_fused_kernels=False`; fused kernels must
surface logits before OTB can run there.
- Group sampling is mandatory (`rollout.n > 1`); with single-rollout
batches OTB collapses to vanilla returns.
---
UPDATE(@tongyx361 ): `compute_sum_pi_squared` is changed to
`calculate_sum_pi_squared` for consistency with `calculate_entropy`.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Shawn/Yuxuan Tong <tongyuxuan361@gmail.com>
* [megatron] fix: Fix error in megatron workers (#4832)
### What does this PR do?
There is a bug in `megatron_workers.py`: line 745 is redundant and introduces a bug, because it overwrites the `estimated_flops` and `promised_flops` calculated on lines 742-744.
Also, the condition `"vl" in func.__name__` is brittle, as it relies on a naming convention. This could lead to silent miscalculations of MFU if a new vision-language model's estimation function is named differently. A more robust approach is to attempt calling the function with the extra arguments and handle the `TypeError` if it doesn't support them (sketched below).
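A hedged sketch of the try/except dispatch proposed above (`estimate_fn` and the argument names are placeholders, not the file's real identifiers):

```python
def dispatch_flops(estimate_fn, tokens_sum, delta_time, **vision_kwargs):
    """Call a FLOPs estimator, passing vision kwargs only if it accepts them."""
    try:
        return estimate_fn(tokens_sum, delta_time, **vision_kwargs)
    except TypeError:
        # the estimator's signature has no vision-specific parameters
        return estimate_fn(tokens_sum, delta_time)
```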
* [misc] feat: delete unnecessary base class in agent loop worker and vLLMHttpServer (#4838)
* [misc] feat: consolidate tensordict before dispatch (#4830)
* [training_utils] fix: json encode error in filelogger (#4811)
### What does this PR do?
- fix: JSON encode error in the file logger (error message: "TypeError: Object of type int32 is not JSON serializable"); illustrated below
- ensure values are not Tensor objects when logging metrics
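A minimal illustration of the failure and the usual remedy (plain Python, not the PR's exact code):

```python
import json

import numpy as np

metrics = {"global_step": np.int32(3)}
# json.dumps(metrics)  # TypeError: Object of type int32 is not JSON serializable

# Convert NumPy/tensor scalars to native Python types before serializing:
safe = {k: v.item() if hasattr(v, "item") else v for k, v in metrics.items()}
json.dumps(safe)  # '{"global_step": 3}'
```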
Signed-off-by: zhuangqh <zhuangqhc@gmail.com>
* [ckpt] chore: skip saving hf_checkpoint during megatron+lora training & add a separate lora merge script (#4839)
### What does this PR do?
When using LoRA, `MegatronCheckpointManager.save_checkpoint` saves not only the adapter but also the huggingface checkpoint, which is unnecessary. This PR skips saving the huggingface checkpoint and provides a separate script for merging the adapter.
Related to #4063
### API and Usage Example
> Demonstrate how the API changes if any, and provide usage example(s)
if possible.
```bash
python ./scripts/megatron_merge_lora.py \
--config-name='ppo_megatron_trainer' \
actor_rollout_ref.model.lora.adapter_path=$ADAPTER_PATH \
... # same config as your training script
```
### Design & Code Changes
> Demonstrate the high-level design if this PR is complex, and list the
specific changes.
### Checklist Before Submitting
> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.
- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.
* [rollout, vllm] fix: accuracy issue in verl serve mode + vllm-ascend + dp + ep + tp scenarios (#4783)
### What does this PR do?
Fix the accuracy issue in verl + vllm-ascend dp+ep+tp+server scenarios.
Issue: https://github.com/vllm-project/vllm-ascend/issues/5544
### Checklist Before Starting
- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
- `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
- Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`
### Test
Tested GRPO on local NPU host:
<img width="1047" height="117"
alt="58274edd-d0d3-454c-8e39-3188f6f19e71"
src="https://github.com/user-attachments/assets/dee7bf2f-6faf-4f44-a8b3-64670d5b1e10"
/>
### Design & Code Changes
Root cause analysis: currently, the version of Verl + Ascend NPU +
vllm-ascend is
[v0.11.0](https://verl.readthedocs.io/en/latest/ascend_tutorial/ascend_quick_start.html).
In the vllm-ascend v0.11.0 code, the all2all backend
(flashinfer_all2allv) is selected and written to the vllm worker
environment. However, verl's ExternalZeroMQDistributedExecutor does not
pass this environment to the vllm worker processes the way vllm's
[RayDistributedExecutor](https://github.com/vllm-project/vllm/blob/0d4044edd85de30d7d4558aeea4d1e95c7c556d6/vllm/v1/executor/ray_executor.py#L337)
backend does. As a result, the vllm-ascend workers end up with the
wrong all2all backend, which causes the precision issue.
Implementation:
1. In vLLMAsyncRollout, when initializing the vllm workers in an NPU
scenario, add the environment variables required by vllm-ascend.
2. Add a vllm engine environment-variable setting in rollout.yaml so
users can configure it.
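A sketch of the propagation idea; the helper names and the exact set of
forwarded variables are assumptions for illustration (vLLM reads
`VLLM_ALL2ALL_BACKEND`, but the actual keys follow the rollout.yaml
setting described above):
```python
import os

ROLLOUT_ENV_KEYS = ["VLLM_ALL2ALL_BACKEND"]  # assumed key, for illustration

def collect_rollout_env() -> dict:
    """Capture driver-side env vars that every vllm worker must share."""
    return {k: os.environ[k] for k in ROLLOUT_ENV_KEYS if k in os.environ}

def apply_rollout_env(env: dict) -> None:
    """Call inside each worker process before the vLLM engine initializes."""
    os.environ.update(env)
```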
### Checklist Before Submitting
> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.
- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
Co-authored-by: FightingZhen
---------
Signed-off-by: leo-pony <nengjunma@outlook.com>
* [fsdp] feat: add validate process on trainer node when use_trainer_do_validate=True (#4683)
### What does this PR do?
> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference related GitHub issues and PRs that help with the
review.
Use the trainer node to run validation when running in fully-async
mode. This saves validation compute time and reduces the
perf/time_of_step peak.
- add a new use_trainer_do_validate option to the fully_async
async_training config to decide whether the trainer node runs
validation
- use_trainer_do_validate: default is false
- it can significantly improve validation performance; for example, in
`dapo_7b_math_fsdp2_8_8.sh` it yields about a 1x speedup
<img width="1440" height="608" alt="image"
src="https://github.com/user-attachments/assets/436e481e-4f51-4e8e-ad08-b038b3f0e89d"
/>
<img width="1030" height="762" alt="image"
src="https://github.com/user-attachments/assets/ed8e3237-d37d-4eff-b944-fb81ea63f87c"
/>
- optimized `process_validation_metrics()` in the `_validate()` step:
with an input dataset of length 1444, latency drops from 150+s to 40+s
<img width="2630" height="448" alt="image"
src="https://github.com/user-attachments/assets/b6fb50bc-5856-49c1-91dc-f845e9c410b4"
/>
<img width="2504" height="518" alt="image"
src="https://github.com/user-attachments/assets/b3b5f238-0c5e-4c63-9683-83f34d5a46fd"
/>
### Checklist Before Starting
- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
- `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
- Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`
### Test
> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.
- in test scripts such as `dapo_7b_math_fsdp2_8_8.sh`, add
`async_training.use_trainer_do_validate=True` to the command
- results for this feature on the Qwen2.5-Math-7B model:
- the baseline script is `dapo_7b_math_fsdp2_8_8.sh`
- the optimized script is `dapo_7b_math_fsdp2_8_8.sh`
+ `async_training.use_trainer_do_validate=True`
- the accuracy and performance are shown below:
<img width="1650" height="702" alt="image"
src="https://github.com/user-attachments/assets/3419d7bb-a64c-4fe9-b776-3312925f51ab"
/>
<img width="1580" height="522" alt="image"
src="https://github.com/user-attachments/assets/2c3a7e24-7421-4f12-8527-7b997f9c3b89"
/>
- green: optimized case (`async_training.use_trainer_do_validate=True` )
- gray: baseline case (`async_training.use_trainer_do_validate=False` )
### API and Usage Example
> Demonstrate how the API changes if any, and provide usage example(s)
if possible.
```python
async_training.use_trainer_do_validate=True \
```
### Design & Code Changes
> Demonstrate the high-level design if this PR is complex, and list the
specific changes.
### Checklist Before Submitting
> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.
- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
---------
Co-authored-by: Shangwei-Li <lishangwei@mail.ustc.edu.cn>
* [misc] fix: recipe submodule accidentally been removed (#4843)
### What does this PR do?
As title.
* [worker, training_utils] fix: Engine Metric Aggregation (#4778)
### What does this PR do?
Because some metrics are scaled by global_bsz/global_tokens in
`workers.utils.losses.ppo_loss`, the mean in `reduce_metrics` adds an
extra scaling of the metric by the number of gradient accumulation
steps (see examples in the Test section):
https://github.com/volcengine/verl/blob/c191c5eb5c9499dca6666a52bc5f360bfd4bbf4f/verl/utils/metric/utils.py#L53
Aggregation of the `loss` metric handles this by taking sum:
https://github.com/volcengine/verl/blob/c191c5eb5c9499dca6666a52bc5f360bfd4bbf4f/verl/workers/engine_workers.py#L143-L144
Depending on how metrics are handled in `workers.utils.losses.ppo_loss`,
it may not be correct to aggregate all of them using sum (as in #4785).
For example, `actor/pg_loss` and `actor/kl_loss` are scaled by global
batch sizes/token counts and should be aggregated using sum, while the
`pg_metrics` `pg_clipfrac`, `ppo_kl`, and `pg_clipfrac_lower` are scaled
by local token counts and should be aggregated using mean.
This PR introduces a metric management class to allow flexibility in
deciding the aggregation type on a per-metric basis.
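A simplified sketch of the idea; the class and enum names follow the PR
but the implementation here is illustrative, not verl's actual code:
```python
from enum import Enum

class AggregationType(Enum):
    MEAN = "mean"
    SUM = "sum"

class MetricValue:
    def __init__(self, value: float, aggregation: AggregationType):
        self.value = value
        self.aggregation = aggregation

def reduce_metric(values: list) -> float:
    assert values, "cannot reduce an empty metric list"
    total = sum(v.value for v in values)
    if values[0].aggregation == AggregationType.SUM:
        return total  # already scaled by global_bsz/global_tokens upstream
    return total / len(values)  # locally scaled metrics average over micro-batches

# pg_loss-style metrics (globally scaled) sum over gradient-accumulation steps;
# ppo_kl-style metrics (locally scaled) average over them.
pg_loss = [MetricValue(0.5, AggregationType.SUM), MetricValue(0.25, AggregationType.SUM)]
ppo_kl = [MetricValue(0.1, AggregationType.MEAN), MetricValue(0.3, AggregationType.MEAN)]
print(reduce_metric(pg_loss), reduce_metric(ppo_kl))  # 0.75 0.2
```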
### Test
This test demonstrates the scaling of metrics with the number of
gradient accumulation steps, as well as how this is resolved on this
branch. The command for running is below.
<img width="980" height="638" alt="image"
src="https://github.com/user-attachments/assets/e65ab291-3125-4df4-a0e0-3473bf64cb2a"
/>
```bash
gsm8k_train_path=$DATA_DIR/gsm8k/train.parquet
gsm8k_test_path=$DATA_DIR/gsm8k/test.parquet
train_files="['$gsm8k_train_path']"
test_files="['$gsm8k_test_path']"
ppo_micro_batch_size_per_gpu=2  # swept over 2 and 8; the later assignment takes effect
ppo_micro_batch_size_per_gpu=8
branch=main  # compared against the fix branch below
branch=fixEngineMetrics
python3 -m verl.trainer.main_ppo \
algorithm.adv_estimator=grpo \
data.dataloader_num_workers=0 \
data.return_full_prompt=True \
data.train_files="$train_files" \
data.val_files="$test_files" \
data.train_batch_size=8 \
data.max_prompt_length=512 \
data.max_response_length=1024 \
data.filter_overlong_prompts=True \
data.truncation='error' \
actor_rollout_ref.model.path=Qwen/Qwen3-0.6B \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=8 \
actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=$ppo_micro_batch_size_per_gpu \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.fsdp_config.param_offload=True \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=True \
actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=8 \
actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=8 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
actor_rollout_ref.rollout.n=5 \
trainer.logger='["console","wandb"]' \
trainer.project_name='fixEngineMetrics' \
trainer.experiment_name="$branch/ppo_micro_batch_size_per_gpu$ppo_micro_batch_size_per_gpu" \
trainer.n_gpus_per_node=2 \
trainer.nnodes=1 \
trainer.save_freq=400 \
trainer.test_freq=40 \
trainer.use_legacy_worker_impl=disable \
trainer.total_epochs=2 \
trainer.total_training_steps=10 \
trainer.resume_mode=disable \
actor_rollout_ref.actor.use_torch_compile=False \
actor_rollout_ref.actor.fsdp_config.use_torch_compile=False \
trainer.val_before_train=False \
actor_rollout_ref.rollout.enforce_eager=True \
actor_rollout_ref.ref.fsdp_config.use_torch_compile=False
```
### Design & Code Changes
Adds a metric management class (a `MetricList` of `MetricValue`
entries) which tracks metric values and their aggregation type.
* [rollout] fix: configurable agent loop + multimodal data for fully-async (#4842)
## Description
* **`verl/experimental/fully_async_policy/agent_loop/agent_loop.py`**
* Use `config.agent.default_agent_loop` as the default `agent_name` when
`agent_name` is not present in `batch.non_tensor_batch`.
* Pass `dataset_cls=self.dataset_cls` and
`dataset_config=self.config.data` into `hydra.utils.instantiate(...)`
when creating an agent loop instance.
* **`verl/experimental/fully_async_policy/agent_loop/partial_tool_agent_loop.py`**
* Extract `video_data` from `multi_modal_data` and include `video_data`
in the created `AgentData` instance (in addition to existing
`image_data`).
* **`verl/experimental/fully_async_policy/detach_utils.py`**
* Stop popping original batch fields in
`prepare_single_generation_data`.
* Set `agent_name` to `async_partial_tool_agent` or
`partial_single_turn_agent` depending on
`config.actor_rollout_ref.rollout.multi_turn.enable`.
## Testing
* Verified the fully async training entry runs successfully on a 4-GPU
server setup (multi-turn enabled, partial rollout enabled, vLLM async
mode).
## Related
* Fixes and extends the scope of:
[4834](https://github.com/volcengine/verl/issues/4834)
* [ci] test: switch the vlm rl test case in the npu environment to use the model engine (#4844)
* [ckpt] fix: Megatron save ckpt after validation (#4841)
### What does this PR do?
This PR fixes a bug in the `save_checkpoint` function for
MegatronEngine. https://github.com/volcengine/verl/pull/4799 is a
similar PR, which modifies FSDPEngine.
In the original logic, if the model engine is used
(`use_legacy_worker_impl=disable`), the `wake_up` function in
`verl/workers/engine_workers.py` will be invoked during the rollout
phase of each step, which will offload the model to CPU.
Under normal circumstances, the `compute_log_prob` function called
during the training phase can load the model back to GPU. However, the
training process is not executed during the validation phase, leaving
the model on the CPU. If a checkpoint is saved immediately after
validation, it will trigger the following error: `AssertionError:
Expects tensor to be on the compute device cuda:0, was on cpu.`
To fix this bug, this PR checks whether the model is located on the CPU
before saving the checkpoint and loads it onto the GPU if that is the
case.
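A sketch of the device guard with assumed helper names, not the exact
verl code:
```python
import torch

def ensure_on_device(model: torch.nn.Module, device: str = "cuda") -> None:
    """Move parameters back to the compute device if a prior wake_up offloaded them."""
    if any(p.device.type == "cpu" for p in model.parameters()):
        model.to(device)

def save_checkpoint(model: torch.nn.Module, path: str) -> None:
    # Guards against: "Expects tensor to be on the compute device cuda:0, was on cpu"
    ensure_on_device(model)
    torch.save(model.state_dict(), path)
```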
---------
Co-authored-by: weidongliang.339 <weidongliang.339@bytedance.com>
* [megatron] feat: Share actor and ref in LoRA (#4673)
For `compute_ref_log_prob`, we can share the actor model by temporarily
disabling the LoRA layers for the forward pass, since the base weights
are frozen and only the LoRA layers are trained.
This has already been supported in FSDP LoRA.
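A sketch of the technique using the peft API (an assumption about how
the sharing can be expressed; the Megatron path in this PR implements
its own equivalent):
```python
import torch
from peft import PeftModel

def compute_ref_log_prob(policy: PeftModel, input_ids: torch.Tensor) -> torch.Tensor:
    # With the adapter disabled, the forward pass uses only the frozen base
    # weights, i.e. the reference policy -- no separate ref model is needed.
    with torch.no_grad(), policy.disable_adapter():
        logits = policy(input_ids=input_ids).logits
    return torch.log_softmax(logits, dim=-1)
```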
### What does this PR do?
> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference …
Hi @JacobHelwig, thanks for catching the metrics issue! I think the fix needs one more adjustment. While the summation over micro-batches is correct, the reduction across data-parallel ranks should use mean (not sum) for the sum-aggregated metrics.
This ensures the final metric correctly represents the global per-token average without being inflated by the number of DP ranks.
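A sketch of the suggested reduction, assuming an initialized default
process group; this is an illustration, not the verl implementation:
```python
import torch
import torch.distributed as dist

def reduce_metric_across_dp(value: float) -> float:
    """Average (not sum) a sum-aggregated metric across data-parallel ranks."""
    t = torch.tensor(value, dtype=torch.float32)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return (t / dist.get_world_size()).item()
```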
### What does this PR do?
Fixes the issue (#4778 (comment)) where metric aggregation across DP ranks caused sum-aggregated metrics to scale in magnitude with the number of DP ranks.
Thanks to @pengwu22 for pointing this out.