
Conversation

@carryyu (Collaborator) commented Aug 20, 2025

Supports a DP+TP+EP hybrid parallel deployment strategy.

  1. expert_parallel_size = data_parallel_size * tensor_parallel_size when enable-expert-parallel is set (see the first sketch after this list).
  2. Deprecated event_loop_ep.
  3. save_output runs only when tensor_parallel_rank == 0.
  4. Most distributed operations are restricted to the tp_group.
  5. Weight-splitting logic: when EP is enabled, only the attention weights are split across TP ranks.
  6. If a bias exists in RowParallelLinear, it is divided by tensor_parallel_size so the post-matmul all-reduce does not accumulate it repeatedly (see the second sketch after this list).
  7. To keep EP communication balanced when TP and EP are both enabled, the MoE layer splits its input into tensor_parallel_size parts and performs an AllGather afterward (see the third sketch after this list).
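
A minimal sketch of the rank layout implied by points 1 and 4, assuming a DP-major / TP-minor ordering of ranks. The ordering is an assumption for illustration; the actual layout is defined by the PR itself.

```python
# Sketch of the size relationship and tp_group restriction.
data_parallel_size = 2
tensor_parallel_size = 4

# With enable-expert-parallel, every rank holds experts, so the
# expert-parallel world spans all DP x TP ranks (point 1).
expert_parallel_size = data_parallel_size * tensor_parallel_size  # 8

for rank in range(expert_parallel_size):
    dp_rank = rank // tensor_parallel_size
    tp_rank = rank % tensor_parallel_size
    # Collectives such as all-reduce stay inside the ranks sharing the
    # same dp_rank -- the tp_group of point 4.
    tp_group = [dp_rank * tensor_parallel_size + r
                for r in range(tensor_parallel_size)]
    print(f"rank={rank} dp={dp_rank} tp={tp_rank} tp_group={tp_group}")
```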
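
Point 6 can be demonstrated with plain numpy. This sketch assumes the standard row-parallel decomposition (the weight split along its rows, the input along its last axis) and simulates the all-reduce with a Python sum; all names and shapes are illustrative, not taken from the PR.

```python
import numpy as np

tp_size = 4
x = np.random.rand(2, 8)   # input; 8 = tp_size * 2 columns
w = np.random.rand(8, 3)   # full weight, split row-wise across ranks
b = np.random.rand(3)      # bias

ref = x @ w + b            # single-device reference result

partials = []
for i in range(tp_size):
    x_i = x[:, i * 2:(i + 1) * 2]      # this rank's input slice
    w_i = w[i * 2:(i + 1) * 2, :]      # this rank's weight rows
    # Each rank adds b / tp_size; adding the full b on every rank
    # would accumulate the bias tp_size times after the all-reduce.
    partials.append(x_i @ w_i + b / tp_size)

# sum() stands in for the all-reduce across the tp_group.
assert np.allclose(sum(partials), ref)
```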
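
Point 7, likewise as a shape-level sketch. The EP dispatch/combine step is replaced by an identity stand-in, and the AllGather across the tp_group is simulated by a concatenation; this shows only the data flow, not the real kernels.

```python
import numpy as np

tp_size = 4
tokens = np.random.rand(8, 16)              # [num_tokens, hidden]

# Each TP rank sends only its 1/tp_size slice of the tokens into the
# expert-parallel dispatch, which balances EP traffic across TP ranks.
chunks = np.split(tokens, tp_size, axis=0)

def moe_layer(chunk):
    return chunk  # placeholder for dispatch -> experts -> combine

outputs = [moe_layer(c) for c in chunks]

# AllGather over the tp_group restores the full token batch.
gathered = np.concatenate(outputs, axis=0)
assert gathered.shape == tokens.shape
```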

TODO

  1. Hardware backends other than GPUs still need to be adapted.
  2. Distributed operations need to support 0-dimensional tensors.
  3. AllGather hangs when tensor shapes differ across ranks (a common padding workaround is sketched after this list).
  4. custom_all_reduce needs to support a specified tp_group.
  5. Functional and performance verification in various scenarios, such as MTP, ChunkPrefill, and PromptCache.
  6. Some details may not be fully covered in this PR.
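
TODO item 3 reflects a general constraint: AllGather expects every rank to contribute a tensor of the same shape. A common workaround, not part of this PR, is to exchange lengths first and pad each contribution to the maximum; a simulated sketch with numpy lists standing in for ranks:

```python
import numpy as np

per_rank = [np.ones(3), np.ones(5), np.ones(2)]    # unequal shapes
lengths = [t.shape[0] for t in per_rank]           # exchanged via a
max_len = max(lengths)                             # shape AllGather

padded = [np.pad(t, (0, max_len - t.shape[0])) for t in per_rank]
gathered = np.stack(padded)                        # now shapes match

# After the gather, the padding is stripped using the true lengths.
restored = [row[:n] for row, n in zip(gathered, lengths)]
```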

paddle-bot commented Aug 20, 2025

Thanks for your contribution!

@codecov-commenter commented Aug 25, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (develop@c43a4be).

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #3489   +/-   ##
==========================================
  Coverage           ?   31.44%           
==========================================
  Files              ?       13           
  Lines              ?      159           
  Branches           ?       29           
==========================================
  Hits               ?       50           
  Misses             ?       96           
  Partials           ?       13           
Flag   Coverage Δ
diff   31.44% <ø> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@carryyu force-pushed the hybrid-parallel-inf branch from 3cd82a5 to 1733621 on August 25, 2025 at 12:28
@carryyu force-pushed the hybrid-parallel-inf branch from 4244cc3 to e3f0be7 on August 25, 2025 at 14:41
@yuanlehome merged commit d339df2 into PaddlePaddle:develop on Aug 26, 2025
35 of 39 checks passed