
[append attention] clean code#7062

Merged
zhoutianzi666 merged 5 commits into PaddlePaddle:develop from zhoutianzi666:remove_usele
Mar 30, 2026

Conversation

@zhoutianzi666
Collaborator

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests, or explain in this PR why none are needed.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


“liuruian” does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Already signed the CLA but the status is still pending? Let us recheck it.

@paddle-bot

paddle-bot bot commented Mar 28, 2026

Thanks for your contribution!

@codecov-commenter

codecov-commenter commented Mar 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (develop@842c608). Learn more about missing BASE report.

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7062   +/-   ##
==========================================
  Coverage           ?   73.19%           
==========================================
  Files              ?      401           
  Lines              ?    56573           
  Branches           ?     8942           
==========================================
  Hits               ?    41410           
  Misses             ?    12219           
  Partials           ?     2944           
Flag Coverage Δ
GPU 73.19% <ø> (?)

Flags with carried forward coverage won't be shown.


@zhoutianzi666 zhoutianzi666 changed the title cpmmot → clean code Mar 29, 2026
@zhoutianzi666 zhoutianzi666 changed the title clean code → [APPEND ATTENTION] clean code Mar 29, 2026
@zhoutianzi666 zhoutianzi666 changed the title [APPEND ATTENTION] clean code → [append attention] clean code Mar 29, 2026
chang-wenbin
chang-wenbin previously approved these changes Mar 30, 2026
smem_t k_smem(smem + num_frags_x * 16 * HEAD_DIM * sizeof(T)),
    v_smem(smem + (num_frags_x + NUM_WARP_KV * num_frags_z) * 16 * HEAD_DIM * sizeof(T));
static_assert(num_rows_per_block == num_frags_x * 16);
Collaborator


num_rows_per_block should equal NUM_WARP_Q * num_frags_x * 16 (the M dimension of one tensor-core MMA). It is shortened here because NUM_WARP_Q was originally 1; the assert could be extended to include that.

Collaborator Author


The NUM_WARP_Q == 1 assert has been added at the beginning of the function.

tid % 16, tid / 16); // 16 * 16

const uint32_t q_end =
min(q_len, div_up((tile_id + 1) * num_rows_per_block, GROUP_SIZE));
Collaborator


Could you test with some boundary cases here to confirm that the offset really never exceeds div_up((tile_id + 1) * num_rows_per_block, GROUP_SIZE)?

Collaborator Author


> Could you test with some boundary cases here to confirm that the offset really never exceeds div_up((tile_id + 1) * num_rows_per_block, GROUP_SIZE)?

Because each CTA reads at most num_rows_per_block rows of Q head_dim, we only need to check that the offset does not exceed q_len.

@zhoutianzi666 zhoutianzi666 merged commit 76cf5e9 into PaddlePaddle:develop Mar 30, 2026
34 of 38 checks passed

5 participants