upsize EDM fabric channel buffer slots to be able to fit 4 bfp8 tiles per packet #18000

Merged

merged 2 commits into main from snijjar/edm-fabric-buffer-upsize on Feb 19, 2025

Conversation

@SeanNijjar (Contributor) commented Feb 19, 2025

Ticket

#17423

Problem description

The current default EDM buffer slot size is 4096 bytes, which can only store 3 bfp8 tiles. There is enough space in the erisc L1 unreserved region for every channel to keep a power-of-2 buffer slot count while also using a slot size of 4 bfp8 tiles. There is insufficient space for 5 bfp8 tiles per slot.
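
For reference, a minimal sketch of the sizing arithmetic described above (illustration only, not code from this PR). The bfp8_b tile size of 1088 bytes (1024 B of data plus 64 B of shared exponents) and the omission of per-packet header overhead are assumptions for this example:

```cpp
// Sketch of the slot-sizing arithmetic: how many bfp8 tiles fit per EDM buffer slot.
#include <cstddef>
#include <cstdio>

int main() {
    constexpr std::size_t bfp8_tile_bytes = 1088;                 // assumed bfp8_b tile size
    constexpr std::size_t old_slot_bytes  = 4096;                 // previous default slot size
    constexpr std::size_t new_slot_bytes  = 4 * bfp8_tile_bytes;  // 4352 B: exactly 4 tiles

    std::printf("old slot fits %zu bfp8 tiles\n", old_slot_bytes / bfp8_tile_bytes); // prints 3
    std::printf("new slot fits %zu bfp8 tiles\n", new_slot_bytes / bfp8_tile_bytes); // prints 4
}
```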

What's changed

This PR bumps the buffer slot size up so that 4 bfp8 tiles fit per packet, which benefits workloads that send bfp8 tiles over fabric.

Checklist

@SeanNijjar SeanNijjar marked this pull request as ready for review February 19, 2025 15:10
@SeanNijjar SeanNijjar merged commit 22fd7c5 into main Feb 19, 2025
12 checks passed
@SeanNijjar SeanNijjar deleted the snijjar/edm-fabric-buffer-upsize branch February 19, 2025 21:40
hschoi4448 pushed a commit that referenced this pull request Feb 20, 2025
… per packet (#18000)
