torch compiled flex attention #36487
Closed
fix flex attention implementation
I encountered terrible performance when training a Llama 8B transformer with flex_attention: 15% MFU on a workload where I usually get 45%. Switching to flash_attention_2 fixed the issue.
It seems that the flex_attention integration is incomplete: it does not compile PyTorch's flex_attention function, which is necessary to reach FA2/FA3-level performance. Even when applying torch.compile to the model itself I saw low MFU (15%), which suggests that the compiled graph breaks before reaching the flex_attention call.
This PR fixes it by wrapping flex_attention in torch.compile. MFU increases to 42%.
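For context, here is a minimal sketch of the idea (assuming a recent PyTorch with FlexAttention support; this is not the exact transformers patch, and the wrapper name and signature are illustrative):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Compile once at import time and reuse the compiled function; calling the
# eager flex_attention directly falls back to a slow reference path instead
# of the fused kernels that give FA2/FA3-level speed.
compiled_flex_attention = torch.compile(flex_attention)

def flex_attention_forward(query, key, value, block_mask=None):
    # query, key, value: (batch, num_heads, seq_len, head_dim)
    return compiled_flex_attention(query, key, value, block_mask=block_mask)
```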
The FlexAttention blog post explains that the speedup comes mainly from the use of torch.compile, as do other references.
Even so, the transformers Llama implementation still underperforms flex_attention in the torchtitan codebase, which reaches 45% MFU, a level I cannot match with the transformers Llama.
IMO, it would be nice to reach good performance with the transformers Llama, since it is the default implementation in a lot of codebases. I suspect there are many graph breaks under torch.compile, probably caused by parameters that are unused during LLM training (attention_mask, etc.); see the sketch below for one way to check.
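One possible way to locate those breaks (a sketch only; `TinyModel` and its inputs are placeholders standing in for the transformers Llama setup, not part of this PR):

```python
import torch
import torch.nn as nn

# Hypothetical diagnostic: report where dynamo breaks the graph on a forward
# pass. Swap TinyModel for the real model and pass its actual inputs
# (input_ids, attention_mask, ...) to see which arguments cause breaks.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(128, 64)
        self.proj = nn.Linear(64, 128)

    def forward(self, input_ids, attention_mask=None):
        return self.proj(self.embed(input_ids))

model = TinyModel()
input_ids = torch.randint(0, 128, (2, 16))

explanation = torch._dynamo.explain(model)(input_ids)
print(f"graph breaks: {explanation.graph_break_count}")
for reason in explanation.break_reasons:
    print(reason)
```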
PS: the code I used for the benchmark is in this PR: PrimeIntellect-ai/prime#224