Conversation


@samsja samsja commented Mar 1, 2025

fix flex attention implementation

I encountered terrible performance when training a Llama 8B transformer with flex_attention: I got 15% MFU on a workload where I usually reach 45%. Switching to flash_attention_2 fixed the issue.

from transformers import LlamaConfig, LlamaForCausalLM

config_model = LlamaConfig.from_pretrained("meta-llama/Meta-Llama-3-8B", attn_implementation="flex_attention")
model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path="meta-llama/Meta-Llama-3-8B", config=config_model)

It seems that the flex_attention implementation is broken: it does not compile the flex_attention function from PyTorch, which is necessary to reach FA2/FA3-level performance. Even when using torch.compile on the model itself I saw low MFU (15%), which suggests that torch.compile breaks before reaching the flex_attention call.

This PR fixes it by wrapping the flex attention call in torch.compile. MFU increases to 42%.
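As a rough sketch of the idea (assuming PyTorch >= 2.5, where flex_attention is exposed under torch.nn.attention.flex_attention; this is an illustration, not the exact patch in this PR):

import torch
from torch.nn.attention.flex_attention import flex_attention

# Calling flex_attention eagerly falls back to a slow, decomposed implementation.
# Compiling it once at module level lowers it to a fused kernel, which is what
# recovers FA2/FA3-level throughput; the attention backend then calls the
# compiled version instead of the plain function.
compiled_flex_attention = torch.compile(flex_attention)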

[Screenshot from 2025-02-28 18-33-39]

The flex attention blog post explains that the speedup mainly comes from the use of torch.compile, as do other references.

It seems that the transformers Llama implementation still underperforms compared to flex_attention in the torchtitan codebase (which reaches 45% MFU, a level I cannot reach when using transformers Llama).

IMO, it would be nice to reach good performance on transformers Llama, as it is the default implementation in a lot of codebases. I think there are a lot of graph breaks when using torch.compile, probably caused by parameters that are not used during LLM training (attention_mask, etc.).
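A quick way to see where those graph breaks happen (a hypothetical diagnostic sketch, not part of this PR; input_ids stands for any batch of token ids):

import torch

# torch._dynamo.explain runs the forward pass under Dynamo and reports how many
# graphs the model was split into and the reason for each break.
explanation = torch._dynamo.explain(model)(input_ids)
print(explanation)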

P.S. The code I used for the benchmark is in this PR: PrimeIntellect-ai/prime#224

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?

Who can review?

Models:

Signed-off-by: Sami Jaghouar <[email protected]>

torch compiled flash attention

Signed-off-by: Sami Jaghouar <[email protected]>
@github-actions github-actions bot marked this pull request as draft March 1, 2025 02:44

github-actions bot commented Mar 1, 2025

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. When it is ready for review, please click the Ready for review button (at the bottom of the PR page).

@samsja samsja marked this pull request as ready for review March 1, 2025 02:44

@ArthurZucker ArthurZucker left a comment


Yep indeed, we have something that should fix it: #36103 will be merged next week!


samsja commented Mar 1, 2025

Ah nice, will close this PR then.

@samsja samsja closed this Mar 1, 2025