torch.compile + CUDA Graph optimization for bs=1 #328
YJYJLee wants to merge 1 commit into facebookresearch:main
Conversation
Hi @YJYJLee! Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention. You currently have a record in our system, but the CLA is no longer valid and will need to be resubmitted. Process: in order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Hello @YJYJLee, could you please indicate your torch version? And which of these modifications should we apply to other libraries like fairseq?
PR for PyTorch blog post.
Summary:
This post is the fourth part of a multi-part blog series on accelerating generative AI models with pure, native PyTorch. In this post, we focus on speeding up FAIR's Seamless M4T-v2 model using CUDA Graphs and the native PyTorch optimization torch.compile, achieving a 2x speedup for the text decoder module and a 30x speedup for the vocoder module, which yields a 2.7x end-to-end inference speedup with no loss of accuracy.
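As a rough illustration of the technique named in the summary (not the actual Seamless M4T-v2 code in this PR), the sketch below shows the general pattern of wrapping a decoder-style module with torch.compile in "reduce-overhead" mode, which captures CUDA Graphs to cut kernel-launch overhead for small, fixed-shape (bs=1) inference. The module, dimensions, and input shapes here are hypothetical placeholders.

```python
# Minimal sketch, assuming a toy decoder module; the real model and shapes differ.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ToyDecoder().to(device).eval()

# mode="reduce-overhead" makes torch.compile use CUDA Graphs, which mainly helps
# when per-kernel launch overhead dominates, e.g. batch size 1 decoding.
compiled = torch.compile(model, mode="reduce-overhead", fullgraph=True)

x = torch.randn(1, 1024, device=device)  # bs=1 input with a fixed shape
with torch.inference_mode():
    for _ in range(3):       # warm-up iterations trigger compilation and graph capture
        _ = compiled(x)
    out = compiled(x)        # later calls replay the captured CUDA Graph
print(out.shape)
```

Note that CUDA Graph replay requires static shapes and stable memory addresses, which is why the warm-up iterations and a fixed bs=1 input shape matter in this pattern.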