[PyTorch] CPU Overhead Micro-optimizations #2146
base: main
Conversation
Signed-off-by: zhongboz <[email protected]>
Force-pushed from 81adf6d to dc2532a
Signed-off-by: zhongboz <[email protected]>
@@ -641,11 +641,15 @@ void nvte_destroy_quantization_config(NVTEQuantizationConfig config) {
}

int nvte_is_non_tn_fp8_gemm_supported() {
This doesn't handle the case where we have multiple GPUs with different archs. We could add an arg for the device ID, but that just pushes the CPU overhead problem somewhere else.
Yeah, but we didn't really support this case anyway?
For a topology like one CPU with 4 or 8 GPUs of homogeneous arch, we can cache the TN-layout support check.
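To illustrate the caching idea, here is a minimal, hypothetical Python sketch (not the PR's actual code) that memoizes the per-device check so the arch query happens only once per device index. The SM 10.0 threshold is an assumption standing in for whatever nvte_is_non_tn_fp8_gemm_supported() actually tests; the function name is illustrative.

```python
# Hypothetical sketch: cache the per-device "non-TN FP8 GEMM supported"
# check so the arch query only happens once per device, assuming the set
# of visible GPUs does not change at runtime.
import functools

import torch


@functools.lru_cache(maxsize=None)
def non_tn_fp8_gemm_supported(device_index: int) -> bool:
    # Placeholder arch rule (an assumption): treat SM 10.x and newer as
    # supporting non-TN FP8 GEMM layouts, and older archs as TN-only.
    # The real criterion lives in nvte_is_non_tn_fp8_gemm_supported().
    major, _minor = torch.cuda.get_device_capability(device_index)
    return major >= 10
```

With heterogeneous GPUs the cache key (the device index) keeps the answers distinct, so the caching itself does not bake in a homogeneity assumption; the remaining question is only who supplies the device index on the hot path.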
with torch.cuda.device(
    getattr(self, list(self.named_parameters())[0][0]).device
), self.prepare_forward(
if is_first_microbatch is None or is_first_microbatch:
How can we assume it is safe to skip setting the device when is_first_microbatch=False?
I assume that the device won't change across microbatches in a global batch.
In a CPU-bound, forward-only case, skipping the device set on every single forward pass can account for roughly a 10% perf difference.
This approach is really ad hoc. Personally, I think it would be better not to support the multi-device case at all (basically revert #1974) than to have inconsistent multi-device support.
I agree, but I'm not sure whether there would be any impact on customers already relying on this feature.
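To make the trade-off concrete, here is a minimal, hypothetical sketch of the pattern being debated. The module and everything other than the is_first_microbatch flag are illustrative, not the PR's actual code: the torch.cuda.device context is only entered on the first microbatch, which saves the per-call device switch but bakes in the assumption that the parameters stay on the same CUDA device for the rest of the global batch.

```python
# Hypothetical sketch of the micro-optimization under discussion.
import contextlib

import torch


class ExampleModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, inp, is_first_microbatch=None):
        if is_first_microbatch is None or is_first_microbatch:
            # First microbatch (or no microbatching): switch to the device
            # that holds the parameters, as the original code always did.
            # Assumes the parameters already live on a CUDA device.
            device_ctx = torch.cuda.device(self.linear.weight.device)
        else:
            # Later microbatches: skip the device set entirely; this is the
            # assumption the reviewers are debating above.
            device_ctx = contextlib.nullcontext()
        with device_ctx:
            return self.linear(inp)
```

On a single-device setup this behaves identically to always entering the context; the saving is the avoided device switch (and the Python attribute walk that computes it) on every subsequent forward pass.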
Signed-off-by: zhongboz <[email protected]>
/te-ci pytorch L1
/te-ci pytorch L1
Description
Motivation: #2053
Fixes # (issue)
Type of change
Changes
Please list the changes introduced in this PR:
Checklist: