Reapply #9842: Save some size in dtype_util when dtype selective build is not in use #10490
base: gh/swolchok/429/head
Conversation
Stack from ghstack (oldest at bottom):
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10490. Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 0277317 with merge base 1bd7260.
```diff
@@ -285,6 +289,29 @@ store_compute_to_tensor_fn<CTYPE_COMPUTE> get_store_compute_to_tensor_fn(
   return nullptr;
 }
 
+#ifndef EXECUTORCH_SELECTIVE_BUILD_DTYPE
+inline constexpr const char kGenericElementwiseOpName[] =
```
Marking this `inline` was needed for size (it changes the linkage; otherwise the array is duplicated across translation units) and is a difference from the first attempt.
We duplicate a lot of functions depending on the operator name so that dtype selective build will work. We can detect whether dtype selective build is in use and, if not, stop duplicating.
Test Plan: compared results of `bash test/build_optimized_size_test.sh` before and after this revision.
Before:
After:
(This was reverted because the diff it was stacked on was a size regression. This time around the order is reversed, and the part of the change that was actually regressing size has been reverted.)