
Bugfix to decay Policy for __result_and_scratch_storage #2031

Merged: 7 commits into main, Jan 31, 2025

Conversation

@danhoeflinger (Contributor) commented Jan 30, 2025

Always decay the policy before use in __result_and_scratch_storage.

This also reverts part of #1997, which worked around this bug at the usage sites of __result_and_scratch_storage.
It also makes minor improvements to the template arguments of the reduce submitter, requiring the decayed policy used in __result_and_scratch_storage to match the policy used to submit.

@danhoeflinger danhoeflinger added this to the 2022.8.0 milestone Jan 30, 2025
Signed-off-by: Dan Hoeflinger <[email protected]>
(reverting part of #1997)

Signed-off-by: Dan Hoeflinger <[email protected]>
@danhoeflinger danhoeflinger marked this pull request as draft January 31, 2025 00:56
@danhoeflinger danhoeflinger marked this pull request as ready for review January 31, 2025 02:23
@@ -1369,8 +1371,8 @@ __parallel_partition_copy(oneapi::dpl::__internal::__device_backend_tag __backen
         oneapi::dpl::__par_backend_hetero::__write_to_id_if_else<oneapi::dpl::__internal::__pstl_assign>;
     try
     {
-        return __parallel_reduce_then_scan_copy(__backend_tag, __exec, __rng, __result, __n, _GenMask{__pred},
-                                                _WriteOp{},
+        return __parallel_reduce_then_scan_copy(__backend_tag, std::forward<_ExecutionPolicy>(__exec), __rng,
Contributor commented:
The same

@danhoeflinger (Author) replied:
fixed.

@@ -1543,17 +1546,18 @@ __parallel_set_op(oneapi::dpl::__internal::__device_backend_tag __backend_tag, c
 {
     try
     {
-        return __parallel_set_reduce_then_scan(__backend_tag, __exec, __rng1, __rng2, __result, __comp,
-                                               __is_op_difference);
+        return __parallel_set_reduce_then_scan(__backend_tag, std::forward<_ExecutionPolicy>(__exec), __rng1,
Contributor commented:
The same

@danhoeflinger (Author) replied:
fixed.

@@ -2492,8 +2497,8 @@ __parallel_reduce_by_segment(oneapi::dpl::__internal::__device_backend_tag, cons
     try
     {
         auto __res = oneapi::dpl::__par_backend_hetero::__parallel_reduce_by_segment_reduce_then_scan(
-            oneapi::dpl::__internal::__device_backend_tag{}, __exec, __keys, __values, __out_keys, __out_values,
-            __binary_pred, __binary_op);
+            oneapi::dpl::__internal::__device_backend_tag{}, std::forward<_ExecutionPolicy>(__exec), __keys,
Contributor commented:
The same

@danhoeflinger (Author) replied:
fixed.
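The hunks above all make the same change: the execution policy is passed with `std::forward<_ExecutionPolicy>(__exec)` instead of by name. A minimal standalone sketch (illustrative only, not oneDPL code; the helper names `__pass_by_name` and `__pass_forwarded` are hypothetical) shows why this matters: a named parameter is always an lvalue, so without `std::forward` the callee's forwarding reference deduces a reference-qualified policy type even when the caller passed an rvalue.

```cpp
#include <type_traits>
#include <utility>
#include <cassert>

struct policy {};

// Reports whether the argument reaches us as an lvalue.
template <typename _T>
constexpr bool __is_lvalue(_T&&)
{
    return std::is_lvalue_reference_v<_T&&>;
}

// Passing the parameter by name: the named parameter __exec is always an
// lvalue, regardless of what the caller passed.
template <typename _ExecutionPolicy>
bool __pass_by_name(_ExecutionPolicy&& __exec)
{
    return __is_lvalue(__exec);
}

// std::forward restores the caller's original value category.
template <typename _ExecutionPolicy>
bool __pass_forwarded(_ExecutionPolicy&& __exec)
{
    return __is_lvalue(std::forward<_ExecutionPolicy>(__exec));
}
```

With forwarding, an rvalue policy stays an rvalue all the way down the call chain, so downstream deductions see a consistent type.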

@SergeyKopienko (Contributor) commented Jan 31, 2025

@danhoeflinger, you wrote in the description of this PR:
"This also reverts some of #1997 which was working around this bug in the usage of __result_and_scratch_storage."

Could you explain how this bug has been fixed at the __result_and_scratch_storage level?
It looks like it hasn't been fixed...

@danhoeflinger (Author) replied:

> @danhoeflinger, you wrote in the description of this PR: "This also reverts some of #1997 which was working around this bug in the usage of __result_and_scratch_storage." Could you explain how this bug has been fixed at the __result_and_scratch_storage level? It looks like it hasn't been fixed...

The bug is in the usage of __result_and_scratch_storage, not in the type itself. This PR fixes that usage by decaying the policy type everywhere we specify the type of __result_and_scratch_storage (but not in overloads where the type is deduced instead). We can still consider whether there is a better way to "get" the type of __result_and_scratch_storage without always having to remember to decay the policy.

Perhaps we can add a using statement to define __result_and_scratch_storage_t which automatically decays the policy type...
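A minimal sketch of that idea, using hypothetical stand-in types rather than oneDPL's actual definitions: the alias template applies `std::decay_t` once, so every call site names the same storage type even when the policy was deduced as a reference through a forwarding reference.

```cpp
#include <type_traits>

// Hypothetical stand-in for the real class; only the template signature matters here.
template <typename _Policy, typename _T>
struct __result_and_scratch_storage
{
};

// The proposed using-statement: decays the policy so reference-qualified
// deductions (e.g. my_policy&) and plain my_policy name the same storage type.
template <typename _Policy, typename _T>
using __result_and_scratch_storage_t =
    __result_and_scratch_storage<std::decay_t<_Policy>, _T>;

struct my_policy {};

// In a submitter taking a forwarding reference, _ExecutionPolicy deduces to
// my_policy& for lvalue arguments. The alias hides that difference.
template <typename _ExecutionPolicy>
bool __storage_type_matches(_ExecutionPolicy&&)
{
    return std::is_same_v<__result_and_scratch_storage_t<_ExecutionPolicy, int>,
                          __result_and_scratch_storage<my_policy, int>>;
}
```

Without the decay inside the alias, an lvalue call site would instantiate `__result_and_scratch_storage<my_policy&, int>`, a distinct type from the one the kernel submission expects.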

@danhoeflinger (Author) commented Jan 31, 2025

@mmichel11 @SergeyKopienko please look at #2033; do we like that additional change?

It makes usage easier, but obfuscates what is going on.

Edit: I've incorporated those changes. I think they are for the better on the whole.

* improve usage pattern for reduce_and_scratch_storage

Signed-off-by: Dan Hoeflinger <[email protected]>

* adjust names

Signed-off-by: Dan Hoeflinger <[email protected]>

* missed a couple names to change

Signed-off-by: Dan Hoeflinger <[email protected]>

* adding comment

Signed-off-by: Dan Hoeflinger <[email protected]>

* Formatting

Signed-off-by: Dan Hoeflinger <[email protected]>

---------

Signed-off-by: Dan Hoeflinger <[email protected]>
@mmichel11 (Contributor) left a review:
LGTM

@danhoeflinger danhoeflinger merged commit 7bbaf83 into main Jan 31, 2025
22 checks passed
@danhoeflinger danhoeflinger deleted the dev/dhoeflin/decay_policy_result_scratch branch January 31, 2025 19:11