[SPARK-52006][SQL][CORE] Exclude CollectMetricsExec accumulator from Spark UI + event logs + metric heartbeats #50812
Closed
Conversation
cc @cloud-fan can you please take a look? Thanks.
cloud-fan reviewed on May 8, 2025
Review comment on sql/core/src/test/scala/org/apache/spark/sql/util/DataFrameCallbackSuite.scala (resolved)
cloud-fan approved these changes on May 8, 2025
thanks, merging to master/4.0!
cloud-fan pushed a commit that referenced this pull request on May 9, 2025:
[SPARK-52006][SQL][CORE] Exclude CollectMetricsExec accumulator from Spark UI + event logs + metric heartbeats

### What changes were proposed in this pull request?

[CollectMetricsExec](https://github.com/apache/spark/blob/1d0e82fb59afbc8d846470b980c3e926ad91513b/sql/core/src/main/scala/org/apache/spark/sql/execution/CollectMetricsExec.scala#L33) uses an [AggregatingAccumulator](https://github.com/apache/spark/blob/1d0e82fb59afbc8d846470b980c3e926ad91513b/sql/core/src/main/scala/org/apache/spark/sql/execution/AggregatingAccumulator.scala#L33) to emit metrics through a side channel that can be collected on the driver. There are a few issues with this:

- It is a named accumulator but not an internal accumulator, so its result is shown in the Spark UI. That result is not meaningful to users because it is the `.toString` representation of an UnsafeRow holding an aggregation buffer.
- The same string representation is also logged in event logs on a per-task basis, which is unnecessary.
- Most importantly, there is a race condition when the accumulator is serialized by both the executor heartbeat and task result serialization (the heartbeat serializes it first, then task result serialization follows), which leads to task failures because [TypedImperativeAggregate.serializeAggregateBufferInPlace](https://github.com/apache/spark/blob/1d0e82fb59afbc8d846470b980c3e926ad91513b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala#L620) is not idempotent.

![Error Stack](https://github.com/user-attachments/assets/c86bc38e-8e97-495a-8dbd-9dc9e84bf6c4)

To fix these issues, this PR makes the following changes:

- Mark the accumulator as an internal metric to exclude it from the Spark UI.
- Explicitly exclude this accumulator from event logs in JsonProtocol.
- Explicitly exclude it from heartbeats in Executor.

### Why are the changes needed?

Fixes the race condition and removes unnecessary information from the Spark UI and event logs.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

UT added. Manually tested with the repro below on a standalone cluster, since the issue can't be triggered in local mode:

```scala
import org.apache.spark.TaskContext

val df = spark.range(100)
  .mapPartitions { iter =>
    TaskContext.get().addTaskCompletionListener[Unit] { _ => Thread.sleep(30000L) }
    iter
  }.toDF("id")
  .observe(
    name = "my_event",
    max($"id").as("max_val"),
    percentile_approx($"id", lit(0.5), lit(100)),
    percentile_approx($"id", lit(0.5), lit(100)),
    min($"id").as("min_val"))

df.collect()
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #50812 from ivoson/SPARK-52006.

Authored-by: Tengfei Huang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
(cherry picked from commit e74c5c4)
Signed-off-by: Wenchen Fan <[email protected]>
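To make the race concrete, here is a deliberately simplified, hypothetical sketch (these classes are not Spark's actual code): it shows why an in-place, non-idempotent serialization step such as `TypedImperativeAggregate.serializeAggregateBufferInPlace` cannot safely be invoked by both the heartbeat thread and task-result serialization.

```scala
// Hypothetical classes, for illustration only: an aggregation buffer whose
// in-place serialization is not idempotent, mirroring the failure mode above.
class InPlaceBuffer {
  // Starts as a typed in-memory object; serializing replaces it with bytes.
  private var state: AnyRef = Seq(1L, 2L, 3L)

  def serializeInPlace(): Unit = state match {
    case _: Array[Byte] =>
      // A second caller finds bytes where it expects the typed object.
      throw new IllegalStateException("buffer already serialized in place")
    case typed =>
      state = typed.toString.getBytes("UTF-8")
  }
}

object RaceSketch extends App {
  val buffer = new InPlaceBuffer

  // 1) The executor heartbeat serializes the accumulator first...
  buffer.serializeInPlace()

  // 2) ...then task-result serialization serializes it again and fails.
  //    Excluding the accumulator from heartbeats removes the first call.
  buffer.serializeInPlace()
}
```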
What changes were proposed in this pull request?
CollectMetricsExec uses an AggregatingAccumulator to emit metrics through a side channel that can be collected on the driver. There are a few issues with this:

- It is a named accumulator but not an internal accumulator, so its result is shown in the Spark UI. That result is not meaningful to users because it is the `.toString` representation of an UnsafeRow holding an aggregation buffer.
- The same string representation is also logged in event logs on a per-task basis, which is unnecessary.
- Most importantly, there is a race condition when the accumulator is serialized by both the executor heartbeat and task result serialization, which leads to task failures because TypedImperativeAggregate.serializeAggregateBufferInPlace is not idempotent.

To fix these issues, this PR makes the following changes (see the sketch after this list for how the observed metrics are consumed):

- Mark the accumulator as an internal metric to exclude it from the Spark UI.
- Explicitly exclude this accumulator from event logs in JsonProtocol.
- Explicitly exclude it from heartbeats in Executor.
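For context on the side channel mentioned above, the following is a minimal sketch of how observed metrics are typically consumed on the driver via the `Observation` helper; the session setup, column names, and metric names here are illustrative and not taken from this PR.

```scala
import org.apache.spark.sql.{Observation, SparkSession}
import org.apache.spark.sql.functions.{max, min}

object ObserveSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("observe-sketch").getOrCreate()
    import spark.implicits._

    // observe() plans a CollectMetricsExec node; its AggregatingAccumulator
    // is the accumulator this PR hides from the UI, event logs, and heartbeats.
    val observation = Observation("my_event")
    val df = spark.range(100).toDF("id")
      .observe(observation, max($"id").as("max_val"), min($"id").as("min_val"))

    // The metrics only materialize once an action executes the plan.
    df.collect()

    // Blocks until the observed metrics arrive on the driver.
    println(observation.get) // e.g. Map(max_val -> 99, min_val -> 0)

    spark.stop()
  }
}
```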
Why are the changes needed?
Fixes the race condition and removes unnecessary information from the Spark UI and event logs.
Does this PR introduce any user-facing change?
No
How was this patch tested?
UT added.
Manually tested with the repro below on a standalone cluster, since the issue can't be triggered in local mode:
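```scala
import org.apache.spark.TaskContext

val df = spark.range(100)
  .mapPartitions { iter =>
    TaskContext.get().addTaskCompletionListener[Unit] { _ => Thread.sleep(30000L) }
    iter
  }.toDF("id")
  .observe(
    name = "my_event",
    max($"id").as("max_val"),
    percentile_approx($"id", lit(0.5), lit(100)),
    percentile_approx($"id", lit(0.5), lit(100)),
    min($"id").as("min_val"))

df.collect()
```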
Was this patch authored or co-authored using generative AI tooling?
No