[SPARK-49723][SQL] Add Variant metrics to the JSON File Scan node #48172
Conversation
common/variant/src/main/java/org/apache/spark/types/variant/VariantBuilder.java
/** Only report variant metrics if the data source file format is JSON */
override lazy val metrics: Map[String, SQLMetric] = super.metrics ++ {
  if (relation.fileFormat.isInstanceOf[JsonFileFormat]) variantBuilderMetrics
does json scan always produce variants? If not we should only display these metrics when variant will be produced by the scan.
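The gating the reviewer suggests could be sketched as below. This is a simplified model, not Spark's real API: `producesVariant` stands in for a check against the scan's read schema, and `Map[String, Long]` stands in for `Map[String, SQLMetric]`; the metric names are illustrative.

```scala
// Sketch: expose variant metrics only when the scan is JSON AND will
// actually produce a variant column (per the review comment).
object MetricsGating {
  def metricsFor(base: Map[String, Long],
                 isJsonScan: Boolean,
                 producesVariant: Boolean): Map[String, Long] = {
    // Hypothetical metric names, standing in for the PR's variantBuilderMetrics.
    val variantMetrics = Map("topLevelVariants" -> 0L, "nestedVariants" -> 0L)
    if (isJsonScan && producesVariant) base ++ variantMetrics else base
  }
}
```

With this shape, a JSON scan whose schema contains no variant column reports only the base metrics, avoiding always-zero variant counters in the UI.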
val readFile: (PartitionedFile) => Iterator[InternalRow] = {
  val hadoopConf = relation.sparkSession.sessionState.newHadoopConfWithOptions(relation.options)
  relation.fileFormat match {
    case f: JsonFileFormat =>
We should probably make it more general and allow FileFormat
implementations to report additional metrics.
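The generalization the reviewer proposes might look like the sketch below. The trait and method names here are illustrative stand-ins, not Spark's actual `FileFormat` API (which lives in `org.apache.spark.sql.execution.datasources`), and `Map[String, Long]` stands in for real `SQLMetric` instances.

```scala
// Sketch: a default-empty metrics hook on the format trait, so any file
// format can contribute extra scan metrics without the scan node
// special-casing JSON.
trait FileFormatSketch {
  // Formats that have nothing extra to report inherit the empty default.
  def supportedCustomMetrics: Map[String, Long] = Map.empty
}

class JsonFileFormatSketch extends FileFormatSketch {
  // Hypothetical metric names for variants built during a JSON scan.
  override def supportedCustomMetrics: Map[String, Long] =
    Map("topLevelVariantsBuilt" -> 0L, "nestedVariantsBuilt" -> 0L)
}
```

The scan node would then merge `relation.fileFormat.supportedCustomMetrics` into its metrics map unconditionally, with no `isInstanceOf[JsonFileFormat]` check.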
What changes were proposed in this pull request?
This pull request adds metrics to JSON file scan nodes that track the variants constructed as part of the scan. Top-level and nested variant metrics are reported separately, as they can have different usage patterns.
singleVariantColumn scans, and columns in user-provided-schema scans whose type is a top-level variant (not a variant nested in a struct/array/map), are counted as top-level variants; variants nested inside other data types are counted as nested variants.
Why are the changes needed?
This change allows users to collect metrics on variant usage to better monitor their data/workloads.
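The top-level vs. nested distinction described above can be sketched as a recursive schema walk. The type hierarchy below is a simplified stand-in for Spark's `org.apache.spark.sql.types` classes, kept minimal so the classification logic is visible.

```scala
// Simplified model of Spark's DataType tree (illustrative, not the real classes).
sealed trait DataType
case object VariantType extends DataType
case object StringType extends DataType
case class ArrayType(element: DataType) extends DataType
case class MapType(key: DataType, value: DataType) extends DataType
case class StructType(fields: Seq[DataType]) extends DataType

object VariantClassify {
  // A column is a top-level variant only when its type is VariantType itself.
  def isTopLevelVariant(dt: DataType): Boolean = dt == VariantType

  // A column holds nested variants when VariantType appears somewhere
  // inside a struct/array/map, but not at the top level.
  def hasNestedVariant(dt: DataType): Boolean = {
    def contains(t: DataType): Boolean = t match {
      case VariantType    => true
      case ArrayType(e)   => contains(e)
      case MapType(k, v)  => contains(k) || contains(v)
      case StructType(fs) => fs.exists(contains)
      case _              => false
    }
    dt match {
      case VariantType => false // top level, not nested
      case other       => contains(other)
    }
  }
}
```

Under this rule, a `VariantType` column increments the top-level counters, while a `StructType` containing a variant field increments the nested ones.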
Does this PR introduce any user-facing change?
Users will now be able to see variant metrics in JSON scan nodes, which were not previously available.
How was this patch tested?
Comprehensive unit tests in VariantEndToEndSuite.scala
Was this patch authored or co-authored using generative AI tooling?
Yes, with some help on Scala syntax.
Generated by: ChatGPT 4o, GitHub Copilot.