[SPARK-45979][INFRA] Add Python 3.11 to Infra docker image #123

Triggered via push November 17, 2023 20:07
Status Success
Total duration 4m 1s
Artifacts

Annotations

1 error
KafkaSourceStressForDontFailOnDataLossSuite.stress test for failOnDataLoss=false: KafkaSourceStressForDontFailOnDataLossSuite#L1
org.apache.spark.sql.streaming.StreamingQueryException: [STREAM_FAILED] Query [id = 0164970a-abe8-4980-a53f-d5fbdddae3ee, runId = d053776e-5b1a-4aa6-b3d6-241c0aa23c41] terminated with exception: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. SQLSTATE: XXKST
=== Streaming Query ===
Identifier: [id = 0164970a-abe8-4980-a53f-d5fbdddae3ee, runId = d053776e-5b1a-4aa6-b3d6-241c0aa23c41]
Current Committed Offsets: {KafkaV2[SubscribePattern[failOnDataLoss.*]]: {"failOnDataLoss-2":{"0":24},"failOnDataLoss-0":{"0":10}}}
Current Available Offsets: {KafkaV2[SubscribePattern[failOnDataLoss.*]]: {"failOnDataLoss-2":{"0":24},"failOnDataLoss-0":{"0":10}}}

Current State: ACTIVE
Thread State: RUNNABLE

Logical Plan:
WriteToMicroBatchDataSource ForeachWriterTable(org.apache.spark.sql.kafka010.KafkaSourceStressForDontFailOnDataLossSuite$$anon$2@19a9175f,Left(class[value[0]: int])), 0164970a-abe8-4980-a53f-d5fbdddae3ee, Append
+- SerializeFromObject [input[0, int, false] AS value#56101]
   +- MapElements org.apache.spark.sql.kafka010.KafkaSourceStressForDontFailOnDataLossSuite$$Lambda$7622/0x00007f5f95822708@1e064d9a, class scala.Tuple2, [StructField(_1,StringType,true), StructField(_2,StringType,true)], obj#56100: int
      +- DeserializeToObject newInstance(class scala.Tuple2), obj#56099: scala.Tuple2
         +- Project [cast(key#56075 as string) AS key#56089, cast(value#56076 as string) AS value#56090]
            +- StreamingDataSourceV2Relation [key#56075, value#56076, topic#56077, partition#56078, offset#56079L, timestamp#56080, timestampType#56081], org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaScan@7e1fd137, KafkaV2[SubscribePattern[failOnDataLoss.*]]
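For context on what this suite exercises: a minimal sketch of the Kafka source configuration it stress-tests, using the `failOnDataLoss` and `subscribePattern` options visible in the query plan above. This is not code from the PR; the session name `spark` and the broker address are illustrative assumptions.

```scala
// Sketch (assumed setup, not from this PR): reading Kafka with
// failOnDataLoss=false, the option the failing suite stress-tests.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker address
  .option("subscribePattern", "failOnDataLoss.*")      // topic pattern from the query plan above
  // Don't fail the query when a topic/partition disappears mid-stream;
  // the suite deletes topics while reading, which can surface
  // UnknownTopicOrPartitionException from the broker as seen here.
  .option("failOnDataLoss", "false")
  .load()
```

With `failOnDataLoss=false`, Spark logs a warning and continues instead of terminating the query on missing offsets, which is why a broker-side `UnknownTopicOrPartitionException` escaping as `STREAM_FAILED` marks this run as a flaky-test failure rather than expected behavior.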