What would you like to happen?
Currently, SparkReceiverIO reads data using a single worker because the Read transform initializes with Impulse.create(), which produces a single initial element. This creates a scalability bottleneck as all data ingestion is constrained to one machine, regardless of the available worker pool.
I would like to implement a parallel reading mechanism in SparkReceiverIO. This involves:
- Adding a withNumReaders(int) configuration method to the builder.
- Refactoring the implementation to use Create.of(shards) followed by Reshuffle when numReaders > 1 is specified (see the sketch below).
- Ensuring backward compatibility by defaulting to the single-reader behavior when numReaders is not specified.
This enhancement will allow SparkReceiverIO to scale horizontally, significantly increasing throughput for high-volume use cases.
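For illustration, here is a minimal sketch of how the expansion could branch on numReaders. The ShardedRead class name, the placeholder DoFn body, and the exact wiring are assumptions for this sketch, not the actual SparkReceiverIO internals; the real change would thread the shard index into the receiver setup:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Impulse;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

/** Hypothetical sharded read illustrating the proposed expansion logic. */
public class ShardedRead extends PTransform<PBegin, PCollection<String>> {

  private final int numReaders;

  public ShardedRead(int numReaders) {
    this.numReaders = numReaders;
  }

  @Override
  public PCollection<String> expand(PBegin input) {
    PCollection<Integer> shards;
    if (numReaders > 1) {
      // One element per desired reader; Reshuffle breaks fusion so the
      // shards can be distributed across the worker pool instead of
      // staying pinned to a single machine.
      List<Integer> shardIds = new ArrayList<>();
      for (int i = 0; i < numReaders; i++) {
        shardIds.add(i);
      }
      shards = input.apply(Create.of(shardIds)).apply(Reshuffle.viaRandomKey());
    } else {
      // Backward-compatible single-reader path, equivalent to the current
      // Impulse.create() behavior: exactly one initial element.
      shards =
          input
              .apply(Impulse.create())
              .apply(MapElements.into(TypeDescriptors.integers()).via(b -> 0));
    }
    // Placeholder for the per-shard receiver read; the real connector
    // would start a SparkReceiver scoped to its shard here.
    return shards.apply(
        ParDo.of(
            new DoFn<Integer, String>() {
              @ProcessElement
              public void processElement(
                  @Element Integer shard, OutputReceiver<String> out) {
                out.output("records for shard " + shard);
              }
            }));
  }
}
```

Reshuffle.viaRandomKey() is the key step: it prevents the runner from fusing the Create of shard ids with the downstream read, so each shard can be scheduled on a different worker.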
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)
Issue Components
- Component: Python SDK
- Component: Java SDK
- Component: Go SDK
- Component: Typescript SDK
- Component: IO connector
- Component: Beam YAML
- Component: Beam examples
- Component: Beam playground
- Component: Beam katas
- Component: Website
- Component: Infrastructure
- Component: Spark Runner
- Component: Flink Runner
- Component: Samza Runner
- Component: Twister2 Runner
- Component: Hazelcast Jet Runner
- Component: Google Cloud Dataflow Runner