
How do you setup multiple Spark Streaming jobs with different batch durations?


In my experience, different streams have different tuning requirements: throughput, latency, the capacity of the receiving side, SLAs to be respected, and so on.

To cater for that multiplicity, we need to configure each Spark Streaming job for its particular requirements: not only the batch interval but also resources such as memory and CPU, data partitioning, and the number of executor nodes (when the workload is network-bound).
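As a minimal sketch of that parameterization (the app name, source, and resource figures here are assumptions for illustration, not a prescription), each job can take its batch interval as a program argument and carry its own resource settings in its SparkConf, so the same jar can be deployed once per stream with different tuning:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TunableStreamingJob {
  def main(args: Array[String]): Unit = {
    // Batch interval passed per deployment, e.g. "5" for a 5-second batch.
    val batchSeconds = args(0).toLong

    val conf = new SparkConf()
      .setAppName(s"clickstream-job-${batchSeconds}s") // hypothetical job name
      // Per-job resources: each deployment sizes itself independently.
      .set("spark.executor.memory", "2g")
      .set("spark.cores.max", "4")

    val ssc = new StreamingContext(conf, Seconds(batchSeconds))

    // Hypothetical source; swap in Kafka, Kinesis, etc. as needed.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Launching this jar twice with different arguments and configuration yields two independent applications, each tuned to its own stream.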

It follows that each Spark Streaming job becomes a separate deployment on the Spark cluster. That also allows each pipeline to be monitored and managed independently of the others, which helps when further fine-tuning the processes.

In our case, we use Mesos + Marathon to manage our set of Spark Streaming jobs running 3600x24x7.
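As a rough illustration of that kind of setup (the paths, master URL, and resource figures are assumptions, not our actual configuration), each job gets its own Marathon app definition, which Marathon keeps running and restarts on failure:

```json
{
  "id": "/streaming/clickstream-job",
  "cmd": "/opt/spark/bin/spark-submit --master mesos://zk://zk1:2181/mesos --class TunableStreamingJob /opt/jobs/streaming-jobs.jar 5",
  "cpus": 1.0,
  "mem": 2048,
  "instances": 1
}
```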