
Does Spark use data locality?


If you run Spark and Cassandra on the same physical machines, you should check out the spark-cassandra-connector. It ensures data locality for both reads and writes.

For example, if you load a Cassandra table into an RDD, the connector will always try to perform operations on this RDD locally on each node. And when you save the RDD back into Cassandra, the connector will try to save the results locally as well.
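A minimal sketch of that read/write round trip, assuming Spark executors are co-located with Cassandra nodes (the keyspace, table names, and connection host here are hypothetical placeholders):

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// spark.cassandra.connection.host must point at a node of your cluster.
val conf = new SparkConf()
  .setAppName("locality-demo")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)

// cassandraTable produces an RDD whose partitions carry preferred-location
// hints, so Spark schedules each task on a node that owns that token range.
val rdd = sc.cassandraTable("my_keyspace", "my_table")

// saveToCassandra writes each partition through a local coordinator
// when the executor runs on a Cassandra node.
rdd.saveToCassandra("my_keyspace", "my_table_copy")
```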

This assumes that your data is already balanced across your Cassandra cluster. If your partition key is not chosen well, you will end up with an unbalanced cluster anyway, and locality won't help you.

Also be aware of shuffling jobs in Spark. For example, if you perform a reduceByKey on an RDD, you'll end up moving data across the network anyway, so always plan these jobs carefully.
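As an illustrative sketch (sample data is made up), even a simple word-count style aggregation introduces a shuffle stage, though reduceByKey at least pre-aggregates on the map side so only the combined values per key cross the network:

```scala
// Assumes an existing SparkContext `sc`.
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// reduceByKey combines values locally within each partition first,
// then shuffles the partial sums so each key lands on one reducer.
val counts = pairs.reduceByKey(_ + _)   // triggers a shuffle stage
```

This is why reduceByKey is generally preferred over groupByKey followed by a reduce: the map-side combining shrinks the amount of data shuffled.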