Spark: long delay between jobs

I/O operations often come with significant overhead that occurs on the master (driver) node. Since this work isn't parallelized, it can take quite a bit of time, and because it is not a Spark job, it does not show up in the Resource Manager UI. Some examples of I/O tasks that are performed by the master node:

  • When writing to S3, Spark first writes to a temporary directory, then moves the files into place using the master node
  • Reading of text files often occurs on the master node
  • When writing Parquet files, the master node will scan all the files post-write to check the schema
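One commonly used mitigation for the rename overhead in the first bullet is Hadoop's FileOutputCommitter algorithm version 2, which commits task output directly to the destination instead of having the driver move files at job commit. A minimal sketch (the script name is hypothetical; note that v2 is not atomic, so partial output can be visible if the job fails):

```shell
# Sketch: reduce driver-side renames at job commit by switching to
# FileOutputCommitter algorithm version 2 (a Hadoop setting, not
# specific to EMR).
spark-submit \
  --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
  my_job.py
```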

These issues can often be mitigated by tweaking YARN settings or redesigning your code. If you provide some source code, I might be able to pinpoint your issue.

Discussion of write I/O overhead with Parquet and S3

Discussion of read I/O overhead: "S3 is not a filesystem"


Problem:

I faced a similar issue when writing Parquet data to S3 with PySpark on EMR 5.5.1. All workers would finish writing data to the _temporary directory in the output folder, and the Spark UI would show that all tasks had completed. But the Hadoop Resource Manager UI would neither release resources for the application nor mark it as complete. On checking the S3 bucket, it appeared that the Spark driver was moving the files one by one from the _temporary directory to the output bucket, which was extremely slow; the whole cluster was idle except for the driver node.
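One way to confirm this behavior is to list the _temporary prefix while the application appears stalled and watch objects slowly move out of it. This sketch assumes you have the AWS CLI available; the bucket and output prefix are hypothetical:

```shell
# While the application looks "stuck", repeatedly list the staging
# prefix. If objects are gradually disappearing from _temporary/ and
# appearing under the output prefix, the driver is doing the moves.
aws s3 ls --recursive s3://my-bucket/output/_temporary/ | head -20
aws s3 ls --recursive s3://my-bucket/output/ | wc -l
```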

Solution:

The solution that worked for me was to use the committer class provided by AWS (EmrOptimizedSparkSqlParquetOutputCommitter) by setting the configuration property spark.sql.parquet.fs.optimized.committer.optimization-enabled to true.

e.g.:

spark-submit ....... --conf spark.sql.parquet.fs.optimized.committer.optimization-enabled=true

or

pyspark ....... --conf spark.sql.parquet.fs.optimized.committer.optimization-enabled=true

Note that this property is available in EMR 5.19 or higher.
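If you build the session in code rather than passing --conf on the command line, the same property can be set on the SparkSession builder. A minimal sketch, assuming PySpark on an EMR 5.19+ cluster (the app name and S3 path are hypothetical):

```python
# Sketch: enable the EMR-optimized committer from code instead of
# passing --conf to spark-submit. Requires EMR 5.19 or higher.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("optimized-committer-demo")  # hypothetical app name
    .config("spark.sql.parquet.fs.optimized.committer.optimization-enabled",
            "true")
    .getOrCreate()
)

# With the optimized committer enabled, this write goes straight to the
# output location instead of staging files under _temporary/.
df = spark.range(1000)
df.write.mode("overwrite").parquet("s3://my-bucket/output/")  # hypothetical path
```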

Result:

After running the Spark job on EMR 5.20.0 with the above setting, it did not create any _temporary directory, and all the files were written directly to the output bucket, so the job finished very quickly.

For more details:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html