
Writing more than 50 million rows from a PySpark DataFrame to PostgreSQL: most efficient approach


I actually did a similar job a while ago, but using Apache Sqoop.

I would say that to answer this question we have to try to optimize the communication between Spark and PostgreSQL, specifically the data flowing from Spark to PostgreSQL.

But be careful, do not forget the Spark side. It does not make sense to execute mapPartitions if the number of partitions is too high compared with the maximum number of connections that PostgreSQL supports: if you have too many partitions and you open a connection for each one, you will probably get the following error: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
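As a minimal sketch of that idea with the standard Spark JDBC writer (the host, database, table, credentials, source path and the connection budget of 16 are placeholders you would replace with your own):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pg-bulk-insert").getOrCreate()

    # Hypothetical source; in your case df is the DataFrame you already have.
    df = spark.read.parquet("/path/to/source")

    # Each write partition becomes one JDBC connection, so keep the partition
    # count well below max_connections on the PostgreSQL side.
    max_parallel_connections = 16  # assumption: your PostgreSQL connection budget
    if df.rdd.getNumPartitions() > max_parallel_connections:
        df = df.coalesce(max_parallel_connections)

    (df.write
       .format("jdbc")
       .option("url", "jdbc:postgresql://db-host:5432/mydb")  # hypothetical host/db
       .option("dbtable", "public.target_table")              # hypothetical table
       .option("user", "spark_user")
       .option("password", "secret")
       .option("driver", "org.postgresql.Driver")
       .option("batchsize", 10000)  # rows per JDBC batch insert
       .mode("append")
       .save())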

In order to tune the insertion process, I would approach the problem with the following steps:

  • Remember the number of partitions is important. Check the number of partitions and then adjust it based on the number of parallel connections you want to have. You might want to have one connection per partition, so I would suggest checking coalesce, as mentioned here (the sketch above does exactly this).
  • Check the maximum number of connections your PostgreSQL instance supports, and increase it if you need more parallelism.
  • For inserting data into PostgreSQL, the COPY command is recommended. Here is also a more detailed answer about how to speed up PostgreSQL insertion (see the sketch after this list).
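
Putting the last two points together, here is a rough sketch of a COPY-based insert that opens one connection per partition via foreachPartition and psycopg2. This is an illustration, not a drop-in solution: the connection details, target table and column list (id, name, value) are assumptions, and it requires psycopg2 on the executors.

    import csv
    import io

    import psycopg2  # assumption: psycopg2 is installed on the executors

    def copy_partition_to_postgres(rows):
        """Stream one Spark partition into PostgreSQL with COPY ... FROM STDIN."""
        # Buffer the partition as CSV in memory; for very large partitions you
        # would stream in chunks instead.
        buf = io.StringIO()
        writer = csv.writer(buf)
        for row in rows:
            writer.writerow(row)
        buf.seek(0)

        conn = psycopg2.connect(
            host="db-host", port=5432, dbname="mydb",  # hypothetical connection details
            user="spark_user", password="secret",
        )
        try:
            with conn.cursor() as cur:
                cur.copy_expert(
                    "COPY public.target_table (id, name, value) "  # hypothetical table/columns
                    "FROM STDIN WITH (FORMAT csv)",
                    buf,
                )
            conn.commit()
        finally:
            conn.close()

    # df: the PySpark DataFrame from the sketch above.
    # One COPY (and one connection) per partition, so keep the partition count
    # below PostgreSQL's max_connections.
    df.coalesce(16).foreachPartition(copy_partition_to_postgres)

The design choice here is the same trade-off as in the first bullet: more partitions give you more parallel COPY streams, but each one holds a connection, so the coalesce value should stay within your PostgreSQL connection budget.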

Finally, there is no silver bullet for this job. You can use all the tips I mentioned above, but it will really depend on your data and use cases.