What is the most efficient way to create new Spark Tables or Data Frames in Sparklyr?



sdf_register is not very useful when dealing with long-running queries: it is essentially an unmaterialized view, which means the underlying query is re-run every time you reference it. Adding the following will write the data to Hive as a persistent table instead.

spark_dataframe %>% invoke("write") %>% invoke("saveAsTable", "your_desired_table_name")

This uses saveAsTable, which creates a table in Hive and keeps it even after the Spark session ends. Using createOrReplaceTempView, by contrast, does not persist the data once the Spark session ends.
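
For illustration, here is a minimal end-to-end sketch assuming a local Spark connection; the table names (mtcars_tmp, eight_cyl_view, heavy_query_result) are hypothetical. It copies mtcars into Spark to stand in for the input of a long-running query, contrasts sdf_register with the invoke-based saveAsTable call above, and reads the persisted table back with tbl().

library(sparklyr)
library(dplyr)

# Connect to Spark (local here; point master at your cluster in practice)
sc <- spark_connect(master = "local")

# Copy a small sample data set into Spark as the example input
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_tmp", overwrite = TRUE)

# sdf_register only registers a temporary view: the filter re-runs on every
# reference and the view disappears when the session ends
eight_cyl_view <- mtcars_tbl %>% filter(cyl == 8) %>% sdf_register("eight_cyl_view")

# saveAsTable materializes the result once as a Hive table
mtcars_tbl %>%
  filter(cyl == 8) %>%
  spark_dataframe() %>%                        # underlying Spark DataFrame reference
  invoke("write") %>%
  invoke("saveAsTable", "heavy_query_result")

# The saved table survives the session and can be read back later
result <- tbl(sc, "heavy_query_result")

Note that spark_dataframe() here is the sparklyr helper that extracts the Java DataFrame object from a tbl_spark; the invoke chain in the answer assumes you already have that object in hand.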