
Spark write data into partitioned Hive table very slow


When you create the table explicitly, that DDL defines the table's storage format. Text file is normally the default format in Hive, but the default could have been changed in your environment.

Add "STORED AS TEXTFILE" at the end of the CREATE statement to make sure the table is stored as plain text.
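For illustration, a minimal sketch of such a DDL (the table and column names here are hypothetical, not from the question):

```sql
-- Explicitly declare plain-text storage so the table does not
-- depend on whatever default format is configured in this environment.
CREATE TABLE IF NOT EXISTS my_table (
    id     BIGINT,
    name   STRING
)
PARTITIONED BY (dt STRING)
STORED AS TEXTFILE;
```

You can verify the resulting format afterwards with `DESCRIBE FORMATTED my_table;`, which lists the InputFormat/OutputFormat the table actually uses.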