
Writing a CSV with column names and reading a CSV file generated from a Spark SQL DataFrame in PySpark


Try

df.coalesce(1).write.format('com.databricks.spark.csv').save('path/my.csv', header='true')

Note that this may not be an issue on your current setup, but on extremely large datasets you can run into memory problems, since coalesce(1) funnels all the data through a single partition. It will also take longer in a cluster, as everything has to be pushed back to a single location.
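
If you are on Spark 2.0 or later, the CSV writer is built in, so the external com.databricks.spark.csv package is no longer needed. A minimal sketch, using a placeholder output path:

# Built-in CSV writer (Spark 2.0+); '/tmp/my_csv_out' is a placeholder directory,
# into which Spark writes a single part-* file because of coalesce(1).
df.coalesce(1).write.option('header', 'true').csv('/tmp/my_csv_out')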


Just in case: on Spark 2.1 you can create a single CSV file with the following lines (the snippet below is Scala)

dataframe.coalesce(1)  // so just a single part-* file will be created
  .write.mode(SaveMode.Overwrite)
  .option("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")  // avoid creating the _SUCCESS marker file
  .option("header", "true")  // write the header
  .csv("csvFullPath")
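
Since the question is about PySpark, a rough Python equivalent of that Scala snippet might look like this; 'csvFullPath' remains a placeholder path:

# A sketch of the same approach in PySpark; 'csvFullPath' is a placeholder output directory.
dataframe.coalesce(1) \
    .write \
    .mode('overwrite') \
    .option('mapreduce.fileoutputcommitter.marksuccessfuljobs', 'false') \
    .option('header', 'true') \
    .csv('csvFullPath')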


With Spark >= 2.0, we can do something like

df = spark.read.csv('path+filename.csv', sep=',', header=True)  # set sep only if it isn't the default comma
df.write.csv('path_filename of csv', header=True)               # yes, still in partitions
df.toPandas().to_csv('path_filename of csv', index=False)       # single csv (pandas style)
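
Note that df.write.csv produces a directory of part files rather than one file; to read the result back, point spark.read.csv at that directory. A minimal sketch, assuming a placeholder path:

# Reading back a CSV directory written by df.write.csv; '/tmp/out_dir' is a placeholder.
# Spark treats every part-* file inside it as one logical dataset.
df2 = spark.read.csv('/tmp/out_dir', header=True, inferSchema=True)
df2.show()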