
How do I read a parquet in PySpark written from Spark?


I read a parquet file in the following way:

from pyspark.sql import SparkSession

# initialise sparkContext
spark = SparkSession.builder \
    .master('local') \
    .appName('myAppName') \
    .config('spark.executor.memory', '5gb') \
    .config("spark.cores.max", "6") \
    .getOrCreate()

sc = spark.sparkContext

# using SQLContext to read parquet file
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

# to read parquet file
df = sqlContext.read.parquet('path-to-file/commentClusters.parquet')


You can use the parquet method of the SparkSession reader to read parquet files directly. Like this:

df = spark.read.parquet("swift2d://xxxx.keystone/commentClusters.parquet")

That said, there is no practical difference between the parquet and load functions: spark.read.parquet(path) is shorthand for spark.read.format('parquet').load(path). The load function simply relies on the default source format (parquet, unless spark.sql.sources.default has been changed), so naming the format explicitly just removes any ambiguity about how the file is read. The schema itself is not inferred from the data; it is stored in the parquet file's own metadata.
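As a quick sanity check, here is a minimal sketch showing that both read paths return the same result (the file path is a placeholder reusing the name from the question, and the app name is made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master('local') \
    .appName('parquetReadSketch') \
    .getOrCreate()

# placeholder path, reusing the file name from the question
path = 'path-to-file/commentClusters.parquet'

# read.parquet is shorthand for read.format('parquet').load
df1 = spark.read.parquet(path)
df2 = spark.read.format('parquet').load(path)

# the schema comes from the parquet file's own metadata,
# so both DataFrames carry identical schemas
assert df1.schema == df2.schema
df1.printSchema()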