Create Spark DataFrame. Can not infer schema for type: <type 'float'>

SparkSession.createDataFrame, which is used under the hood, requires an RDD / list of Row/tuple/list/dict* or a pandas.DataFrame, unless a schema with a DataType is provided. Try converting each float to a tuple like this:

myFloatRdd.map(lambda x: (x, )).toDF()

or even better:

from pyspark.sql import Row

row = Row("val")  # Or some other column name
myFloatRdd.map(row).toDF()

To create a DataFrame from a list of scalars, you'll have to use SparkSession.createDataFrame directly and provide a schema***:

from pyspark.sql.types import FloatType

df = spark.createDataFrame([1.0, 2.0, 3.0], FloatType())
df.show()

## +-----+
## |value|
## +-----+
## |  1.0|
## |  2.0|
## |  3.0|
## +-----+

but for a simple range it would be better to use SparkSession.range:

from pyspark.sql.functions import col

spark.range(1, 4).select(col("id").cast("double"))
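For reference, showing that result should print something like the output below; the alias("id") is my own addition to pin the column name, since the original snippet leaves the naming to the cast:

spark.range(1, 4).select(col("id").cast("double").alias("id")).show()
## +---+
## | id|
## +---+
## |1.0|
## |2.0|
## |3.0|
## +---+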

* No longer supported.

** Spark SQL also provides limited support for schema inference on Python objects exposing __dict__ (see the sketch after these notes).

*** Supported only in Spark 2.0 or later.
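As a minimal sketch of the __dict__ note above, assuming a plain Python class whose instances carry only simple attributes (the Point class here is made up for illustration), the fields can be picked up from each object's __dict__:

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

# Fields x and y are inferred from each instance's __dict__;
# support is limited, so treat this as a sketch rather than a guarantee.
spark.createDataFrame([Point(1.0, 2.0), Point(3.0, 4.0)]).show()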


from pyspark.sql.types import IntegerType, Row

mylist = [1, 2, 3, 4, None]
l = map(lambda x: Row(x), mylist)
# notice the parens after the type name
df = spark.createDataFrame(l, ["id"])
df.where(df.id.isNull() == False).show()

Basically, you need to wrap each int in a Row(); then the schema can be applied.
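Running the snippet above should filter out the None entry and print something close to this:

## +---+
## | id|
## +---+
## |  1|
## |  2|
## |  3|
## |  4|
## +---+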


Inferring the Schema Using Reflection
from pyspark.sql import Row

# spark - sparkSession
sc = spark.sparkContext

# Load a text file and convert each line to a Row.
orders = sc.textFile("/practicedata/orders")

# Split on delimiter
parts = orders.map(lambda l: l.split(","))

# Convert to Row
orders_struct = parts.map(lambda p: Row(order_id=int(p[0]),
                                        order_date=p[1],
                                        customer_id=p[2],
                                        order_status=p[3]))

for i in orders_struct.take(5):
    print(i)

# Convert the RDD to a DataFrame
orders_df = spark.createDataFrame(orders_struct)
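If the /practicedata/orders path isn't available in your environment, the same reflection-based inference can be tried on a small in-memory sample; the two lines below are made-up data, and sc and Row are reused from the snippet above:

sample = ["1,2013-07-25 00:00:00.0,11599,CLOSED",
          "2,2013-07-25 00:00:00.0,256,PENDING_PAYMENT"]
parts = sc.parallelize(sample).map(lambda l: l.split(","))
orders_struct = parts.map(lambda p: Row(order_id=int(p[0]),
                                        order_date=p[1],
                                        customer_id=p[2],
                                        order_status=p[3]))
spark.createDataFrame(orders_struct).printSchema()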
Programmatically Specifying the Schema
from pyspark.sql.types import StructType, StructField, StringType

# spark - sparkSession
sc = spark.sparkContext

# Load a text file.
orders = sc.textFile("/practicedata/orders")

# Split on delimiter
parts = orders.map(lambda l: l.split(","))

# Convert to tuple
orders_struct = parts.map(lambda p: (p[0], p[1], p[2], p[3].strip()))

# The schema is encoded in a string.
schemaString = "order_id order_date customer_id status"
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

# Convert the RDD to a DataFrame using the explicit schema
ordersDf = spark.createDataFrame(orders_struct, schema)
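As with the reflection example, if the HDFS path is unavailable you can exercise the explicit schema on the made-up sample from the sketch above; since every field is declared as StringType, the printed schema should be all strings:

parts = sc.parallelize(sample).map(lambda l: l.split(","))
orders_struct = parts.map(lambda p: (p[0], p[1], p[2], p[3].strip()))
spark.createDataFrame(orders_struct, schema).printSchema()
## root
##  |-- order_id: string (nullable = true)
##  |-- order_date: string (nullable = true)
##  |-- customer_id: string (nullable = true)
##  |-- status: string (nullable = true)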