Passing Array to Spark Lit function
List comprehension inside Spark's array
from pyspark.sql import functions as F

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df = spark.createDataFrame([['a b c d e f g h i j '],], ['col1'])
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a]))
df.show(truncate=False)
df.printSchema()

# +--------------------+-------------------------------+
# |col1                |NewColumn                      |
# +--------------------+-------------------------------+
# |a b c d e f g h i j |[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]|
# +--------------------+-------------------------------+

# root
#  |-- col1: string (nullable = true)
#  |-- NewColumn: array (nullable = false)
#  |    |-- element: integer (containsNull = false)
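As an aside, newer PySpark versions (3.4+, if I remember correctly) let F.lit accept a Python list directly, so the comprehension is no longer needed there:

# Assumes PySpark 3.4+; on older versions F.lit rejects lists,
# and the F.array approach above is the way to go.
df = df.withColumn("NewColumn", F.lit(a))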
@pault commented (Python 2.7):
You can hide the loop using map:

df.withColumn("NewColumn", F.array(map(F.lit, a)))
@abegehr added the Python 3 version (in Python 3, map returns an iterator rather than a list, so it must be unpacked with *):

df.withColumn("NewColumn", F.array(*map(F.lit, a)))
Spark's udf
from pyspark.sql import types as T

# Defining the UDF
def arrayUdf():
    return a

callArrayUdf = F.udf(arrayUdf, T.ArrayType(T.IntegerType()))

# Calling the UDF
df = df.withColumn("NewColumn", callArrayUdf())
The output is the same.
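One subtle difference worth noting: UDF results are nullable by default, so I'd expect the UDF-built column's schema to differ slightly from the literal-array version:

# Expected schema for the UDF approach (note nullable = true,
# versus nullable = false with F.array of literals):
df.printSchema()
# root
#  |-- col1: string (nullable = true)
#  |-- NewColumn: array (nullable = true)
#  |    |-- element: integer (containsNull = true)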
In the Scala API, we can use the typedLit function to add Array or Map values as a column.
// Ref : https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.functions$
Here is the sample code to add an Array or Map as a column value.
import org.apache.spark.sql.functions.typedLit

val df1 = Seq((1, 0), (2, 3)).toDF("a", "b")

df1.withColumn("seq", typedLit(Seq(1, 2, 3)))
   .withColumn("map", typedLit(Map(1 -> 2)))
   .show(truncate = false)
// Output
+---+---+---------+--------+
|a  |b  |seq      |map     |
+---+---+---------+--------+
|1  |0  |[1, 2, 3]|[1 -> 2]|
|2  |3  |[1, 2, 3]|[1 -> 2]|
+---+---+---------+--------+
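For PySpark users: as far as I know there is no typedLit in the Python API, but F.array and F.create_map cover the same ground (a minimal sketch of the equivalent):

from pyspark.sql import functions as F

df1 = spark.createDataFrame([(1, 0), (2, 3)], ['a', 'b'])
df1 = (df1.withColumn("seq", F.array(F.lit(1), F.lit(2), F.lit(3)))
          .withColumn("map", F.create_map(F.lit(1), F.lit(2))))
df1.show(truncate=False)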
I hope this helps.