How to join on multiple columns in Pyspark?

python


You should use the & / | operators and be careful about operator precedence: == has lower precedence than bitwise & and |, so each comparison must be wrapped in parentheses:

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x3"))

df = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
df.show()

## +---+---+---+---+---+---+
## | x1| x2| x3| x1| x2| x3|
## +---+---+---+---+---+---+
## |  2|  b|3.0|  2|  b|0.0|
## +---+---+---+---+---+---+
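Note that with this style both x1 and x2 appear twice in the result. If you need to keep the join in this form, the duplicated columns can be disambiguated by referencing them through the parent DataFrames; a minimal sketch (the x3_right alias is purely illustrative, not part of the original answer):

# Reference columns through df1/df2 to avoid the ambiguous-reference error;
# "x3_right" is an illustrative alias for the right-hand x3 column.
df_clean = df.select(
    df1["x1"], df1["x2"], df1["x3"],
    df2["x3"].alias("x3_right"))
df_clean.show()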


An alternative approach would be:

df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x4"))

df = df1.join(df2, ['x1', 'x2'])
df.show()

which outputs:

+---+---+---+---+
| x1| x2| x3| x4|
+---+---+---+---+
|  2|  b|3.0|0.0|
+---+---+---+---+

The main advantage of this approach is that the columns on which the tables are joined are not duplicated in the output, which reduces the risk of errors such as org.apache.spark.sql.AnalysisException: Reference 'x1' is ambiguous, could be: x1#50L, x1#57L.
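As a quick sanity check, after the name-based join the key columns can be referenced by their plain names without triggering that error; a small sketch reusing the df from the example above:

# Only one x1/x2 column exists after a name-based join, so plain string
# references are unambiguous.
df.select("x1", "x2", "x4").show()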


Whenever the columns in the two tables have different names (say, in the example above, df2 has the columns y1, y2 and y4), you can use the following syntax:

df = df1.join(df2.withColumnRenamed('y1','x1').withColumnRenamed('y2','x2'), ['x1','x2'])
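To make that concrete, here is a self-contained sketch; the y-named df2 below is my own construction, mirroring the data from the earlier examples:

df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("y1", "y2", "y4"))

# Rename the join keys to match df1, then join on the shared column names.
df = df1.join(
    df2.withColumnRenamed('y1', 'x1').withColumnRenamed('y2', 'x2'),
    ['x1', 'x2'])
df.show()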


Another way to specify a multi-column join is to pass a list of equality conditions to the on parameter:

test = numeric.join(Ref,
    on=[
        numeric.ID == Ref.ID,
        numeric.TYPE == Ref.TYPE,
        numeric.STATUS == Ref.STATUS
    ], how='inner')
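Because this condition compares same-named columns from both DataFrames, each key appears twice in the result. One way to drop the duplicates afterwards (a sketch, assuming the numeric and Ref DataFrames above):

# Drop the duplicate key columns that come from Ref, keeping numeric's copies.
test = test.drop(Ref.ID).drop(Ref.TYPE).drop(Ref.STATUS)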