How to enable Apache Arrow in PySpark



We made a change in 0.15.0 that makes the default behavior of pyarrow incompatible with older versions of Arrow in Java -- your Spark environment seems to be using an older version.

Your options are

  • Set the environment variable ARROW_PRE_0_15_IPC_FORMAT=1 from where you are using Python
  • Downgrade to pyarrow < 0.15.0 for now.
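For the first option, the variable has to reach the Python worker processes on the executors, not just the driver's shell. One way to propagate it is through Spark's standard `spark.executorEnv.*` (and, on YARN, `spark.yarn.appMasterEnv.*`) configuration keys — a sketch, assuming a YARN deployment; `my_job.py` is a placeholder for your own script:

    # Set the flag for the driver's own Python process
    export ARROW_PRE_0_15_IPC_FORMAT=1

    # Propagate it to the executor-side Python workers via Spark config
    spark-submit \
      --conf spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT=1 \
      --conf spark.yarn.appMasterEnv.ARROW_PRE_0_15_IPC_FORMAT=1 \
      my_job.py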


I hit this when calling a pandas UDF on my Spark 2.4.4 cluster with pyarrow==0.15.0, and I struggled to set the ARROW_PRE_0_15_IPC_FORMAT=1 flag successfully, as mentioned above.

I set the flag (1) on the command line via export on the head node, (2) via spark-env.sh and yarn-env.sh on all nodes in the cluster, and (3) in the PySpark code itself from my script on the head node. None of these actually set the flag inside the UDF, for reasons I couldn't determine.
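One likely reason attempt (3) falls short: environment variables are per-process, and an `os.environ` change in the driver only affects child processes spawned from it afterwards — the executors' Python workers are separate processes, often on other machines. A minimal, Spark-free sketch of that scoping (the probe one-liner is purely illustrative):

    import os
    import subprocess
    import sys

    # Start from a clean slate so the demonstration is deterministic
    os.environ.pop("ARROW_PRE_0_15_IPC_FORMAT", None)

    probe = 'import os; print(os.environ.get("ARROW_PRE_0_15_IPC_FORMAT", "unset"))'

    # A child spawned before the variable is set never sees it
    before = subprocess.run([sys.executable, "-c", probe],
                            capture_output=True, text=True)
    print(before.stdout.strip())  # unset

    # The change only reaches children spawned from *this* process afterwards
    os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
    after = subprocess.run([sys.executable, "-c", probe],
                           capture_output=True, text=True)
    print(after.stdout.strip())   # 1

This is why setting the variable inside the UDF body (below) works: that code runs in the worker process itself.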

The simplest solution I found was to call this inside the UDF:

    from pyspark.sql.functions import pandas_udf, PandasUDFType

    @pandas_udf("integer", PandasUDFType.SCALAR)
    def foo(*args):
        import os
        # Set the flag in the worker process, where the UDF actually executes
        os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
        #...

Hopefully this saves someone else several hours.