
How to use azure-sqldb-spark connector in pyspark


The Spark connector currently (as of March 2019) only supports the Scala API (as documented here). So if you are working in a notebook, you can do all the preprocessing in Python and finally register the DataFrame as a temp table, e.g.:

df.createOrReplaceTempView('testbulk')

and then do the final step in Scala:

%scala
// configs ...
spark.table("testbulk").bulkCopyToSqlDB(bulkCopyConfig)
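
For completeness, here is a rough sketch of what that Scala cell could look like in full. The server name, database, credentials and target table below are placeholders you would replace with your own values; the Config map and the bulkCopyToSqlDB extension come from the azure-sqldb-spark library, which must be attached to the cluster:

%scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Placeholder connection settings -- replace with your own server,
// database, credentials and target table.
val bulkCopyConfig = Config(Map(
  "url"               -> "myserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "user"              -> "username",
  "password"          -> "**********",
  "dbTable"           -> "dbo.MyTable",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

// The temp view registered from the Python cell is visible here via spark.table.
spark.table("testbulk").bulkCopyToSqlDB(bulkCopyConfig)

The importing of com.microsoft.azure.sqldb.spark.connect._ is what adds the bulkCopyToSqlDB method to the DataFrame, so the Scala cell only needs to look up the temp view and write it out.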