
How do I run graphx with Python / pyspark?


It looks like the Python bindings for GraphX have been delayed at least until Spark 1.4, then 1.5, and now indefinitely; they are waiting behind the Java API.

You can track the status at SPARK-3789: Python bindings for GraphX (ASF JIRA).


You should look at GraphFrames (https://github.com/graphframes/graphframes), which wraps GraphX algorithms under the DataFrames API and provides a Python interface.

Here is a quick example from https://graphframes.github.io/graphframes/docs/_site/quick-start.html, with slight modifications so that it works.

First, start pyspark with the graphframes package loaded:

pyspark --packages graphframes:graphframes:0.1.0-spark1.6
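
The same --packages flag also works with spark-submit if you prefer to run a standalone script instead of the interactive shell (the script name below is just a placeholder):

spark-submit --packages graphframes:graphframes:0.1.0-spark1.6 my_graph_job.py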

Python code:

from graphframes import *

# Create a Vertex DataFrame with unique ID column "id"
v = sqlContext.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])

# Create a GraphFrame
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
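
Because GraphFrames wraps GraphX algorithms, the same g object exposes more of them from Python. A minimal sketch, assuming the g GraphFrame built above and a GraphFrames version that ships these methods (newer releases may require a checkpoint directory for connectedComponents):

# Motif finding: all pairs of vertices that follow each other.
g.find("(a)-[e1]->(b); (b)-[e2]->(a)").show()

# Breadth-first search from Alice to any user younger than 32.
g.bfs("name = 'Alice'", "age < 32").show()

# Connected components (newer GraphFrames versions need a checkpoint dir,
# e.g. sc.setCheckpointDir("/tmp/graphframes-checkpoints"), set beforehand).
g.connectedComponents().show()

# Triangle count per vertex.
g.triangleCount().show()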


GraphX 0.9.0 doesn't have a Python API yet. It's expected in upcoming releases.