
Python Multiprocessing with Distributed Cluster


If you want a very easy solution, there isn't one.

However, there is a solution that has the multiprocessing interface -- pathos -- which has the ability to establish connections to remote servers through a parallel map, and to do multiprocessing.
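For context, pathos's pools mirror the standard-library multiprocessing.Pool API, so a local parallel map looks the same either way. Here is a minimal sketch using only the standard library (the `square` and `parallel_squares` names are just for illustration; the analogous pathos class would be a drop-in replacement for `Pool` here):

```python
from multiprocessing import Pool

def square(x):
    return x ** 2

def parallel_squares(xs, workers=4):
    # same map-over-a-pool pattern that pathos exposes, here with the stdlib
    with Pool(workers) as pool:
        return pool.map(square, xs)

if __name__ == '__main__':
    print(parallel_squares(range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```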

If you want an ssh-tunneled connection, you can do that… or, if you are ok with a less secure method, you can do that too.

>>> # establish a ssh tunnel
>>> from pathos.core import connect
>>> tunnel = connect('remote.computer.com', port=1234)
>>> tunnel
Tunnel('-q -N -L55774:remote.computer.com:1234 remote.computer.com')
>>> tunnel._lport
55774
>>> tunnel._rport
1234
>>>
>>> # define some function to run in parallel
>>> def sleepy_squared(x):
...   from time import sleep
...   sleep(1.0)
...   return x**2
...
>>> # build a pool of servers and execute the parallel map
>>> from pathos.pp import ParallelPythonPool as Pool
>>> p = Pool(8, servers=('localhost:55774',))
>>> p.servers
('localhost:55774',)
>>> x = range(10)
>>> y = p.map(sleepy_squared, x)
>>> y
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Or, instead, you could configure a direct connection (no ssh):

>>> p = Pool(8, servers=('remote.computer.com:5678',))
>>> # use an asynchronous parallel map
>>> res = p.amap(sleepy_squared, x)
>>> res.get()
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

It's all a bit finicky: for the remote server to work, you have to start a server running on remote.computer.com at the specified port beforehand, and you have to make sure that the settings on both your localhost and the remote host allow either the direct connection or the ssh-tunneled connection. Plus, you need the same version of pathos and of the pathos fork of pp running on each host. Also, for ssh, you need ssh-agent running to allow password-less login with ssh.

But then, hopefully it all works… if your function code can be transported over to the remote host with dill.source.importable.

FYI, pathos is long overdue for a release; there are a few bugs and interface changes that need to be resolved before a new stable release is cut.


I'd suggest taking a look at Ray, which aims to do exactly that.

Ray uses the same syntax to parallelize code in the single machine multicore setting as it does in the distributed setting. If you're willing to use a for loop instead of a map call, then your example would look like the following.

import ray
import time

ray.init()

@ray.remote
def function(x):
    time.sleep(0.1)
    return x

arglist = [1, 2, 3, 4]

result_ids = [function.remote(x) for x in arglist]
resultlist = ray.get(result_ids)

That will run four tasks in parallel using however many cores you have locally. To run the same example on a cluster, the only line that would change would be the call to ray.init(). The relevant documentation can be found here.
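The submit-then-gather shape of that example (kick off every task before collecting any result) is what buys the parallelism. The same pattern can be sketched locally with the standard library's concurrent.futures (`run_all` is a hypothetical helper name, not part of Ray):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def function(x):
    time.sleep(0.1)
    return x

def run_all(arglist, workers=4):
    # submit everything first (like function.remote), then gather (like ray.get)
    with ProcessPoolExecutor(max_workers=workers) as ex:
        futures = [ex.submit(function, x) for x in arglist]
        return [f.result() for f in futures]

if __name__ == '__main__':
    print(run_all([1, 2, 3, 4]))  # [1, 2, 3, 4]
```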

Note that I'm helping to develop Ray.


A little late to the party here, but since I was also looking for a similar solution, and this question is still not marked as answered, I thought I would contribute my findings.

I ended up using SCOOP. It provides a parallel map implementation that can work across multiple cores, across multiple hosts. It can also fall back to Python's serial map function if desired during invocation.

From SCOOP's introduction page, it cites the following features:

SCOOP features and advantages over futures, multiprocessing and similar modules are as follows:

  • Harness the power of multiple computers over network;
  • Ability to spawn multiple tasks inside a task;
  • API compatible with PEP-3148;
  • Parallelizing serial code with only minor modifications;
  • Efficient load-balancing.

It does have some quirks (functions/classes must be pickleable), and the setup to get things running smoothly across multiple hosts can be tedious if they don't all share the same filesystem schema, but overall I'm quite happy with the results. For our purposes, doing quite a bit of Numpy & Cython, it provides excellent performance.
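To illustrate the serial fallback mentioned above, here is a sketch of how a script can use SCOOP's map when launched under SCOOP and fall back to the built-in map otherwise (the try/except fallback is my own convention, not something SCOOP requires; under SCOOP you launch with `python -m scoop your_script.py`):

```python
try:
    # SCOOP's parallel map, distributed across hosts when launched via `python -m scoop`
    from scoop import futures
    parallel_map = futures.map
except ImportError:
    # serial fallback: plain built-in map
    parallel_map = map

def square(x):
    return x ** 2

if __name__ == '__main__':
    results = list(parallel_map(square, range(10)))
    print(results)
```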

Hope this helps.