How can I use threading in Python?


Since this question was asked in 2010, there has been real simplification in how to do simple multithreading in Python with map and pool.

The code below comes from an article/blog post that you should definitely check out (no affiliation) - Parallelism in one line: A Better Model for Day to Day Threading Tasks. I'll summarize below - it ends up being just a few lines of code:

    from multiprocessing.dummy import Pool as ThreadPool

    pool = ThreadPool(4)
    results = pool.map(my_function, my_array)

Which is the multithreaded version of:

    results = []
    for item in my_array:
        results.append(my_function(item))

Description

Map is a cool little function, and the key to easily injecting parallelism into your Python code. For those unfamiliar, map is something lifted from functional languages like Lisp. It is a function which maps another function over a sequence.

Map handles the iteration over the sequence for us, applies the function, and stores all of the results in a handy list at the end.

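For instance, a toy sketch (the square function and the inputs are made up for illustration; the list() call is needed on Python 3, where map is lazy):

    def square(x):
        return x * x

    # map applies square to every item and yields the results in order.
    results = list(map(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]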


Implementation

Parallel versions of the map function are provided by two libraries: multiprocessing, and its little-known but equally fantastic stepchild, multiprocessing.dummy.

multiprocessing.dummy is exactly the same as the multiprocessing module, but uses threads instead (an important distinction - use multiple processes for CPU-intensive tasks, and threads for (and during) I/O):

multiprocessing.dummy replicates the API of multiprocessing, but is no more than a wrapper around the threading module.
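In practice, that means a simple pool.map program can switch between processes and threads with a one-line change of import. A two-line sketch to make the point (not part of the quoted example):

    from multiprocessing import Pool as ProcessPool       # processes: CPU-bound work
    from multiprocessing.dummy import Pool as ThreadPool  # threads: I/O-bound work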

    import urllib2
    from multiprocessing.dummy import Pool as ThreadPool

    urls = [
        'http://www.python.org',
        'http://www.python.org/about/',
        'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
        'http://www.python.org/doc/',
        'http://www.python.org/download/',
        'http://www.python.org/getit/',
        'http://www.python.org/community/',
        'https://wiki.python.org/moin/',
    ]

    # Make the Pool of workers
    pool = ThreadPool(4)

    # Open the URLs in their own threads
    # and return the results
    results = pool.map(urllib2.urlopen, urls)

    # Close the pool and wait for the work to finish
    pool.close()
    pool.join()

And the timing results:

    Single thread:  14.4 seconds
           4 Pool:   3.1 seconds
           8 Pool:   1.4 seconds
          13 Pool:   1.3 seconds
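On Python 3, urllib2 no longer exists; urllib.request.urlopen is the closest equivalent. A sketch of the same fetch, untimed and assuming the same URL list as above:

    import urllib.request
    from multiprocessing.dummy import Pool as ThreadPool

    urls = [
        'http://www.python.org',
        'http://www.python.org/about/',
        'http://www.python.org/doc/',
        # ... rest of the list as above
    ]

    pool = ThreadPool(4)
    results = pool.map(urllib.request.urlopen, urls)
    pool.close()
    pool.join()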

Passing multiple arguments (works like this only in Python 3.3 and later):

To pass multiple arrays:

    results = pool.starmap(function, zip(list_a, list_b))

Or to pass a constant and an array:

    results = pool.starmap(function, zip(itertools.repeat(constant), list_a))

If you are using an earlier version of Python, you can pass multiple arguments via a workaround, such as a small wrapper function that unpacks a single tuple of arguments.

(Thanks to user136036 for the helpful comment.)
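Putting those two snippets in context, here is a minimal self-contained sketch; the add function, the inputs, and the pool size are made up for illustration:

    import itertools
    from multiprocessing.dummy import Pool as ThreadPool

    def add(a, b):  # hypothetical two-argument function
        return a + b

    list_a = [1, 2, 3]
    list_b = [10, 20, 30]

    pool = ThreadPool(4)
    # Two arrays, paired element-wise:
    sums = pool.starmap(add, zip(list_a, list_b))                    # [11, 22, 33]
    # A constant first argument combined with each element of list_a:
    shifted = pool.starmap(add, zip(itertools.repeat(100), list_a))  # [101, 102, 103]
    pool.close()
    pool.join()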


Here's a simple example: you need to try a few alternative URLs and return the contents of the first one to respond.

    import Queue
    import threading
    import urllib2

    # Called by each thread
    def get_url(q, url):
        q.put(urllib2.urlopen(url).read())

    theurls = ["http://google.com", "http://yahoo.com"]

    q = Queue.Queue()

    for u in theurls:
        t = threading.Thread(target=get_url, args=(q, u))
        t.daemon = True
        t.start()

    s = q.get()
    print s

This is a case where threading is used as a simple optimization: each subthread waits for a URL to resolve and respond, then puts its contents on the queue. Each thread is a daemon (it won't keep the process up if the main thread ends -- that's more common than not). The main thread starts all subthreads, does a get on the queue to wait until one of them has done a put, then emits the results and terminates (which takes down any subthreads that might still be running, since they're daemon threads).

Proper use of threads in Python is invariably connected to I/O operations (since CPython doesn't use multiple cores to run CPU-bound tasks anyway, the only reason for threading is not blocking the process while there's a wait for some I/O). Queues are almost invariably the best way to farm out work to threads and/or collect the work's results, by the way, and they're intrinsically threadsafe, so they save you from worrying about locks, conditions, events, semaphores, and other inter-thread coordination/communication concepts.
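To make that farm-out-work pattern concrete, here is a minimal sketch (Python 3, where the module is named queue rather than Queue; the squaring stand-in for real work and the counts are made up):

    import queue
    import threading

    def worker(tasks, results):
        # Pull items until the None sentinel arrives, pushing results as we go.
        while True:
            item = tasks.get()
            if item is None:
                break
            results.put(item * item)  # stand-in for real work

    tasks = queue.Queue()
    results = queue.Queue()

    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(4)]
    for t in threads:
        t.start()

    for n in range(10):
        tasks.put(n)
    for _ in threads:   # one sentinel per worker so each exits cleanly
        tasks.put(None)
    for t in threads:
        t.join()

    print(sorted(results.get() for _ in range(10)))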


NOTE: For actual parallelization in Python, you should use the multiprocessing module to fork multiple processes that execute in parallel (due to the global interpreter lock, Python threads provide interleaving, but they are in fact executed serially, not in parallel, and are only useful when interleaving I/O operations).

However, if you are merely looking for interleaving (or are doing I/O operations that can be parallelized despite the global interpreter lock), then the threading module is the place to start. As a really simple example, let's consider the problem of summing a large range by summing subranges in parallel:

    import threading

    class SummingThread(threading.Thread):
        def __init__(self, low, high):
            super(SummingThread, self).__init__()
            self.low = low
            self.high = high
            self.total = 0

        def run(self):
            for i in range(self.low, self.high):
                self.total += i

    thread1 = SummingThread(0, 500000)
    thread2 = SummingThread(500000, 1000000)
    thread1.start()  # This actually causes the thread to run
    thread2.start()
    thread1.join()   # This waits until the thread has completed
    thread2.join()
    # At this point, both threads have completed
    result = thread1.total + thread2.total
    print result

Note that the above is a very stupid example, as it does absolutely no I/O and will be executed serially albeit interleaved (with the added overhead of context switching) in CPython due to the global interpreter lock.
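For contrast, the same computation with multiprocessing does use multiple cores. A sketch in Python 3 syntax (the __main__ guard is needed so worker processes can import the module safely):

    import multiprocessing

    def sum_range(bounds):
        low, high = bounds
        return sum(range(low, high))

    if __name__ == '__main__':
        with multiprocessing.Pool(2) as pool:
            partials = pool.map(sum_range, [(0, 500000), (500000, 1000000)])
        print(sum(partials))  # same total as before, computed in parallel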