Greenlet Vs. Threads


Greenlets provide concurrency but not parallelism. Concurrency is when code can run independently of other code. Parallelism is the execution of concurrent code simultaneously. Parallelism is particularly useful when there's a lot of work to be done in userspace, which is typically CPU-heavy stuff. Concurrency is useful for breaking problems apart, enabling the different parts to be scheduled and managed more easily.
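
To make the distinction concrete, here is a minimal sketch using the raw greenlet package (the task names are just illustrative): the two tasks interleave at explicit switch points, but only one of them ever runs at a time.

# Minimal sketch: two greenlets interleave at explicit switch()
# points but never run simultaneously -- concurrency, not parallelism.
from greenlet import greenlet

def task_a():
    print('A1')
    gb.switch()      # hand control to task_b
    print('A2')
    gb.switch()

def task_b():
    print('B1')
    ga.switch()      # hand control back to task_a
    print('B2')

ga = greenlet(task_a)
gb = greenlet(task_b)
ga.switch()          # prints A1, B1, A2, B2 -- strictly interleaved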

Greenlets really shine in network programming, where interactions with one socket can occur independently of interactions with other sockets. This is a classic example of concurrency. Because each greenlet runs in its own context, you can continue to use synchronous APIs without threading. This is good because threads are very expensive in terms of virtual memory and kernel overhead, so the concurrency you can achieve with threads is significantly lower. Additionally, threading in Python is more expensive and more limited than usual due to the GIL. The usual alternatives for concurrency are projects like Twisted, libevent, libuv, node.js, etc., where all your code shares the same execution context and registers event handlers.
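
As an illustrative sketch (the host list is arbitrary), this is what that synchronous style looks like with gevent's socket module: each lookup is written as a plain blocking call, and gevent parks the calling greenlet while the reply is in flight.

# Sketch: one greenlet per lookup, each written against a
# blocking-style API; gevent schedules them cooperatively.
import gevent
from gevent import socket

HOSTS = ['www.example.com', 'www.python.org', 'www.wikipedia.org']  # arbitrary

def resolve(host):
    # Looks synchronous; the greenlet yields while waiting on the network.
    print(host, socket.gethostbyname(host))

gevent.joinall([gevent.spawn(resolve, h) for h in HOSTS])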

It's an excellent idea to use greenlets (with appropriate networking support, such as through gevent) for writing a proxy, since your handling of each request can execute independently and should be written as such; a minimal sketch follows.
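
Here is one way that idea might look, assuming a fixed upstream host and listen port (both hypothetical): each client connection is handled in its own greenlet, and two more greenlets pipe bytes in each direction.

# Hedged sketch of a TCP forwarding proxy with gevent; the upstream
# address and listen port are assumptions for illustration.
import gevent
from gevent import socket
from gevent.server import StreamServer

UPSTREAM = ('example.com', 80)   # hypothetical upstream server

def pipe(src, dst):
    # Copy bytes one way until the source side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client, addr):
    # Each accepted connection runs in its own greenlet.
    upstream = socket.create_connection(UPSTREAM)
    gevent.joinall([
        gevent.spawn(pipe, client, upstream),
        gevent.spawn(pipe, upstream, client),
    ])

StreamServer(('127.0.0.1', 8888), handle).serve_forever()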

Greenlets provide concurrency for the reasons I gave earlier. Concurrency is not parallelism. By concealing event registration and performing scheduling for you on calls that would normally block the current thread, projects like gevent expose this concurrency without requiring a change to an asynchronous API, and at significantly less cost to your system.
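
For instance, a minimal sketch of how gevent conceals that event registration (the URL is arbitrary): after monkey-patching, ordinary blocking calls become cooperative yield points.

# Sketch: monkey-patching swaps blocking primitives for cooperative
# ones, so unchanged synchronous code yields at what were blocking calls.
from gevent import monkey
monkey.patch_all()  # must run before other network imports

import gevent
from urllib.request import urlopen

def fetch(url):
    # Written as a plain blocking read; gevent schedules around it.
    return urlopen(url).read()

jobs = [gevent.spawn(fetch, 'http://example.com') for _ in range(3)]  # arbitrary URL
gevent.joinall(jobs)
print([len(j.value) for j in jobs])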


To correct @TemporalBeing's answer above: greenlets are not "faster" than threads, and spawning 60,000 threads to solve a concurrency problem is an incorrect programming technique; a small pool of threads is appropriate instead. Here is a more reasonable comparison (from my reddit post, in response to people citing this SO post).

import gevent
from gevent import socket as gsock
import socket as sock
import threading
from datetime import datetime


def timeit(fn, URLS):
    t1 = datetime.now()
    fn()
    t2 = datetime.now()
    print(
        "%s / %d hostnames, %s seconds" % (
            fn.__name__,
            len(URLS),
            (t2 - t1).total_seconds()
        )
    )


def run_gevent_without_a_timeout():
    # one greenlet per hostname, all resolved concurrently
    ip_numbers = []

    def greenlet(domain_name):
        ip_numbers.append(gsock.gethostbyname(domain_name))

    jobs = [gevent.spawn(greenlet, domain_name) for domain_name in URLS]
    gevent.joinall(jobs)
    assert len(ip_numbers) == len(URLS)


def run_threads_correctly():
    # a fixed pool of 50 worker threads draining a shared queue
    ip_numbers = []

    def process():
        while queue:
            try:
                domain_name = queue.pop()
            except IndexError:
                pass
            else:
                ip_numbers.append(sock.gethostbyname(domain_name))

    threads = [threading.Thread(target=process) for i in range(50)]

    queue = list(URLS)
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert len(ip_numbers) == len(URLS)


URLS_base = ['www.google.com', 'www.example.com', 'www.python.org',
             'www.yahoo.com', 'www.ubc.ca', 'www.wikipedia.org']

for NUM in (5, 50, 500, 5000, 10000):
    URLS = []

    for _ in range(NUM):
        for url in URLS_base:
            URLS.append(url)

    print("--------------------")
    timeit(run_gevent_without_a_timeout, URLS)
    timeit(run_threads_correctly, URLS)

Here are some results:

--------------------
run_gevent_without_a_timeout / 30 hostnames, 0.044888 seconds
run_threads_correctly / 30 hostnames, 0.019389 seconds
--------------------
run_gevent_without_a_timeout / 300 hostnames, 0.186045 seconds
run_threads_correctly / 300 hostnames, 0.153808 seconds
--------------------
run_gevent_without_a_timeout / 3000 hostnames, 1.834089 seconds
run_threads_correctly / 3000 hostnames, 1.569523 seconds
--------------------
run_gevent_without_a_timeout / 30000 hostnames, 19.030259 seconds
run_threads_correctly / 30000 hostnames, 15.163603 seconds
--------------------
run_gevent_without_a_timeout / 60000 hostnames, 35.770358 seconds
run_threads_correctly / 60000 hostnames, 29.864083 seconds

The misunderstanding everyone has about non-blocking IO with Python is the belief that the Python interpreter can attend to the work of retrieving results from sockets at large scale faster than the network connections themselves can return IO. While this is certainly true in some cases, it is not true nearly as often as people think, because the Python interpreter is really, really slow. In my blog post here, I illustrate some graphical profiles showing that even for very simple things, if you are dealing with crisp, fast network access to things like databases or DNS servers, those services can come back a lot faster than the Python code can attend to many thousands of those connections.


Taking @Max's answer and extending it to show how it scales, you can see the difference. I achieved this by changing the URLs to be filled as follows:

URLS_base = ['www.google.com', 'www.example.com', 'www.python.org',
             'www.yahoo.com', 'www.ubc.ca', 'www.wikipedia.org']

URLS = []
for _ in range(10000):
    for url in URLS_base:
        URLS.append(url)

I had to drop the multiprocessing version, as it fell over before I reached 500 iterations; but at 10,000 iterations:

Using gevent it took: 3.756914
-----------
Using multi-threading it took: 15.797028

So you can see there is a significant difference in I/O performance when using gevent.