How do you kill Futures once they have started?

It's kind of painful. Essentially, your worker threads have to finish before your main thread can exit; you cannot exit unless they do. The typical workaround is to have some global state that each thread can check to determine whether it should keep doing work or stop.

Here's the quote explaining why. In essence, if threads were still running when the interpreter exits, bad things could happen.

Here's a working example. Note that Ctrl+C takes at most 1 second to propagate because of the sleep duration of the child threads.

#!/usr/bin/env python
from __future__ import print_function
import concurrent.futures
import time
import sys

quit = False

def wait_a_bit(name):
    while not quit:
        print("{n} is doing work...".format(n=name))
        time.sleep(1)

def setup():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    future1 = executor.submit(wait_a_bit, "Jack")
    future2 = executor.submit(wait_a_bit, "Jill")

    # main thread must be doing "work" to be able to catch a Ctrl+C
    # http://www.luke.maurits.id.au/blog/post/threads-and-signals-in-python.html
    while not (future1.done() and future2.done()):
        time.sleep(1)

if __name__ == "__main__":
    try:
        setup()
    except KeyboardInterrupt:
        quit = True


I encountered this, but the issue I had was that many futures (tens of thousands) would be waiting to run, and just pressing Ctrl-C left them waiting rather than actually exiting. I was using concurrent.futures.wait to run a progress loop and needed to add a try ... except KeyboardInterrupt to handle cancelling unfinished Futures.

POLL_INTERVAL = 5
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    futures = [pool.submit(do_work, arg) for arg in large_set_to_do_work_over]
    # next line returns instantly
    done, not_done = concurrent.futures.wait(futures, timeout=0)
    try:
        while not_done:
            # next line 'sleeps' this main thread, letting the thread pool run
            freshly_done, not_done = concurrent.futures.wait(not_done, timeout=POLL_INTERVAL)
            done |= freshly_done
            # more polling stats calculated here and printed every POLL_INTERVAL seconds...
    except KeyboardInterrupt:
        # only futures that are not done will prevent exiting
        for future in not_done:
            # cancel() returns False if it's already done or currently running,
            # and True if it was able to cancel it; we don't need that return value
            _ = future.cancel()
        # wait for running futures that the above for loop couldn't cancel (note timeout)
        _ = concurrent.futures.wait(not_done, timeout=None)

If you're not interested in keeping exact track of what got done and what didn't (i.e. don't want a progress loop), you can replace the first wait call (the one with timeout=0) with not_done = futures and still leave the while not_done: logic.
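That simplification might look like the sketch below (do_work, the argument set, and the poll interval are stand-ins for whatever your real code uses; the interval is shortened here so the example finishes quickly):

```python
import concurrent.futures
import time

POLL_INTERVAL = 0.5  # shortened stand-in; the original uses 5 seconds

def do_work(arg):
    # stand-in workload; the real do_work is whatever you submit
    time.sleep(0.1)
    return arg * 2

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(do_work, arg) for arg in range(10)]
    not_done = futures  # no timeout=0 wait: treat everything as not-done initially
    try:
        while not_done:
            # block for up to POLL_INTERVAL, then loop; no progress stats are kept
            _, not_done = concurrent.futures.wait(not_done, timeout=POLL_INTERVAL)
    except KeyboardInterrupt:
        for future in not_done:
            future.cancel()
        concurrent.futures.wait(not_done, timeout=None)
```

The loop exits as soon as wait reports an empty not_done set, so no separate done-tracking is needed.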

The for future in not_done: cancel loop could instead act on that return value (or be written as a comprehension), but waiting on futures that are done or cancelled isn't really waiting: it returns instantly. The last wait with timeout=None ensures that the pool's running jobs really do finish.
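As a sketch of that variation, using the return value lets you count how many queued futures were actually cancelled versus left running (slow() here is an illustrative stand-in workload):

```python
import concurrent.futures
import time

def slow(x):
    # stand-in workload so some futures are still queued when we cancel
    time.sleep(0.2)
    return x

pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
futures = [pool.submit(slow, i) for i in range(20)]

# cancel() returns True only for futures still waiting in the queue,
# so summing the results counts the successfully cancelled ones
cancelled = sum(f.cancel() for f in futures)
still_running = len(futures) - cancelled

# returns almost instantly for the cancelled ones; blocks only on running jobs
concurrent.futures.wait(futures, timeout=None)
pool.shutdown()
```

With only two workers, most of the twenty futures are still queued when cancel() runs, so the count is high; the final wait then only blocks on the handful that had already started.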

Again, this only works correctly if the do_work that's being called actually, eventually returns within a reasonable amount of time. That was fine for me; in fact, I want to be sure that if do_work gets started, it runs to completion. If do_work is 'endless', then you'll need something like cdosborn's answer that uses a variable visible to all the threads, signaling them to stop themselves.
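A sketch of that shared-flag idea using threading.Event, which is the idiomatic alternative to a bare global boolean (the names and the short sleeps here are illustrative):

```python
import concurrent.futures
import threading
import time

stop = threading.Event()

def endless_worker(name):
    # loop until the main thread signals shutdown
    while not stop.is_set():
        time.sleep(0.1)  # stand-in for one unit of real work
    return "{} stopped".format(name)

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(endless_worker, n) for n in ("Jack", "Jill")]
    try:
        time.sleep(0.3)  # stand-in for the main thread's polling loop
    finally:
        # on Ctrl+C (or normal exit) tell every worker to finish up;
        # the with-block then joins the pool's threads cleanly
        stop.set()

results = [f.result() for f in futures]
```

Unlike a module-level flag, an Event can be passed into worker functions explicitly, and is_set()/set() are safe to call from any thread.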