
A3C in Tensorflow - Should I use threading or the distributed Tensorflow API


After implementing both, I found threading simpler than the distributed TensorFlow API; however, it also runs slower. The more CPU cores you use, the larger the speed advantage of distributed TensorFlow over threading becomes.
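
For reference, here is a minimal sketch of the threading variant, written against the TF 1.x API (`tf.compat.v1` in TF 2). The `build_network()` helper is hypothetical; it stands in for whatever builds your policy/value ops under a given variable scope:

```python
import threading
import tensorflow as tf  # TF 1.x API (tf.compat.v1 in TF 2)

NUM_WORKERS = 8

def build_network(scope):
    """Hypothetical helper: build policy/value ops under `scope` and
    return that scope's trainable variables."""
    with tf.variable_scope(scope):
        pass  # placeholders, layers, loss and optimizer ops go here
    return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)

# One shared ("global") network plus one local copy per worker thread.
global_vars = build_network("global")
sync_ops = []
for i in range(NUM_WORKERS):
    local_vars = build_network("worker_%d" % i)
    # Op that copies the shared weights into this worker's local network.
    sync_ops.append(tf.group(*[l.assign(g) for l, g in zip(local_vars, global_vars)]))

def run_worker(sess, sync_op):
    while True:
        sess.run(sync_op)  # refresh the local copy of the policy
        # ... act for n steps with the local policy, compute gradients,
        # ... and apply them asynchronously to the *global* variables
        break  # placeholder so this sketch terminates

sess = tf.Session()
sess.run(tf.global_variables_initializer())
threads = [threading.Thread(target=run_worker, args=(sess, op)) for op in sync_ops]
for t in threads: t.start()
for t in threads: t.join()
```

Because all threads share one `tf.Session` and one graph, there is no cross-process communication to set up, which is what makes this variant simpler.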

However, this only holds for asynchronous training. If the available CPU cores are limited and you want to make use of a GPU, you might want to use synchronous training with multiple workers instead, as OpenAI does in their A2C implementation. There, only the environments are parallelized (through multiprocessing), while TensorFlow runs a single graph on the GPU without any graph parallelization (see the sketch below). OpenAI reported better results with synchronous A2C training than with A3C.
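
A rough sketch of that environment-only parallelization, which mirrors the `SubprocVecEnv` pattern from OpenAI's baselines. Here `env_fn` is a hypothetical factory returning a gym-style environment:

```python
import multiprocessing as mp

def env_worker(conn, env_fn):
    """Run one environment in its own process and step it on command."""
    env = env_fn()  # gym-style environment (assumption)
    obs = env.reset()
    while True:
        action = conn.recv()
        if action is None:  # shutdown signal
            break
        obs, reward, done, _ = env.step(action)
        if done:
            obs = env.reset()
        conn.send((obs, reward, done))

def make_vec_env(env_fn, num_envs):
    """Spawn `num_envs` environment processes and return their pipes."""
    conns = []
    for _ in range(num_envs):
        parent, child = mp.Pipe()
        mp.Process(target=env_worker, args=(child, env_fn), daemon=True).start()
        conns.append(parent)
    return conns
```

The single learner process then sends one action per environment, collects the batch of transitions, and runs one synchronous update on the GPU; there is only ever one copy of the network.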

Edit:

Here are some more details:

The problem with distributed TensorFlow for A3C is that you need to run multiple forward passes (to get the actions during the n steps) before you run the learning step. But since training is asynchronous, the other workers will change the network during those n steps. So the policy you acted with no longer matches the current weights, and the learning step would be computed against the wrong ones. Distributed TensorFlow does not prevent that. Therefore you need a global and a local network in distributed TensorFlow as well, which makes the implementation no easier than one based on threading (and with threading you don't have to learn how to make distributed TensorFlow work); a sketch of the usual global/local wiring follows below. Runtime-wise, on 8 CPU cores or fewer there will be no large difference.
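
To make the weight-mismatch fix concrete, here is a sketch of that global/local wiring, again in the TF 1.x API. The names `local_loss`, `local_vars` and `global_vars` are hypothetical (as in the threading sketch above), and the hyperparameters are illustrative, not from the original post:

```python
import tensorflow as tf  # TF 1.x API (tf.compat.v1 in TF 2)

optimizer = tf.train.RMSPropOptimizer(learning_rate=7e-4)

# Gradients are taken with respect to the *local* network, whose weights
# stay fixed during the n-step rollout, but they are applied to the
# *global* variables that all workers share.
grads = tf.gradients(local_loss, local_vars)
grads, _ = tf.clip_by_global_norm(grads, 40.0)
train_op = optimizer.apply_gradients(list(zip(grads, global_vars)))
```

The rollout runs entirely on the local copy, so the gradients stay consistent with the policy that generated the data; only the `apply_gradients` call touches the shared weights.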