Running multiple TensorFlow sessions subsequently (Flask)


The thing is that TensorFlow allocates GPU memory per process, not per Session, so closing the session is not enough to release it (even if you set the allow_growth option).

One mitigation is the allow_growth option, which attempts to allocate only as much GPU memory as runtime allocations require: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, the GPU memory region used by the TensorFlow process is extended. Note that TensorFlow does not release memory, since that can lead to even worse memory fragmentation.
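A minimal sketch of enabling allow_growth, using the TF 1.x Session/ConfigProto API that this question is about (it requires TensorFlow and a GPU to actually take effect):

```python
import tensorflow as tf  # TF 1.x API

# Ask the process to grow its GPU allocation on demand instead of
# grabbing (nearly) all GPU memory up front. Memory is still never
# returned to the driver while the process lives.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
# ... build and run your graph with sess ...
sess.close()
```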

There is an issue on the TF GitHub tracker with some workarounds; for example, you could decorate your run method with the RunAsCUDASubprocess decorator proposed in that thread.
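The idea behind that decorator is that GPU memory is freed when the owning process exits, so each run happens in a short-lived child process. A minimal stdlib-only sketch of the same pattern (the snippet string and helper name are illustrative, not the API from the thread):

```python
import subprocess
import sys

def run_in_subprocess(code):
    """Run a Python snippet in a fresh interpreter process.

    Any GPU memory allocated inside (e.g. by a TF Session built in
    `code`) is released back to the driver when the child exits,
    which the parent process could never achieve on its own.
    """
    out = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Stand-in for "build a Session, run the model, print the result";
# in a real Flask handler you would launch your TF inference here.
result = run_in_subprocess("print(21 * 2)")
print(result)
```

Each call pays process-startup and graph-rebuild cost, so this trades latency for guaranteed memory release between runs.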


This error means that you are trying to fit something bigger than the GPU memory you have available. You may be able to reduce the number of parameters somewhere in your model to make it lighter.
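To see why shrinking layer widths helps, it can be useful to count parameters directly. A small back-of-the-envelope helper (the function and sizes are illustrative, not from the original model):

```python
def dense_params(n_in, n_out):
    """Parameter count of a fully connected layer: weights + biases."""
    return n_in * n_out + n_out

# Halving one hidden layer's width roughly halves its parameter count,
# and with it the memory for weights, gradients, and optimizer state.
big = dense_params(4096, 4096)    # 4096*4096 + 4096 = 16_781_312
small = dense_params(4096, 2048)  # 4096*2048 + 2048 = 8_390_656
print(big, small)
```

Reducing the batch size is the other common lever, since activation memory scales linearly with it.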