Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory

Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory


Try reducing the batch_size attribute to a small number (like 1, 2 or 3). Example:

train_generator = data_generator.flow_from_directory(
    'path_to_the_training_set',
    target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=2,
    class_mode='categorical'
)
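For completeness, here is a minimal sketch of how that generator is then consumed, assuming a TF 2.x Keras workflow with an already compiled model called model (on TF 1.x you would use model.fit_generator instead); the epoch count is just a placeholder:

model.fit(
    train_generator,
    # one full pass over the training set per epoch
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=10,
)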


I was having the same problem while running a TensorFlow container with Docker and a Jupyter notebook. I was able to fix it by increasing the container memory.

On macOS, you can easily do this from:

       Docker Icon > Preferences >  Advanced > Memory

Drag the slider to the maximum (e.g. 4 GB), apply the change, and the Docker engine will restart.

Now run your TensorFlow container again.

It was handy to run the docker stats command in a separate terminal. It shows the container's memory usage in real time, so you can see how much memory consumption is growing:

CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O        PIDS
3170c0b402cc   mytf   0.04%   588.6MiB / 3.855GiB   14.91%   13.1MB / 3.06MB   214MB / 3.13MB   21


Alternatively, you can set the environment variable TF_CPP_MIN_LOG_LEVEL=2 to filter out info and warning messages. I found that on a GitHub issue where people complain about the same output. To do so from within Python, you can use the following:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # must be set before importing TensorFlow to take effect
import tensorflow as tf

You can even turn the logging on and off at will this way. I test for the maximum possible batch size before running my code, and I can disable the warnings and errors while doing so.
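As a minimal sketch of that on/off switching (assuming TF 1.14+ or 2.x): TF_CPP_MIN_LOG_LEVEL only silences the C++ backend if it is set before TensorFlow is imported, while the Python-side logger can be raised and lowered at any point, for example around the batch-size probing:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # must be set before the import below to affect C++ logs
import tensorflow as tf

tf.get_logger().setLevel('ERROR')  # silence Python-level INFO/WARNING while probing batch sizes
# ... try candidate batch sizes here ...
tf.get_logger().setLevel('INFO')   # restore normal verbosity for the real training run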