Can I run Keras model on gpu?


Yes, you can run Keras models on GPU. A few things you will have to check first:

  1. Your system has an NVIDIA GPU (AMD does not work yet).
  2. You have installed the GPU version of TensorFlow.
  3. You have installed CUDA (follow the CUDA installation instructions).
  4. Verify that TensorFlow can see the GPU, as shown below.

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

For TF 2.0 and above:

sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))

(Thanks @nbro and @Ferro for pointing this out in the comments)

OR

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output will be something like this:

[name: "/cpu:0"
device_type: "CPU",
name: "/gpu:0"
device_type: "GPU"]

Once all this is done, your model will run on the GPU.

To check if Keras (>= 2.1.1) is using the GPU:

from keras import backend as K
K.tensorflow_backend._get_available_gpus()
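If you want to force placement yourself rather than rely on automatic placement, a minimal sketch (assuming TF 1.x with the TensorFlow backend; the model and layer sizes are purely illustrative) is to build the model inside a tf.device scope:

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# Build the model inside a device scope to pin its ops to the first GPU.
with tf.device('/gpu:0'):
    model = Sequential()
    model.add(Dense(32, activation='relu', input_shape=(784,)))
    model.add(Dense(10, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy')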

All the best.


Sure. I suppose that you have already installed TensorFlow for GPU.

You need to add the following block after importing Keras. I am working on a machine which has a 56-core CPU and a GPU.

import keras
import tensorflow as tf

config = tf.ConfigProto(device_count={'GPU': 1, 'CPU': 56})
sess = tf.Session(config=config)
keras.backend.set_session(sess)

Of course, this usage enforces my machine's maximum limits. You can decrease the CPU and GPU consumption values.
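If you also don't want TensorFlow to grab all of the GPU's memory, a minimal sketch using the same TF 1.x ConfigProto pattern caps the per-process memory fraction (the 0.5 here is just an illustrative value):

import keras
import tensorflow as tf

# Illustrative: let TensorFlow use at most ~50% of the GPU's memory.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)
keras.backend.set_session(sess)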


2.0 Compatible Answer: While the above-mentioned answers explain in detail how to use a GPU with a Keras model, I want to explain how it can be done for TensorFlow version 2.0.

To know how many GPUs are available, we can use the below code:

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

To find out which devices your operations and tensors are assigned to, put tf.debugging.set_log_device_placement(True) as the first statement of your program.

Enabling device placement logging causes any Tensor allocations or operations to be printed. For example, running the below code:

tf.debugging.set_log_device_placement(True)

# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)

gives the output shown below:

Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)
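Beyond logging placement, TF 2.x also lets you pin operations to a specific device and request on-demand memory allocation. A minimal sketch (assuming at least one visible GPU; nothing here is specific to Keras):

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Explicitly pin a computation to the first GPU.
with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)
print(b)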

For more information, refer to this link.