Clearing TensorFlow GPU memory after model execution
A GitHub issue from June 2016 (https://github.com/tensorflow/tensorflow/issues/1727) describes the underlying problem:
currently the Allocator in the GPUDevice belongs to the ProcessState, which is essentially a global singleton. The first session using GPU initializes it, and frees itself when the process shuts down.
Thus the only workaround is to run the computation in a separate process and shut that process down once the computation is finished.
Example Code:
import tensorflow as tf
import multiprocessing
import numpy as np

def run_tensorflow():

    n_input = 10000
    n_classes = 1000

    # Create model
    def multilayer_perceptron(x, weight):
        # Hidden layer with RELU activation
        layer_1 = tf.matmul(x, weight)
        return layer_1

    # Store layers weight & bias
    weights = tf.Variable(tf.random_normal([n_input, n_classes]))

    x = tf.placeholder("float", [None, n_input])
    y = tf.placeholder("float", [None, n_classes])
    pred = multilayer_perceptron(x, weights)

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for i in range(100):
            batch_x = np.random.rand(10, 10000)
            batch_y = np.random.rand(10, 1000)
            sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

    print("finished doing stuff with tensorflow!")

if __name__ == "__main__":

    # option 1: execute code with extra process
    p = multiprocessing.Process(target=run_tensorflow)
    p.start()
    p.join()

    # wait until user presses enter key
    input()

    # option 2: just execute the function
    run_tensorflow()

    # wait until user presses enter key
    input()
So if you call run_tensorflow() within a process you created and then shut that process down (option 1), the GPU memory is freed. If you just call run_tensorflow() directly (option 2), the memory is not freed after the function returns.
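If you need the child process to hand a result back to the parent, the same idea can be wrapped in a small helper. The sketch below uses only the standard multiprocessing module; run_in_process and _worker are illustrative names for this example, not anything provided by TensorFlow.

import multiprocessing

def _worker(queue, fn, args, kwargs):
    # Runs in the child process, which owns its own CUDA context.
    queue.put(fn(*args, **kwargs))

def run_in_process(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) in a child process and return its result.

    All GPU memory TensorFlow allocated inside fn is released when the
    child process exits.
    """
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(queue, fn, args, kwargs))
    p.start()
    result = queue.get()  # read before join() so the queue cannot block the child
    p.join()
    return result

# usage, assuming run_tensorflow from the example above is defined at module level:
# run_in_process(run_tensorflow)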
You can use the numba library to release all of the GPU memory:
pip install numba
from numba import cuda

device = cuda.get_current_device()
device.reset()
This will release all of the GPU memory that the current process allocated on that device.
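Note that device.reset() tears down the CUDA context of the current process, so call it only after the TensorFlow session has been closed, and create a new session afterwards if you need the GPU again. A minimal sketch of that ordering, where sess is assumed to be an existing tf.Session:

from numba import cuda

sess.close()                       # let TensorFlow release its resources first
cuda.get_current_device().reset()  # then destroy the CUDA context and free the GPU memory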
I use numba to release the GPU; with TensorFlow alone I could not find an effective method.
import tensorflow as tf
from numba import cuda

a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
with tf.device('/gpu:1'):
    c = a + b

TF_CONFIG = tf.ConfigProto(
    gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.1),
    allow_soft_placement=True)

sess = tf.Session(config=TF_CONFIG)
sess.run(tf.global_variables_initializer())
i = 1
while i < 1000:
    i = i + 1
    print(sess.run(c))
sess.close()

# if you don't use numba, the GPU memory can't be released
cuda.select_device(1)
cuda.close()

with tf.device('/gpu:1'):
    c = a + b

TF_CONFIG = tf.ConfigProto(
    gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.5),
    allow_soft_placement=True)

sess = tf.Session(config=TF_CONFIG)
sess.run(tf.global_variables_initializer())
while True:
    print(sess.run(c))
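The select/close pair at the end can also be wrapped in a small convenience function to call between runs. free_gpu below is an illustrative helper, not a TensorFlow or numba API, and a new tf.Session must be created before the GPU is used again.

from numba import cuda

def free_gpu(sess, gpu_index=1):
    # Hypothetical wrapper around the pattern shown above.
    sess.close()                   # release TensorFlow's resources first
    cuda.select_device(gpu_index)  # bind this thread to the GPU that was used
    cuda.close()                   # destroy its CUDA context and free the memory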