Reusing TensorFlow session in multiple threads causes crash


It's not only the Session that has a per-thread default, but also the Graph. When you pass the session into another thread and call run on it there, the default graph in that thread will be a different one.
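As a loose analogy (no TensorFlow required): the "default" graph/session is per-thread state, much like Python's threading.local. A default entered on the main thread is invisible inside a worker thread, which is exactly why the worker below sees nothing:

```python
import threading

# Per-thread storage, mimicking how TensorFlow tracks the "default"
# graph/session stack separately for each thread.
_state = threading.local()

def set_default_graph(value):
    _state.graph = value

def get_default_graph():
    # Returns None when this thread never set a default.
    return getattr(_state, "graph", None)

set_default_graph("main-thread-graph")

seen_in_worker = []

def worker():
    # The default set on the main thread is invisible here.
    seen_in_worker.append(get_default_graph())

t = threading.Thread(target=worker)
t.start()
t.join()

print(get_default_graph())   # main-thread-graph
print(seen_in_worker[0])     # None
```

This is why wrapping the thread body in `with sess.graph.as_default():` fixes the crash: it pushes the right graph onto the new thread's own default stack.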

You can amend your thread_function like this to make it work:

def thread_function(sess, i):
    with sess.graph.as_default():
        inn = [1.3, 4.5]
        A = tf.placeholder(dtype=float, shape=(None), name="input")
        P = tf.Print(A, [A])
        Q = tf.add(A, P)
        sess.run(Q, feed_dict={A: inn})

However, I wouldn't hope for any significant speedup. Python threading isn't what it is in some other languages: because of the GIL, only certain operations, such as I/O, actually run in parallel, so threads aren't very useful for CPU-heavy work. Multiprocessing can run code truly in parallel, but you wouldn't be able to share the same session across processes.
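A quick way to see what the GIL does and doesn't allow, with no TensorFlow involved: time.sleep releases the GIL just as real I/O does, so sleeping threads overlap; a CPU-bound loop in their place would largely serialize.

```python
import threading
import time

def io_bound_task():
    # time.sleep releases the GIL, like real I/O does, so these
    # threads genuinely run concurrently. A CPU-bound loop here
    # would mostly execute one thread at a time instead.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=io_bound_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Four 0.2 s sleeps overlap: total wall time is ~0.2 s, not 0.8 s.
print(f"elapsed: {elapsed:.2f}s")
```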


Extending de1's answer with another resource on GitHub: tensorflow/tensorflow#28287 (comment)

The following resolved TensorFlow's multithreading compatibility issues for me:

# on thread 1
session = tf.Session(graph=tf.Graph())
with session.graph.as_default():
    k.backend.set_session(session)
    model = k.models.load_model(filepath)

# on thread 2
with session.graph.as_default():
    k.backend.set_session(session)
    model.predict(x)

This keeps both the Session and the Graph around for other threads.
The model is loaded in their "context" (instead of the default ones) and kept for other threads to use.
(By default the model is loaded into the default Session and the default Graph.)
Another plus is that the session and graph are kept in the same object, which makes them easier to handle.
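If several worker threads end up calling predict on the same shared model, it can also help to serialize those calls with a lock (the names SharedModel and predict here are illustrative, not TensorFlow API; the real prediction is stubbed out so the sketch runs standalone):

```python
import threading

class SharedModel:
    """Bundles what would be the session, graph, and model in one
    object, and serializes access from worker threads with a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self.calls = []  # stand-in for state touched during prediction

    def predict(self, x):
        # In the real code this body would be roughly:
        #   with self.graph.as_default():
        #       k.backend.set_session(self.session)
        #       return self.model.predict(x)
        with self._lock:
            self.calls.append(x)
            return x * 2  # dummy "prediction"

shared = SharedModel()
results = []
results_lock = threading.Lock()

def worker(i):
    y = shared.predict(i)
    with results_lock:
        results.append(y)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6]
```

Whether the lock is strictly necessary depends on your setup: Session.run itself is documented as thread-safe, but surrounding Keras state may not be, so serializing is the conservative choice.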