Tensorflow Restart queue runners: different train and test queue multithreading

It looks like the coordinators have some linkage with the current thread context. Even if you have different graphs, sessions and coordinators within the same thread context, one coordinator terminating can cause the other coordinators to exit forcibly. I avoided this problem by running training and evaluation in separate threads (see the sketch below). Hope this helps.
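Here is a minimal sketch of that idea, using the TF 1.x queue-runner API. The file names (`train.tfrecords`, `eval.tfrecords`) and the per-thread build/step logic are placeholders of my own, not from the original setup; the point is only that each thread owns its graph, session, coordinator and queue-runner threads, so stopping one pipeline does not tear down the other.

```python
import threading
import tensorflow as tf


def run_training():
    # Own graph, session and coordinator, confined to this thread.
    with tf.Graph().as_default():
        queue = tf.train.string_input_producer(["train.tfrecords"])  # hypothetical input file
        reader = tf.TFRecordReader()
        _, example = reader.read(queue)
        with tf.Session() as sess:
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            try:
                while not coord.should_stop():
                    sess.run(example)  # replace with your actual training step
            except tf.errors.OutOfRangeError:
                pass
            finally:
                coord.request_stop()
                coord.join(threads)


def run_evaluation():
    # Same pattern, fully independent of the training coordinator.
    with tf.Graph().as_default():
        queue = tf.train.string_input_producer(["eval.tfrecords"], num_epochs=1)  # hypothetical
        reader = tf.TFRecordReader()
        _, example = reader.read(queue)
        with tf.Session() as sess:
            sess.run(tf.local_variables_initializer())  # num_epochs uses a local variable
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            try:
                while not coord.should_stop():
                    sess.run(example)  # replace with your actual evaluation step
            except tf.errors.OutOfRangeError:
                pass
            finally:
                coord.request_stop()
                coord.join(threads)


# Keep training and evaluation in separate threads so each coordinator
# lives in its own thread context; one stopping does not stop the other.
train_thread = threading.Thread(target=run_training)
eval_thread = threading.Thread(target=run_evaluation)
train_thread.start()
eval_thread.start()
train_thread.join()
eval_thread.join()
```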