What is the best between multiple small h5 files or one huge one?


I would go for multiple files if I were you (but read till the end).

Intuitively, you could load at least some of the files into memory, speeding up the process a little (it is unlikely you would be able to do so with 20GB; if you are, then you definitely should, as RAM access is much faster than disk).
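
For instance, a minimal sketch with `h5py` (the `shard_*.h5` file names and the `"data"` dataset key are hypothetical placeholders for your own layout):

```python
import h5py
import numpy as np

def load_shard_into_ram(path: str) -> np.ndarray:
    with h5py.File(path, "r") as f:
        # Slicing with [...] materializes the whole dataset as an
        # in-memory numpy array, so later reads never touch the disk.
        return f["data"][...]

# Load a handful of small shards up front (assumed file names).
shards = [load_shard_into_ram(f"shard_{i}.h5") for i in range(4)]
```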

You could cache those examples (inside a custom torch.utils.data.Dataset instance) during the first pass and retrieve the cached examples (from a list, or preferably a more memory-efficient data structure with better cache locality) instead of reading from disk; this is similar to Tensorflow's tf.data.Dataset object and its cache method. A sketch of the idea follows.
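
A minimal sketch of that caching `Dataset`, assuming each example lives in its own HDF5 file under a `"data"` key (the file layout and key name are assumptions, not a fixed API):

```python
import h5py
import torch
from torch.utils.data import Dataset

class CachedH5Dataset(Dataset):
    def __init__(self, paths):
        self.paths = paths
        self._cache = [None] * len(paths)  # filled lazily on first access

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        if self._cache[idx] is None:
            # First pass: read from disk and keep the tensor in memory.
            with h5py.File(self.paths[idx], "r") as f:
                self._cache[idx] = torch.from_numpy(f["data"][...])
        # Subsequent passes are served straight from RAM.
        return self._cache[idx]
```

One caveat: with a `torch.utils.data.DataLoader` and `num_workers > 0`, each worker process holds its own copy of the cache, so it won't be shared across workers (and is discarded between epochs unless `persistent_workers=True`).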

On the other hand, this approach is more cumbersome and harder to implement correctly, though if you are only reading the files with multiple threads you should be fine, and there shouldn't be any locks on that operation.

Remember to measure your approach with PyTorch's profiler (torch.utils.bottleneck) to pinpoint the exact problems and verify your solutions.
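
For reference, the bottleneck tool is run as a wrapper around your entry-point script (`train.py` below is a placeholder for your own script):

```sh
python -m torch.utils.bottleneck train.py
```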