
Incremental writes to hdf5 with h5py


Per the FAQ, you can expand the dataset using dset.resize. For example,

import os
import h5py
import numpy as np

path = '/tmp/out.h5'
if os.path.exists(path):
    os.remove(path)  # start with a fresh file

with h5py.File(path, "a") as f:
    dset = f.create_dataset('voltage284', (10**5,), maxshape=(None,),
                            dtype='i8', chunks=(10**4,))
    dset[:] = np.random.random(dset.shape)
    print(dset.shape)
    # (100000,)

    for i in range(3):
        dset.resize(dset.shape[0] + 10**4, axis=0)
        dset[-10**4:] = np.random.random(10**4)
        print(dset.shape)
        # (110000,)
        # (120000,)
        # (130000,)
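
If the incoming batches are not all the same size, the same resize-then-assign pattern can be wrapped in a small helper. This is only a sketch built on the example above, not part of the original answer; the helper name append_to_dataset and the file path are made up for illustration.

import h5py
import numpy as np

def append_to_dataset(dset, batch):
    """Grow a 1-D resizable dataset along axis 0 and write `batch` at the end."""
    batch = np.asarray(batch)
    old_size = dset.shape[0]
    dset.resize(old_size + batch.shape[0], axis=0)
    dset[old_size:] = batch

with h5py.File('/tmp/append_demo.h5', 'w') as f:
    # Start empty; maxshape=(None,) allows unlimited growth along axis 0.
    dset = f.create_dataset('samples', shape=(0,), maxshape=(None,),
                            dtype='f8', chunks=(10**4,))
    for batch in (np.random.random(3), np.random.random(7), np.random.random(5)):
        append_to_dataset(dset, batch)
    print(dset.shape)
    # (15,)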


As @unutbu pointed out, dset.resize is an excellent option. It may also be worthwhile to look at pandas and its HDF5 support, which could be useful given your workflow. It sounds like HDF5 is a reasonable choice for your needs, but your problem may be expressed better with an additional layer on top. A rough sketch of the pandas approach follows below.
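
For illustration only, here is roughly how incremental appends might look with pandas' HDF5 support (which requires PyTables); the file path and key name are placeholders, and append=True needs the 'table' format rather than the default 'fixed' format.

import os
import numpy as np
import pandas as pd

path = '/tmp/voltages.h5'
if os.path.exists(path):
    os.remove(path)  # start fresh for the demo

# Write each new batch by appending to the same key.
for i in range(3):
    chunk = pd.DataFrame({'voltage': np.random.random(10**4)})
    chunk.to_hdf(path, key='voltage284', mode='a', format='table', append=True)

df = pd.read_hdf(path, 'voltage284')
print(len(df))
# 30000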

One big thing to consider is the orientation of the data. If your workload is read-heavy and you mostly fetch data by column, you may want to store the data transposed so those reads happen by row, since HDF5 stores data in row-major order.
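
As a rough illustration of the orientation point (the shapes, dataset names, and file path here are invented for the example): storing the array transposed turns a per-column read into a contiguous per-row read.

import h5py
import numpy as np

data = np.random.random((1000, 50))   # e.g. 1000 samples x 50 channels

with h5py.File('/tmp/orientation_demo.h5', 'w') as f:
    # Row-major layout: reading one column touches every row of the dataset.
    f.create_dataset('by_sample', data=data)
    # Transposed layout: each original column becomes a row,
    # so fetching one channel is a single contiguous read.
    f.create_dataset('by_channel', data=data.T)

with h5py.File('/tmp/orientation_demo.h5', 'r') as f:
    col_strided = f['by_sample'][:, 7]       # strided access across rows
    col_contig = f['by_channel'][7, :]       # contiguous access along one row
    print(np.allclose(col_strided, col_contig))
    # True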