
How to handle large amounts of data in TensorFlow?


The utilities for .npy files indeed allocate the whole array in memory. I'd recommend converting all of your numpy arrays to the TFRecord format and using those files for training. This is one of the most efficient ways to read a large dataset in TensorFlow.

Convert to TFRecords

import tensorflow as tf

def array_to_tfrecords(X, y, output_file):
  # Describe one example as two flat lists of floats.
  feature = {
    'X': tf.train.Feature(float_list=tf.train.FloatList(value=X.flatten())),
    'y': tf.train.Feature(float_list=tf.train.FloatList(value=y.flatten()))
  }
  example = tf.train.Example(features=tf.train.Features(feature=feature))
  serialized = example.SerializeToString()
  # TF 1.x API: the writer appends serialized examples to a .tfrecord file.
  writer = tf.python_io.TFRecordWriter(output_file)
  writer.write(serialized)
  writer.close()
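For illustration, here is one way the helper above might be called on in-memory numpy arrays. The array names, shapes (345 features, 5 targets, to match the reader below), and the one-file-per-example naming pattern are assumptions for the sketch, not requirements:

import numpy as np

# Hypothetical data: 1000 samples, each written to its own small TFRecord file.
X_all = np.random.rand(1000, 345).astype(np.float32)
y_all = np.random.rand(1000, 5).astype(np.float32)

for i, (X, y) in enumerate(zip(X_all, y_all)):
  array_to_tfrecords(X, y, "example_%05d.tfrecord" % i)

For real datasets you would normally keep a single TFRecordWriter open per shard and write many serialized examples into it, rather than creating one file per example.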

A complete example that deals with images can be found here.

Read TFRecordDataset

def parse_proto(example_proto):
  # Shapes must match what was written: here X has 345 floats and y has 5.
  features = {
    'X': tf.FixedLenFeature((345,), tf.float32),
    'y': tf.FixedLenFeature((5,), tf.float32),
  }
  parsed_features = tf.parse_single_example(example_proto, features)
  return parsed_features['X'], parsed_features['y']

def read_tfrecords(file_names=("file1.tfrecord", "file2.tfrecord", "file3.tfrecord"),
                   buffer_size=10000,
                   batch_size=100):
  # Build a streaming input pipeline: examples are read from disk,
  # parsed, shuffled, repeated and batched without loading everything into memory.
  dataset = tf.contrib.data.TFRecordDataset(file_names)
  dataset = dataset.map(parse_proto)
  dataset = dataset.shuffle(buffer_size)
  dataset = dataset.repeat()
  dataset = dataset.batch(batch_size)
  return tf.contrib.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
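A minimal consumption sketch, assuming a TF 1.x session environment; the file name, pipeline parameters, and step count are placeholders. Because read_tfrecords returns a reinitializable iterator without the dataset it was built from, the sketch rebuilds an identical pipeline to initialize it:

dataset = tf.contrib.data.TFRecordDataset(("file1.tfrecord",))
dataset = dataset.map(parse_proto).shuffle(10000).repeat().batch(100)

iterator = read_tfrecords(file_names=("file1.tfrecord",))
init_op = iterator.make_initializer(dataset)
X_batch, y_batch = iterator.get_next()

with tf.Session() as sess:
  sess.run(init_op)
  for _ in range(10):                  # a few steps, just to show the flow
    X_val, y_val = sess.run([X_batch, y_batch])
    print(X_val.shape, y_val.shape)    # e.g. (100, 345) and (100, 5)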

The tf.data manual can be found here.