
Python 3, read/write compressed json objects from/to gzip file


You have four steps of transformation here.

  1. a Python data structure (nested dicts, lists, strings, numbers, booleans)
  2. a Python string containing a serialized representation of that data structure ("JSON")
  3. a bytes object containing an encoded representation of that string ("UTF-8")
  4. a bytes object containing a shorter, compressed representation of the previous one ("gzip")

So let's take these steps one by one.
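Before touching any file, the four transformations can be sketched entirely in memory using `gzip.compress` and `gzip.decompress` (the sample `data` value here is illustrative):

```python
import gzip
import json

data = {'what': 'whatever0', 'where': [1, 2, 3]}  # 1. Python data structure

json_str = json.dumps(data)                       # 2. str containing JSON
json_bytes = json_str.encode('utf-8')             # 3. bytes (UTF-8)
gz_bytes = gzip.compress(json_bytes)              # 4. gzip-compressed bytes

# Reversing the steps recovers the original structure
assert json.loads(gzip.decompress(gz_bytes).decode('utf-8')) == data
```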

import gzip
import json

data = []
for i in range(N):
    uid = "whatever%i" % i
    dv = [1, 2, 3]
    data.append({
        'what': uid,
        'where': dv
    })                                           # 1. data

json_str = json.dumps(data) + "\n"               # 2. string (i.e. JSON)
json_bytes = json_str.encode('utf-8')            # 3. bytes (i.e. UTF-8)

with gzip.open(jsonfilename, 'w') as fout:       # 4. fewer bytes (i.e. gzip)
    fout.write(json_bytes)

Note that adding "\n" is completely superfluous here: it doesn't break anything, but it serves no purpose either. I've added it only because you have it in your code sample.
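As a quick check that the trailing newline is harmless, note that `json.loads` ignores surrounding whitespace:

```python
import json

# json.loads skips leading/trailing whitespace, so "\n" changes nothing
assert json.loads('[1, 2, 3]\n') == json.loads('[1, 2, 3]')
```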

Reading works exactly the other way around:

with gzip.open(jsonfilename, 'r') as fin:        # 4. gzip
    json_bytes = fin.read()                      # 3. bytes (i.e. UTF-8)

json_str = json_bytes.decode('utf-8')            # 2. string (i.e. JSON)
data = json.loads(json_str)                      # 1. data

print(data)

Of course the steps can be combined:

with gzip.open(jsonfilename, 'w') as fout:
    fout.write(json.dumps(data).encode('utf-8'))

and

with gzip.open(jsonfilename, 'r') as fin:
    data = json.loads(fin.read().decode('utf-8'))


The solution mentioned here (thanks, @Rafe) has a big advantage: since encoding is done on the fly, you don't create two complete intermediate objects (the JSON string and its UTF-8 bytes) for the generated JSON. With big objects, this saves memory.

with gzip.open(jsonfilename, 'wt', encoding='UTF-8') as zipfile:
    json.dump(data, zipfile)

Reading and decoding are just as simple:

with gzip.open(jsonfilename, 'rt', encoding='UTF-8') as zipfile:
    my_object = json.load(zipfile)
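Putting the text-mode variants together, a complete round trip might look like this (the `tempfile`-based file name is my addition, just to make the sketch self-contained):

```python
import gzip
import json
import os
import tempfile

data = [{'what': 'whatever%i' % i, 'where': [1, 2, 3]} for i in range(3)]

jsonfilename = os.path.join(tempfile.mkdtemp(), 'data.json.gz')

with gzip.open(jsonfilename, 'wt', encoding='UTF-8') as zipfile:
    json.dump(data, zipfile)             # serialize, encode, compress in one go

with gzip.open(jsonfilename, 'rt', encoding='UTF-8') as zipfile:
    my_object = json.load(zipfile)       # decompress, decode, parse in one go

assert my_object == data
```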