
Write POJOs to a Parquet file using reflection


If you want to go through Avro, you have two options:

1) Let Avro generate your POJOs (see the tutorial here). The generated classes extend SpecificRecord and can then be used with AvroParquetWriter.
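For option 1, a minimal sketch of how a generated class could be written with AvroParquetWriter (the User class, its fields, and the output path are hypothetical, and the builder API assumes a reasonably recent parquet-avro version):

    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetWriter;
    import org.apache.parquet.hadoop.ParquetWriter;
    import org.apache.parquet.hadoop.metadata.CompressionCodecName;

    public class WriteGeneratedPojo {
        public static void main(String[] args) throws Exception {
            // "User" is assumed to be generated by the Avro compiler and extends SpecificRecordBase
            User user = User.newBuilder()
                    .setName("alice")
                    .setAge(30)
                    .build();

            try (ParquetWriter<User> writer = AvroParquetWriter.<User>builder(new Path("users.parquet"))
                    .withSchema(User.getClassSchema())               // schema embedded in the generated class
                    .withCompressionCodec(CompressionCodecName.SNAPPY)
                    .build()) {
                writer.write(user);
            }
        }
    }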

2) Write the conversion from your POJO to GenericRecord yourself. You can do this manually, or more generically with reflection. However, I ran into difficulties with this approach when I tried to read the data back: based on the supplied schema, Avro found the POJO on the classpath and tried to instantiate a SpecificRecord instead of a GenericRecord. For that reason I went with option 1.
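For reference, the "manual" variant of option 2 might look roughly like the following sketch (the Person POJO, its fields, and the schema string are purely illustrative):

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    public class ManualConversion {
        private static final Schema SCHEMA = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
              + "{\"name\":\"name\",\"type\":\"string\"},"
              + "{\"name\":\"age\",\"type\":\"int\"}]}");

        static GenericRecord toGenericRecord(Person p) {
            GenericRecord record = new GenericData.Record(SCHEMA);
            record.put("name", p.getName());   // copy each field by hand
            record.put("age", p.getAge());
            return record;
        }
    }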

Parquet now also supports writing POJOs directly. Here is the pull request on the Parquet GitHub page. However, I don't think this is part of an official release yet; in other words, I did not find this code on Maven.


DISCLAIMER: The following code was written when I was in a hurry. It is not efficient, and future versions of Parquet will surely handle this more directly. That said, it is a lightweight (if inefficient) approach to what you need. The strategy is POJO -> AVRO -> PARQUET.

  1. POJO -> AVRO: Declare a schema via reflection. Declare writers and readers based on the schema. At conversion time, write the object to a byte stream and read it back as Avro.
  2. AVRO -> PARQUET: use the AvroParquetWriter included in the parquet-mr project.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.avro.reflect.ReflectData;
    import org.apache.avro.reflect.ReflectDatumWriter;

    // Members of YOURCLASS. The schema is derived from the POJO via reflection;
    // AllowNull makes every field nullable in the generated schema.
    private static final Schema avroSchema = ReflectData.AllowNull.get().getSchema(YOURCLASS.class);
    private static final ReflectDatumWriter<YOURCLASS> reflectDatumWriter = new ReflectDatumWriter<>(avroSchema);
    private static final GenericDatumReader<Object> genericRecordReader = new GenericDatumReader<>(avroSchema);

    // Serialize this object with the reflect writer, then read it back as a GenericRecord.
    public GenericRecord toAvroGenericRecord() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        reflectDatumWriter.write(this, EncoderFactory.get().directBinaryEncoder(bytes, null));
        return (GenericRecord) genericRecordReader.read(null, DecoderFactory.get().binaryDecoder(bytes.toByteArray(), null));
    }
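For step 2, a sketch of how the resulting GenericRecord could then be handed to AvroParquetWriter (the output path and surrounding loop are placeholders; with older parquet-avro releases the constructor new AvroParquetWriter<>(path, schema) would be used instead of the builder):

    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetWriter;
    import org.apache.parquet.hadoop.ParquetWriter;

    try (ParquetWriter<GenericRecord> writer =
            AvroParquetWriter.<GenericRecord>builder(new Path("data.parquet"))
                    .withSchema(avroSchema)              // same reflection-derived schema as above
                    .build()) {
        for (YOURCLASS obj : yourObjects) {              // placeholder collection of POJOs
            writer.write(obj.toAvroGenericRecord());
        }
    }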

One more thing: the Parquet writers currently seem to be very strict about null fields. Make sure none of your fields are null before attempting to write to Parquet.


I wasn't able to find an existing solution, so I implemented it myself. Here is the link to the implementation: https://gist.github.com/alexeygrigorev/eab72e40c6051e0163a6693054906d66

In short, it does the following:

  • uses reflection to get the Avro schema from the POJO
  • using the schema and reflection, it converts POJOs to GenericRecord objects
  • reflection is applied recursively if the POJO contains other POJOs or lists of POJOs (a rough sketch of this idea follows below)
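The gist itself is the authoritative reference; purely as an illustration of the idea (this is not the gist's code), a recursive reflection-based conversion could be structured roughly like this:

    import java.lang.reflect.Field;
    import java.lang.reflect.Modifier;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.reflect.ReflectData;

    public class PojoToGenericRecord {

        // Derive the schema from the POJO class and copy its fields into a GenericRecord.
        public static GenericRecord convert(Object pojo) throws IllegalAccessException {
            Schema schema = ReflectData.AllowNull.get().getSchema(pojo.getClass());
            GenericRecord record = new GenericData.Record(schema);
            for (Field field : pojo.getClass().getDeclaredFields()) {
                if (Modifier.isStatic(field.getModifiers()) || Modifier.isTransient(field.getModifiers())) {
                    continue;                                    // ReflectData skips these fields too
                }
                field.setAccessible(true);
                record.put(field.getName(), toAvroValue(field.get(pojo)));
            }
            return record;
        }

        // Primitives and strings map directly; lists are converted element by element;
        // anything else is treated as a nested POJO and converted recursively.
        private static Object toAvroValue(Object value) throws IllegalAccessException {
            if (value == null || value instanceof Number || value instanceof CharSequence
                    || value instanceof Boolean) {
                return value;
            }
            if (value instanceof List) {
                List<Object> converted = new ArrayList<>();
                for (Object item : (List<?>) value) {
                    converted.add(toAvroValue(item));
                }
                return converted;
            }
            return convert(value);
        }
    }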