appending to ORC file


Update 2017

Yes, now you can! Hive has gained ACID support, but more simply you can append data to your table using append mode, mode("append"), with Spark.

Below is an example:

Seq((10, 20)).toDF("a", "b").write.mode("overwrite").saveAsTable("tab1")
Seq((20, 30)).toDF("a", "b").write.mode("append").saveAsTable("tab1")
sql("select * from tab1").show
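The first write creates (or replaces) table tab1; the second run appends a new row rather than overwriting, so the final select returns both rows, (10, 20) and (20, 30).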

Or see a more complete example with ORC here; below is an extract:

val command = spark.read.format("jdbc").option("url" .... ).load()
command.write.mode("append").format("orc").option("orc.compression", "gzip").save("command.orc")
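Note that mode("append") never modifies an existing ORC file; each batch adds new ORC files under the command.orc directory, and readers see them all as one dataset. A minimal sketch to read the accumulated batches back, assuming the path used above:

val history = spark.read.orc("command.orc")  // reads every ORC file under the directory
history.show()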


No, you cannot append directly to an ORC file. Nor to a Parquet file. Nor to any columnar format whose complex internal structure interleaves metadata with the data.

Quoting the official "Apache Parquet" site...

Metadata is written after the data to allow for single pass writing.

Then quoting the official "Apache ORC" site...

Since HDFS does not support changing the data in a file after it is written, ORC stores the top level index at the end of the file (...) The file’s tail consists of 3 parts; the file metadata, file footer and postscript.

Well, technically, nowadays you can append to an HDFS file; you can even truncate it. But these tricks are only useful for some edge cases (e.g. Flume feeding messages into an HDFS "log file", micro-batch-wise, with hflush from time to time).
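For completeness, here is a minimal sketch of such an HDFS-level append with the Hadoop FileSystem API; the path is an assumption, and the cluster must allow appends:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())
val out = fs.append(new Path("/flume/app.log"))  // hypothetical log file
out.write("one more event\n".getBytes("UTF-8"))
out.hflush()  // make the new bytes visible to readers without closing the file
out.close()

This works at the byte level; it does not make a valid ORC file any longer, which is exactly why the columnar formats above put their metadata in the tail.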

For Hive transaction support, a different trick is used: creating a new ORC file on each transaction (i.e. micro-batch), with periodic compaction jobs running in the background, à la HBase.
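As an illustration (not Hive's exact internals), a sketch over Hive JDBC; the URL, table name, and columns are assumptions, and it requires HiveServer2 plus a cluster configured for Hive transactions:

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default")
val stmt = conn.createStatement()
// Pre-Hive-3 ACID rules: the table must be bucketed, stored as ORC, and flagged transactional.
stmt.execute("CREATE TABLE events (id INT, msg STRING) CLUSTERED BY (id) INTO 4 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional'='true')")
// Each INSERT lands in its own delta directory of fresh ORC files...
stmt.execute("INSERT INTO events VALUES (1, 'hello')")
// ...which a background compaction later merges; this just requests one explicitly.
stmt.execute("ALTER TABLE events COMPACT 'major'")
conn.close()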


Yes, this is possible through Hive, in which you can basically 'concatenate' newer data. From the official Hive documentation: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-WhatisACIDandwhyshouldyouuseit?
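In practice the merge is Hive's documented ALTER TABLE ... CONCATENATE, which rewrites a table's (or partition's) small ORC files into fewer, larger ones. A sketch over Hive JDBC, with the connection URL and table name as assumptions:

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default")
conn.createStatement().execute("ALTER TABLE tab1 CONCATENATE")  // merge the table's small ORC files
conn.close()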