Import data from HDFS to HBase (cdh3u2)

I like using Apache Pig for ingest into HBase because it is simple, straightforward, and flexible.

Here is a Pig script that will do the job for you, once the table and column family exist. To create them from the HBase shell:

$ hbase shell
hbase> create 'mydata', 'mycf'

Move the file to HDFS:

$ hadoop fs -put /home/file.txt /user/surendhar/file.txt

Then write the Pig script to store the data with HBaseStorage (you may have to look up how to set up and run Pig):

A = LOAD 'file.txt' USING PigStorage(',') AS (strdata:chararray, intdata:long);

STORE A INTO 'hbase://mydata'
    USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('mycf:intdata');

Note that in the above script, the key is going to be strdata. If you want to build your own key from the data, use a FOREACH statement to generate it; HBaseStorage assumes that the first field of the relation being stored (A::strdata in this case) is the row key.
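For example, here is a minimal sketch of deriving a row key with FOREACH before the STORE (the 'row_' prefix and the relation name B are only illustrative, not something from the original data):

A = LOAD 'file.txt' USING PigStorage(',') AS (strdata:chararray, intdata:long);

-- Put the generated key first; every field after it maps to the column list.
B = FOREACH A GENERATE CONCAT('row_', strdata) AS rowkey, intdata;

STORE B INTO 'hbase://mydata'
    USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('mycf:intdata');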


Some other options would be:

  • Write a Java MapReduce job to do the same thing as above.
  • Interact directly with the HTable client and put the data in row by row. This should only be done for much smaller files.
  • Push the data up with the hbase shell using some sort of script (e.g., sed, Perl, Python) that transforms the lines of CSV into shell put commands, as in the example below. Again, this should only be done if the number of records is small.

    $ cat /home/file.txt | transform.pl
    put 'mydata', 'one', 'mycf:intdata', '1'
    put 'mydata', 'two', 'mycf:intdata', '2'
    put 'mydata', 'three', 'mycf:intdata', '3'

    $ cat /home/file.txt | transform.pl | hbase shell