When it comes to MapReduce, how are Accumulo tablets mapped to an HDFS block?


To answer your questions directly:

How are the tablets mapped to a Datanode or HDFS block? Obviously, one tablet is split into multiple HDFS blocks (8 in this case), so would they be stored on the same or different datanode(s), or does it not matter?

Tablets are stored in blocks like all other files in HDFS. You will typically see all blocks for a single file on at least one datanode (this isn't always the case, but it seems to mostly hold true when I've looked at block locations for larger files).
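If you want to check this on your own cluster, HDFS will report the block locations for any file; the RFile path below is purely illustrative:

    hadoop fsck /accumulo/tables/1/default_tablet/F0000000.rf -files -blocks -locations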

In the example above, would all data about RowC (or A or B) go onto the same HDFS block or different HDFS blocks?

This depends on the block size configured for your tablets' files (dfs.block.size, or the Accumulo property table.file.blocksize if it's set). If the block size is the same as the tablet size, then obviously each tablet fits in a single HDFS block. If the block size is smaller than the tablet size, then it's pot luck as to whether a given row's data ends up in the same block or not.
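As a minimal sketch of setting that per-table property through the TableOperations API (the instance details, table name, and the 1G value are all placeholders):

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;

    //connection details are placeholders
    Instance instance = new ZooKeeperInstance("myInstance", "zkhost:2181");
    Connector connector = instance.getConnector("user", "password".getBytes());

    //set the RFile block size for one table; by default files fall back to dfs.block.size
    connector.tableOperations().setProperty("mytable", "table.file.blocksize", "1G");

The same change can be made in the Accumulo shell with config -t mytable -s table.file.blocksize=1G.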

When executing a MapReduce job, how many mappers would I get? (One per HDFS block? One per tablet? One per server?)

This depends on the ranges you give InputFormatBase.setRanges(Configuration, Collection<Range>).

If you scan the entire table (-inf -> +inf), then you'll get a number of mappers equal to the number of tablets (caveated by disableAutoAdjustRanges). If you define specific ranges, the behavior depends on whether you've called InputFormatBase.disableAutoAdjustRanges(Configuration) or not (see the job-setup sketch after this list):

  1. If you have called this method then you'll get one mapper per range defined. Importantly, if you have a range that starts in one tablet and ends in another, you'll get one mapper to process that entire range
  2. If you don't call this method, and you have a range that spans over tablets, then you'll get one mapper for each tablet the range covers
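To make that concrete, here's a minimal sketch of the job setup, assuming the 1.4-era API where AccumuloInputFormat is configured through the job's Configuration; the instance details and the range are placeholders:

    import java.util.Collections;
    import java.util.List;
    import org.apache.accumulo.core.Constants;
    import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
    import org.apache.accumulo.core.data.Range;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    Job job = new Job(new Configuration(), "read-mytable");
    job.setInputFormatClass(AccumuloInputFormat.class);

    //connection details are placeholders
    AccumuloInputFormat.setZooKeeperInstance(job.getConfiguration(), "myInstance", "zkhost:2181");
    AccumuloInputFormat.setInputInfo(job.getConfiguration(), "user", "password".getBytes(), "mytable", Constants.NO_AUTHS);

    //a single Range that may span several tablets...
    List<Range> ranges = Collections.singletonList(new Range(new Text("rowA"), new Text("rowC")));
    AccumuloInputFormat.setRanges(job.getConfiguration(), ranges);

    //...with auto-adjustment disabled, the whole Range goes to one mapper (case 1);
    //omit this call and you get one mapper per tablet the Range overlaps (case 2)
    AccumuloInputFormat.disableAutoAdjustRanges(job.getConfiguration());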


For writing to Accumulo (data ingest), it makes sense to run MapReduce jobs, where the mapper inputs are your input files on HDFS. You would basically follow this example from Accumulo documentation:

http://accumulo.apache.org/1.4/examples/mapred.html

(Section IV of this paper provides some more background on techniques for ingesting data into Accumulo: http://ieee-hpec.org/2012/index_htm_files/byun.pdf)
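As a rough sketch of the ingest pattern from that example (the class and column names here are my own, not the example's), the reduce side builds Mutations and emits them for AccumuloOutputFormat to write:

    import java.io.IOException;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    //the output value type Mutation is what AccumuloOutputFormat expects
    public static class IngestReducer extends Reducer<Text, LongWritable, Text, Mutation> {
        @Override
        public void reduce(Text row, Iterable<LongWritable> values, Context ctx)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values)
                sum += v.get();
            Mutation m = new Mutation(row);
            //column family/qualifier are illustrative
            m.put(new Text("stats"), new Text("count"), new Value(Long.toString(sum).getBytes()));
            ctx.write(null, m); //a null table name means "use the default table"
        }
    }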

For reading from Accumulo (data query), I would not use MapReduce. Accumulo/ZooKeeper will automatically distribute your query across tablet servers. If you're using rows as atomic records, use (or extend) the WholeRowIterator and launch a Scanner (or BatchScanner) on the range of rows you're interested in. A BatchScanner will query your tablet servers in parallel. You don't really want to access Accumulo data directly from HDFS or in MapReduce.

Here's some example code to help get you started:

    //some of the classes you'll need (in no particular order)...
    import java.util.Map.Entry;
    import java.util.SortedMap;
    import org.apache.accumulo.core.Constants;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.IteratorSetting;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Range;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.iterators.user.WholeRowIterator;
    import org.apache.hadoop.io.Text;

    //Accumulo client code...

    //Accumulo connection
    Instance instance = new ZooKeeperInstance( /* put your installation info here */ );
    Connector connector = instance.getConnector(username, password);

    //setup a Scanner or BatchScanner
    Scanner scanner = connector.createScanner(tableName, Constants.NO_AUTHS);
    Range range = new Range(new Text("rowA"), new Text("rowB"));
    scanner.setRange(range);

    //use a WholeRowIterator to keep rows atomic
    IteratorSetting itSettings = new IteratorSetting(1, WholeRowIterator.class);
    scanner.addScanIterator(itSettings);

    //now read some data!
    for (Entry<Key, Value> entry : scanner) {
        //decodeRow throws IOException, so handle or declare it
        SortedMap<Key, Value> wholeRow = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());
        //do something with your data!
    }
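And if you want the parallel reads mentioned above, a BatchScanner is a near drop-in replacement for the Scanner, reusing the connector from the snippet above (the thread count of 10 is arbitrary):

    import java.util.Collections;
    import org.apache.accumulo.core.client.BatchScanner;

    //a BatchScanner takes a collection of Ranges and queries the tablet servers in parallel
    BatchScanner batchScanner = connector.createBatchScanner(tableName, Constants.NO_AUTHS, 10);
    batchScanner.setRanges(Collections.singletonList(new Range(new Text("rowA"), new Text("rowB"))));
    batchScanner.addScanIterator(new IteratorSetting(1, WholeRowIterator.class));
    for (Entry<Key, Value> entry : batchScanner) {
        //same decodeRow(...) processing as above; note entries are not sorted across ranges
    }
    batchScanner.close();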