Chapter 1. Apache HBase Operational Management

Table of Contents

1.1. HBase Tools and Utilities
1.1.1. Health Checker
1.1.2. Driver
1.1.3. HBase hbck
1.1.4. HFile Tool
1.1.5. WAL Tools
1.1.6. Compression Tool
1.1.7. CopyTable
1.1.8. Export
1.1.9. Import
1.1.10. ImportTsv
1.1.11. CompleteBulkLoad
1.1.12. WALPlayer
1.1.13. RowCounter and CellCounter
1.1.14. mlockall
1.1.15. Offline Compaction Tool
1.2. Region Management
1.2.1. Major Compaction
1.2.2. Merge
1.3. Node Management
1.3.1. Node Decommission
1.3.2. Rolling Restart
1.3.3. Adding a New Node
1.4. HBase Metrics
1.4.1. Metric Setup
1.4.2. Warning To Ganglia Users
1.4.3. Most Important RegionServer Metrics
1.4.4. Other RegionServer Metrics
1.5. HBase Monitoring
1.5.1. Overview
1.5.2. Slow Query Log
1.6. Cluster Replication
1.7. HBase Backup
1.7.1. Full Shutdown Backup
1.7.2. Live Cluster Backup - Replication
1.7.3. Live Cluster Backup - CopyTable
1.7.4. Live Cluster Backup - Export
1.8. HBase Snapshots
1.8.1. Configuration
1.8.2. Take a Snapshot
1.8.3. Listing Snapshots
1.8.4. Deleting Snapshots
1.8.5. Clone a table from snapshot
1.8.6. Restore a snapshot
1.8.7. Snapshots operations and ACLs
1.8.8. Export to another cluster
1.9. Capacity Planning
1.9.1. Storage
1.9.2. Regions
1.10. Table Rename
This chapter covers the operational tools and practices required to run an Apache HBase cluster. The subject of operations is related to the topics of ???, ???, and ???, but is a distinct topic in itself.

1.1. HBase Tools and Utilities

Here we list HBase tools for administration, analysis, fixup, and debugging.

1.1.1. Health Checker

You can configure HBase to run a script periodically and, if it fails N times (configurable), have the server exit. See HBASE-7351 Periodic health check script for configurations and detail.
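
As a rough sketch, the health check added by HBASE-7351 is configured through hbase-site.xml properties along the following lines; the script path and values are illustrative, and you should verify the exact property names against your release's hbase-default.xml:

  <property>
    <name>hbase.node.health.script.location</name>
    <value>/etc/hbase/healthcheck.sh</value>
  </property>
  <property>
    <name>hbase.node.health.script.frequency</name>
    <value>10000</value>
  </property>
  <property>
    <name>hbase.node.health.failure.threshold</name>
    <value>3</value>
  </property>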

1.1.2. Driver

There is a Driver class executed by the HBase jar that can be used to invoke frequently accessed utilities. For example,

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar

... will return...

An example program must be given as the first argument.
Valid program names are:
  completebulkload: Complete a bulk data load.
  copytable: Export a table from local cluster to peer cluster
  export: Write table data to HDFS.
  import: Import data written by Export.
  importtsv: Import data in TSV format.
  rowcounter: Count rows in HBase table
  verifyrep: Compare the data from tables in two different clusters. WARNING: It doesn't work for incrementColumnValues'd cells since the timestamp is chan

... for allowable program names.

1.1.3. HBase hbck

An fsck for your HBase install

To run hbck against your HBase cluster run

$ ./bin/hbase hbck

At the end of the command's output it prints OK or INCONSISTENCY. If your cluster reports inconsistencies, pass -details to see more detail emitted. If inconsistencies are reported, run hbck a few times because the inconsistency may be transient (e.g. the cluster is starting up or a region is splitting). Passing -fix may correct the inconsistency (this is an experimental feature).

For more information, see ???.

1.1.4. HFile Tool

See ???.

1.1.5. WAL Tools

1.1.5.1. FSHLog tool

The main method on FSHLog offers manual split and dump facilities. Pass it WALs or the product of a split, i.e. the content of the recovered.edits directory.

You can get a textual dump of a WAL file content by doing the following:

 $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012 

The return code will be non-zero if there are issues with the file, so you can test the health of a file by redirecting STDOUT to /dev/null and testing the program's return code.

Similarly you can force a split of a log file directory by doing:

 $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --split hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/
1.1.5.1.1. HLogPrettyPrinter

HLogPrettyPrinter is a tool with configurable options to print the contents of an HLog.

1.1.6. Compression Tool

See ???.

1.1.7. CopyTable

CopyTable is a utility that can copy part or all of a table, either to the same cluster or to another cluster. The usage is as follows:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename

Options:

  • starttime Beginning of the time range. Without endtime, the range runs from starttime to forever.
  • endtime End of the time range.
  • versions Number of cell versions to copy.
  • new.name New table's name.
  • peer.adr Address of the peer cluster given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
  • families Comma-separated list of ColumnFamilies to copy.
  • all.cells Also copy delete markers and uncollected deleted cells (advanced option).

Args:

  • tablename Name of table to copy.

Example of copying 'TestTable' to a cluster that uses replication for a 1 hour window:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
--starttime=1265875194289 --endtime=1265878794289 \
--peer.adr=server1,server2,server3:2181:/hbase TestTable

Scanner Caching

Caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.

See Jonathan Hsieh's Online HBase Backups with CopyTable blog post for more on CopyTable.

1.1.8. Export

Export is a utility that will dump the contents of a table to HDFS in a sequence file. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
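
If you are writing your own MapReduce job against an HBase table, the same knob can be set in code when you configure the job. The following is a minimal sketch, not how Export itself is implemented; the class name, table name, and caching value are illustrative, and it assumes the 0.94-era org.apache.hadoop.hbase.mapreduce API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanCachingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The same setting the packaged tools read from the job configuration.
    conf.setInt("hbase.client.scanner.caching", 500);   // rows fetched per scanner RPC

    Job job = new Job(conf, "scan-caching-example");
    job.setJarByClass(ScanCachingExample.class);

    Scan scan = new Scan();
    scan.setCacheBlocks(false);  // a full scan should not churn the block cache
    // Scan.setCaching(n) is an equivalent per-scan alternative; when set it takes
    // precedence over the configuration value.

    // "myTable" is an illustrative table name; IdentityTableMapper just passes rows through.
    TableMapReduceUtil.initTableMapperJob("myTable", scan, IdentityTableMapper.class,
        ImmutableBytesWritable.class, Result.class, job);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}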

1.1.9. Import

Import is a utility that will load data that has been exported back into HBase. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

1.1.10. ImportTsv

ImportTsv is a utility that will load data in TSV format into HBase. It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the completebulkload utility.

To load data via Puts (i.e., non-bulk loading):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c <tablename> <hdfs-inputdir>

To generate StoreFiles for bulk-loading:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c -Dimporttsv.bulk.output=hdfs://storefile-outputdir <tablename> <hdfs-data-inputdir>

These generated StoreFiles can be loaded into HBase via Section 1.1.11, “CompleteBulkLoad”.

1.1.10.1. ImportTsv Options

Running ImportTsv with no arguments prints brief usage information:
Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>

Imports the given input directory of TSV data into the specified table.

The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data.

By default importtsv will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
  -Dimporttsv.bulk.output=/path/for/output
  Note: the target table will be created with default column family descriptors if it does not already exist.

Other options that may be specified with -D include:
  -Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
  '-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
  -Dimporttsv.timestamp=currentTimeAsLong - use the specified timestamp for the import
  -Dimporttsv.mapper.class=my.Mapper - A user-defined Mapper to use instead of org.apache.hadoop.hbase.mapreduce.TsvImporterMapper

1.1.10.2. ImportTsv Example

For example, assume that we are loading data into a table called 'datatsv' with a ColumnFamily called 'd' with two columns "c1" and "c2".

Assume that an input file exists as follows:

row1	c1	c2
row2	c1	c2
row3	c1	c2
row4	c1	c2
row5	c1	c2
row6	c1	c2
row7	c1	c2
row8	c1	c2
row9	c1	c2
row10	c1	c2

For ImportTsv to use this input file, the command line needs to look like this:

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
 

... and in this example the first column is the rowkey, which is why the HBASE_ROW_KEY is used. The second and third columns in the file will be imported as "d:c1" and "d:c2", respectively.

1.1.10.3. ImportTsv Warning

If you are preparing a lot of data for bulk loading, make sure the target HBase table is pre-split appropriately.

1.1.10.4. See Also

For more information about bulk-loading HFiles into HBase, see ???

1.1.11. CompleteBulkLoad

The completebulkload utility will move generated StoreFiles into an HBase table. This utility is often used in conjunction with output from Section 1.1.10, “ImportTsv”.

There are two ways to invoke this utility, with an explicit classname or via the driver:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles <hdfs://storefileoutput> <tablename>

... and via the Driver:

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar completebulkload <hdfs://storefileoutput> <tablename>

1.1.11.1. CompleteBulkLoad Warning

Data generated via MapReduce is often created with file permissions that are not compatible with the running HBase process. Assuming you're running HDFS with permissions enabled, those permissions will need to be updated before you run CompleteBulkLoad.
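
For example, assuming HDFS permissions are enabled and the HBase daemons run as the hbase user, one way to hand ownership of the generated files to HBase before loading is the following; the user, group, and output path (reusing the placeholder from the examples above) are illustrative:

$ hadoop fs -chown -R hbase:hbase hdfs://storefileoutput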

For more information about bulk-loading HFiles into HBase, see ???.

1.1.12. WALPlayer

WALPlayer is a utility to replay WAL files into HBase.

The WAL can be replayed for a set of tables or all tables, and a timerange can be provided (in milliseconds). The WAL is filtered to this set of tables. The output can optionally be mapped to another set of tables.

WALPlayer can also generate HFiles for later bulk importing; in that case only a single table can be specified, and no mapping is allowed.

Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer [options] <wal inputdir> <tables> [<tableMappings>]

For example:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /backuplogdir oldTable1,oldTable2 newTable1,newTable2

WALPlayer, by default, runs as a mapreduce job. To NOT run WALPlayer as a mapreduce job on your cluster, force it to run entirely in the local process by adding the flag -Dmapred.job.tracker=local on the command line.

1.1.13. RowCounter and CellCounter

RowCounter is a mapreduce job to count all the rows of a table. This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency. It will run the mapreduce job all in a single process, but it will run faster if you have a MapReduce cluster in place for it to exploit.

$ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename> [<column1> <column2>...]

Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.

HBase ships another diagnostic mapreduce job called CellCounter. Like RowCounter, it is a useful sanity check; unlike RowCounter, the statistics it gathers are more fine-grained and include:

  • Total number of rows in the table.
  • Total number of CFs across all rows.
  • Total qualifiers across all rows.
  • Total occurrence of each CF.
  • Total occurrence of each qualifier.
  • Total number of versions of each qualifier.

The program allows you to limit the scope of the run. Provide a row regex or prefix to limit the rows to analyze. Use hbase.mapreduce.scan.column.family to specify scanning a single column family.

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CellCounter <tablename> <outputDir> [regex or prefix]

Note: just like RowCounter, caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.

1.1.14. mlockall

It is possible to optionally pin your servers in physical memory making them less likely to be swapped out in oversubscribed environments by having the servers call mlockall on startup. See HBASE-4391 Add ability to start RS as root and call mlockall for how to build the optional library and have it run on startup.

1.1.15. Offline Compaction Tool

See the usage for the Compaction Tool. Run it like this:

$ ./bin/hbase org.apache.hadoop.hbase.regionserver.CompactionTool

1.2. Region Management

1.2.1. Major Compaction

Major compactions can be requested via the HBase shell or HBaseAdmin.majorCompact.
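
From the shell this is simply hbase> major_compact 'myTable'. From code, a minimal sketch using HBaseAdmin follows; the class and table names are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MajorCompactExample {
  public static void main(String[] args) throws Exception {
    // Picks up hbase-site.xml from the classpath for the cluster connection.
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // Asynchronously requests a major compaction of every region of the table.
      admin.majorCompact("myTable");
    } finally {
      admin.close();
    }
  }
}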

Note: major compactions do NOT do region merges. See ??? for more information about compactions.

1.2.2. Merge

Merge is a utility that can merge adjoining regions in the same table (see org.apache.hadoop.hbase.util.Merge).

$ bin/hbase org.apache.hadoop.hbase.util.Merge <tablename> <region1> <region2>

If you feel you have too many regions and want to consolidate them, Merge is the utility you need. Merge must be run while the cluster is down. See the O'Reilly HBase Book for an example of usage.

You will need to pass 3 parameters to this application. The first one is the table name. The second one is the fully qualified name of the first region to merge, like "table_name,\x0A,1342956111995.7cef47f192318ba7ccc75b1bbf27a82b.". The third one is the fully qualified name for the second region to merge.

Additionally, there is a Ruby script attached to HBASE-1621 for region merging.

1.3. Node Management

1.3.1. Node Decommission

You can stop an individual RegionServer by running the following script in the HBase directory on the particular node:

$ ./bin/hbase-daemon.sh stop regionserver

The RegionServer will first close all regions and then shut itself down. On shutdown, the RegionServer's ephemeral node in ZooKeeper will expire. The master will notice the RegionServer gone and will treat it as a 'crashed' server; it will reassign the regions the RegionServer was carrying.

Disable the Load Balancer before Decommissioning a node

If the load balancer runs while a node is shutting down, then there could be contention between the Load Balancer and the Master's recovery of the just decommissioned RegionServer. Avoid any problems by disabling the balancer first. See Load Balancer below.

A downside to the above stop of a RegionServer is that regions could be offline for a good period of time. Regions are closed in order. If there are many regions on the server, the first region to close may not be back online until all regions close and the master notices the RegionServer's znode is gone. In Apache HBase 0.90.2, we added a facility for having a node gradually shed its load and then shut itself down: the graceful_stop.sh script. Here is its usage:

$ ./bin/graceful_stop.sh
Usage: graceful_stop.sh [--config <conf-dir>] [--restart] [--reload] [--thrift] [--rest] <hostname>
 thrift      If we should stop/start thrift before/after the hbase stop/start
 rest        If we should stop/start rest before/after the hbase stop/start
 restart     If we should restart after graceful stop
 reload      Move offloaded regions back on to the stopped server
 debug       Move offloaded regions back on to the stopped server
 hostname    Hostname of server we are to stop

To decommission a loaded RegionServer, run the following:

$ ./bin/graceful_stop.sh HOSTNAME

where HOSTNAME is the host carrying the RegionServer you would decommission.

On HOSTNAME

The HOSTNAME passed to graceful_stop.sh must match the hostname that HBase is using to identify RegionServers. Check the list of RegionServers in the master UI for how HBase is referring to servers. It's usually a hostname but can also be an FQDN. Whatever HBase is using, this is what you should pass to the graceful_stop.sh decommission script. If you pass IPs, the script is not yet smart enough to make a hostname (or FQDN) of them, so it will fail when it checks whether the server is currently running; the graceful unloading of regions will not run.

The graceful_stop.sh script will move the regions off the decommissioned RegionServer one at a time to minimize region churn. It will verify the region deployed in the new location before it moves the next region, and so on until the decommissioned server is carrying zero regions. At this point, graceful_stop.sh tells the RegionServer to stop. The master will notice the RegionServer gone, but all regions will have already been redeployed, and because the RegionServer went down cleanly, there will be no WAL logs to split.

Load Balancer

It is assumed that the Region Load Balancer is disabled while the graceful_stop script runs (otherwise the balancer and the decommission script will end up fighting over region deployments). Use the shell to disable the balancer:

hbase(main):001:0> balance_switch false
true
0 row(s) in 0.3590 seconds

This turns the balancer OFF. To reenable, do:

hbase(main):001:0> balance_switch true
false
0 row(s) in 0.3590 seconds

The graceful_stop.sh script will check the balancer and, if enabled, will turn it off before it goes to work. If it exits prematurely because of an error, it will not have reset the balancer. Hence, it is better to manage the balancer yourself, reenabling it after you are done with graceful_stop.

1.3.1.1. Decommissioning several Regions Servers concurrently

If you have a large cluster, you may want to decommission more than one machine at a time by gracefully stopping multiple RegionServers concurrently. To gracefully drain multiple RegionServers at the same time, RegionServers can be put into a "draining" state. This is done by marking a RegionServer as a draining node by creating an entry in ZooKeeper under the hbase_root/draining znode. This znode has the format

name,port,startcode

just like the regionserver entries under hbase_root/rs znode.

Without this facility, decommissioning multiple nodes may be non-optimal because regions that are being drained from one RegionServer may be moved to other RegionServers that are also draining. Marking RegionServers to be in the draining state prevents this from happening[1].
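
As a sketch, one way to create such an entry is with the ZooKeeper command-line client that ships with HBase; the server name below is illustrative, and the parent znode is assumed to be the default /hbase (check zookeeper.znode.parent for your install):

$ ./bin/hbase zkcli
create /hbase/draining/rs1.example.com,60020,1325350957352 ""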

1.3.1.2. Bad or Failing Disk

It is good having ??? set if you have a decent number of disks per machine for the case where a disk plain dies. But usually disks do the "John Wayne" -- i.e. take a while to go down spewing errors in dmesg -- or for some reason, run much slower than their companions. In this case you want to decommission the disk. You have two options. You can decommission the datanode, or (less disruptive in that only the bad disk's data will be rereplicated) you can stop the datanode, unmount the bad volume (you can't umount a volume while the datanode is using it), and then restart the datanode (presuming you have set dfs.datanode.failed.volumes.tolerated > 0). The RegionServer will throw some errors in its logs as it recalibrates where to get its data from -- it will likely roll its WAL log too -- but in general, aside from some latency spikes, it should keep on chugging.

Short Circuit Reads

If you are doing short-circuit reads, you will have to move the regions off the RegionServer before you stop the datanode; with short-circuit reads, even though the files are chmod'd so the RegionServer cannot have access, because it already has the files open it will be able to keep reading the file blocks from the bad disk even though the datanode is down. Move the regions back after you restart the datanode.

1.3.2. Rolling Restart

You can also ask this script to restart a RegionServer after the shutdown AND move its old regions back into place. The latter you might do to retain data locality. A primitive rolling restart might be effected by running something like the following:

$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
            

Tail the output of /tmp/log.txt to follow the script's progress. The above does RegionServers only. The script will also disable the load balancer before moving the regions. You'd need to do the master update separately. Do it before you run the above script. Here is a pseudo-script for how you might craft a rolling restart script:

  1. Untar your release, make sure of its configuration and then rsync it across the cluster. If this is 0.90.2, patch it with HBASE-3744 and HBASE-3756.

  2. Run hbck to ensure the cluster is consistent

    $ ./bin/hbase hbck

    Effect repairs if inconsistent.

  3. Restart the Master:

    $ ./bin/hbase-daemon.sh stop master; ./bin/hbase-daemon.sh start master

  4. Run the graceful_stop.sh script per RegionServer. For example:

    $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
                

    If you are running thrift or rest servers on the RegionServer, pass --thrift or --rest options (See usage for graceful_stop.sh script).

  5. Restart the Master again. This will clear out the dead servers list and reenable the balancer.

  6. Run hbck to ensure the cluster is consistent.

It is important to drain HBase regions slowly when restarting regionservers. Otherwise, multiple regions go offline simultaneously as they are re-assigned to other nodes. Depending on your usage patterns, this might not be desirable.

1.3.3. Adding a New Node

Adding a new RegionServer in HBase is essentially free; you simply start it like this:

$ ./bin/hbase-daemon.sh start regionserver

and it will register itself with the master. Ideally you also started a DataNode on the same machine so that the RS can eventually start to have local files. If you rely on ssh to start your daemons, don't forget to add the new hostname in conf/regionservers on the master.

At this point the region server isn't serving data because no regions have moved to it yet. If the balancer is enabled, it will start moving regions to the new RS. On a small/medium cluster this can have a very adverse effect on latency as a lot of regions will be offline at the same time. It is thus recommended to disable the balancer the same way it's done when decommissioning a node and move the regions manually (or even better, using a script that moves them one by one).

The moved regions will all have 0% locality and won't have any blocks in cache, so the RegionServer will have to use the network to serve requests. Apart from resulting in higher latency, it may also saturate your network card's capacity. For practical purposes, consider that a standard 1GigE NIC won't be able to read much more than 100MB/s. In this case, or if you are in an OLAP environment and require locality, then it is recommended to major compact the moved regions.

1.4. HBase Metrics

1.4.1. Metric Setup

See Metrics for an introduction and for how to enable metrics emission. This is still valid for HBase 0.94.x.

For HBase 0.95.x and up, see http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html

1.4.2. Warning To Ganglia Users

Warning to Ganglia Users: by default, HBase will emit a LOT of metrics per RegionServer which may swamp your installation. Options include either increasing Ganglia server capacity, or configuring HBase to emit fewer metrics.

1.4.3. Most Important RegionServer Metrics

1.4.3.1. blockCacheExpressCachingRatio (formerly blockCacheHitCachingRatio)

Block cache hit caching ratio (0 to 100). The cache-hit ratio for reads configured to look in the cache (i.e., cacheBlocks=true).

1.4.3.2. callQueueLength

Point in time length of the RegionServer call queue. If requests arrive faster than the RegionServer handlers can process them they will back up in the callQueue.

1.4.3.3. compactionQueueLength (formerly compactionQueueSize)

Point in time length of the compaction queue. This is the number of Stores in the RegionServer that have been targeted for compaction.

1.4.3.4. flushQueueSize

Point in time number of enqueued regions in the MemStore awaiting flush.

1.4.3.5. hdfsBlocksLocalityIndex

Point in time percentage of HDFS blocks that are local to this RegionServer. The higher the better.

1.4.3.6. memstoreSizeMB

Point in time sum of all the memstore sizes in this RegionServer (MB). Watch for this nearing or exceeding the configured high-watermark for MemStore memory in the RegionServer.

1.4.3.7. numberOfOnlineRegions

Point in time number of regions served by the RegionServer. This is an important metric to track for RegionServer-Region density.

1.4.3.8. readRequestsCount

Number of read requests for this RegionServer since startup. Note: this is a 32-bit integer and can roll.

1.4.3.9. slowHLogAppendCount

Number of slow HLog append writes for this RegionServer since startup, where "slow" is > 1 second. This is a good "canary" metric for HDFS.

1.4.3.10. usedHeapMB

Point in time amount of memory used by the RegionServer (MB).

1.4.3.11. writeRequestsCount

Number of write requests for this RegionServer since startup. Note: this is a 32-bit integer and can roll.

1.4.4. Other RegionServer Metrics

1.4.4.1. blockCacheCount

Point in time block cache item count in memory. This is the number of blocks of StoreFiles (HFiles) in the cache.

1.4.4.2. blockCacheEvictedCount

Number of blocks that had to be evicted from the block cache due to heap size constraints by RegionServer since startup.

1.4.4.3. blockCacheFreeMB

Point in time block cache memory available (MB).

1.4.4.4. blockCacheHitCount

Number of blocks of StoreFiles (HFiles) read from the cache by RegionServer since startup.

1.4.4.5. blockCacheHitRatio

Block cache hit ratio (0 to 100) from RegionServer startup. Includes all read requests, although those with cacheBlocks=false will always read from disk and be counted as a "cache miss", which means that full-scan MapReduce jobs can affect this metric significantly.

1.4.4.6. blockCacheMissCount

Number of blocks of StoreFiles (HFiles) requested but not read from the cache from RegionServer startup.

1.4.4.7. blockCacheSizeMB

Point in time block cache size in memory (MB). i.e., memory in use by the BlockCache

1.4.4.8. fsPreadLatency*

There are several filesystem positional read latency (ms) metrics, all measured from RegionServer startup.

1.4.4.9. fsReadLatency*

There are several filesystem read latency (ms) metrics, all measured from RegionServer startup. The issue with interpretation is that ALL reads go into this metric (e.g., single-record Gets, full table Scans), including reads required for compactions. This metric is only interesting "over time" when comparing major releases of HBase or your own code.

1.4.4.10. fsWriteLatency*

There are several filesystem write latency (ms) metrics, all measured from RegionServer startup. The issue with interpretation is that ALL writes go into this metric (e.g., single-record Puts, full table re-writes due to compaction). This metric is only interesting "over time" when comparing major releases of HBase or your own code.

1.4.4.11. NumberOfStores

Point in time number of Stores open on the RegionServer. A Store corresponds to a ColumnFamily. For example, if a table (which contains the column family) has 3 regions on a RegionServer, there will be 3 stores open for that column family.

1.4.4.12. NumberOfStorefiles

Point in time number of StoreFiles open on the RegionServer. A store may have more than one StoreFile (HFile).

1.4.4.13. requestsPerSecond

Point in time number of read and write requests. Requests correspond to RegionServer RPC calls, thus a single Get will result in 1 request, but a Scan with caching set to 1000 will result in 1 request for each 'next' call (i.e., not each row). A bulk-load request will constitute 1 request per HFile. This metric is less interesting than readRequestsCount and writeRequestsCount in terms of measuring activity due to this metric being periodic.

1.4.4.14. storeFileIndexSizeMB

Point in time sum of all the StoreFile index sizes in this RegionServer (MB)

1.5. HBase Monitoring

1.5.1. Overview

The following metrics are arguably the most important to monitor for each RegionServer for "macro monitoring", preferably with a system like OpenTSDB. If your cluster is having performance issues it's likely that you'll see something unusual with this group.

HBase:

  • See the RegionServer metrics in Section 1.4.3, “Most Important RegionServer Metrics”

OS:

  • IO Wait
  • User CPU

Java:

  • GC

For more information on HBase metrics, see Section 1.4, “HBase Metrics”.

1.5.2. Slow Query Log

The HBase slow query log consists of parseable JSON structures describing the properties of those client operations (Gets, Puts, Deletes, etc.) that either took too long to run, or produced too much output. The thresholds for "too long to run" and "too much output" are configurable, as described below. The output is produced inline in the main region server logs so that it is easy to discover further details from context with other logged events. It is also prepended with identifying tags (responseTooSlow), (responseTooLarge), (operationTooSlow), and (operationTooLarge) in order to enable easy filtering with grep, in case the user desires to see only slow queries.

1.5.2.1. Configuration

There are two configuration knobs that can be used to adjust the thresholds for when queries are logged; an example hbase-site.xml snippet follows the list.

  • hbase.ipc.warn.response.time Maximum number of milliseconds that a query can be run without being logged. Defaults to 10000, or 10 seconds. Can be set to -1 to disable logging by time.
  • hbase.ipc.warn.response.size Maximum byte size of response that a query can return without being logged. Defaults to 100 megabytes. Can be set to -1 to disable logging by size.
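
For example, to log queries that run longer than 5 seconds or return more than 50 megabytes, you could add something like the following to hbase-site.xml (the values are illustrative):

  <property>
    <name>hbase.ipc.warn.response.time</name>
    <value>5000</value>
  </property>
  <property>
    <name>hbase.ipc.warn.response.size</name>
    <value>52428800</value>
  </property>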

1.5.2.2. Metrics

The slow query log exposes two metrics to JMX.

  • hadoop.regionserver_rpc_slowResponse a global metric reflecting the durations of all responses that triggered logging.
  • hadoop.regionserver_rpc_methodName.aboveOneSec A metric reflecting the durations of all responses that lasted for more than one second.

1.5.2.3. Output

The output is tagged with operation e.g. (operationTooSlow) if the call was a client operation, such as a Put, Get, or Delete, which we expose detailed fingerprint information for. If not, it is tagged (responseTooSlow) and still produces parseable JSON output, but with less verbose information solely regarding its duration and size in the RPC itself. TooLarge is substituted for TooSlow if the response size triggered the logging, with TooLarge appearing even in the case that both size and duration triggered logging.

1.5.2.4. Example

2011-09-08 10:01:25,824 WARN org.apache.hadoop.ipc.HBaseServer: (operationTooSlow): {"tables":{"riley2":{"puts":[{"totalColumns":11,"families":{"actions":[{"timestamp":1315501284459,"qualifier":"0","vlen":9667580},{"timestamp":1315501284459,"qualifier":"1","vlen":10122412},{"timestamp":1315501284459,"qualifier":"2","vlen":11104617},{"timestamp":1315501284459,"qualifier":"3","vlen":13430635}]},"row":"cfcd208495d565ef66e7dff9f98764da:0"}],"families":["actions"]}},"processingtimems":956,"client":"10.47.34.63:33623","starttimems":1315501284456,"queuetimems":0,"totalPuts":1,"class":"HRegionServer","responsesize":0,"method":"multiPut"}

Note that everything inside the "tables" structure is output produced by MultiPut's fingerprint, while the rest of the information is RPC-specific, such as processing time and client IP/port. Other client operations follow the same pattern and the same general structure, with necessary differences due to the nature of the individual operations. In the case that the call is not a client operation, that detailed fingerprint information will be completely absent.

This particular example would indicate that the likely cause of slowness is simply a very large (on the order of 100MB) multiput, as we can tell by the "vlen," or value length, fields of each put in the multiPut.

1.6. Cluster Replication

See Cluster Replication.

1.7. HBase Backup

There are two broad strategies for performing HBase backups: backing up with a full cluster shutdown, and backing up on a live cluster. Each approach has pros and cons.

For additional information, see HBase Backup Options over on the Sematext Blog.

1.7.1. Full Shutdown Backup

Some environments can tolerate a periodic full shutdown of their HBase cluster, for example if it is being used as a back-end analytic capacity and not serving front-end web-pages. The benefit is that the NameNode, Master, and RegionServers are down, so there is no chance of missing any in-flight changes to either StoreFiles or metadata. The obvious con is that the cluster is down. The steps include:

1.7.1.1. Stop HBase

1.7.1.2. Distcp

Distcp can be used to copy the contents of the HBase directory in HDFS either to another directory on the same cluster, or to a different cluster.

Note: Distcp works in this situation because the cluster is down and there are no in-flight edits to files. Distcp-ing of files in the HBase directory is not generally recommended on a live cluster.
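
As a sketch, with HBase fully stopped, a backup of the HBase root directory to another directory on the same cluster might look like the following; the NameNode address, root directory, and backup path are illustrative:

$ hadoop distcp hdfs://namenode.example.org:8020/hbase hdfs://namenode.example.org:8020/hbase-backup-20130601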

1.7.1.3. Restore (if needed)

The backup of the hbase directory is copied from HDFS onto the 'real' hbase directory via distcp. The act of copying these files creates new HDFS metadata, which is why a restore of the NameNode edits from the time of the HBase backup isn't required for this kind of restore: it is a restore (via distcp) of a specific HDFS directory (i.e., the HBase part), not of the entire HDFS file-system.

1.7.2. Live Cluster Backup - Replication

This approach assumes that there is a second cluster. See the HBase page on replication for more information.

1.7.3. Live Cluster Backup - CopyTable

The Section 1.1.7, “CopyTable” utility could either be used to copy data from one table to another on the same cluster, or to copy data to another table on another cluster.

Since the cluster is up, there is a risk that edits could be missed in the copy process.

1.7.4. Live Cluster Backup - Export

The Section 1.1.8, “Export” approach dumps the content of a table to HDFS on the same cluster. To restore the data, the Section 1.1.9, “Import” utility would be used.

Since the cluster is up, there is a risk that edits could be missed in the export process.

1.8. HBase Snapshots

HBase Snapshots allow you to take a snapshot of a table without much impact on RegionServers. Snapshot, clone, and restore operations don't involve data copying. Also, exporting a snapshot to another cluster has no impact on the RegionServers.

Prior to version 0.94.6, the only way to back up or clone a table was to use CopyTable/Export, or to copy all the hfiles in HDFS after disabling the table. The disadvantages of these methods are that you can degrade RegionServer performance (CopyTable/Export) or you need to disable the table, which means no reads or writes; this is usually unacceptable.

1.8.1. Configuration

To turn on snapshot support, just set the hbase.snapshot.enabled property to true. (Snapshots are enabled by default in 0.95+ and off by default in 0.94.6+.)

  <property>
    <name>hbase.snapshot.enabled</name>
    <value>true</value>
  </property>
        

1.8.2. Take a Snapshot

You can take a snapshot of a table regardless of whether it is enabled or disabled. The snapshot operation doesn't involve any data copying.

    $ ./bin/hbase shell
    hbase> snapshot 'myTable', 'myTableSnapshot-122112'
        

1.8.3. Listing Snapshots

List all snapshots taken (by printing the names and relative information).

    $ ./bin/hbase shell
    hbase> list_snapshots
        

1.8.4. Deleting Snapshots

You can remove a snapshot, and the files retained for that snapshot will be removed if no longer needed.

    $ ./bin/hbase shell
    hbase> delete_snapshot 'myTableSnapshot-122112'
        

1.8.5. Clone a table from snapshot

From a snapshot you can create a new table (clone operation) with the same data that you had when the snapshot was taken. The clone operation doesn't involve data copies, and a change to the cloned table doesn't impact the snapshot or the original table.

    $ ./bin/hbase shell
    hbase> clone_snapshot 'myTableSnapshot-122112', 'myNewTestTable'
        

1.8.6. Restore a snapshot

The restore operation requires the table to be disabled, and the table will be restored to the state at the time when the snapshot was taken, changing both data and schema if required.

    $ ./bin/hbase shell
    hbase> disable 'myTable'
    hbase> restore_snapshot 'myTableSnapshot-122112'
        

Note

Since Replication works at log level and snapshots at file-system level, after a restore, the replicas will be in a different state from the master. If you want to use restore, you need to stop replication and redo the bootstrap.

In case of partial data loss due to a misbehaving client, instead of a full restore (which requires the table to be disabled), you can clone the table from the snapshot and use a Map-Reduce job to copy the data that you need from the clone to the main table.

1.8.7. Snapshots operations and ACLs

If you are using security with the AccessController Coprocessor (See ???), only a global administrator can take, clone, or restore a snapshot, and these actions do not capture the ACL rights. This means that restoring a table preserves the ACL rights of the existing table, while cloning a table creates a new table that has no ACL rights until the administrator adds them.

1.8.8. Export to another cluster

The ExportSnapshot tool copies all the data related to a snapshot (hfiles, logs, snapshot metadata) to another cluster. The tool executes a Map-Reduce job, similar to distcp, to copy files between the two clusters, and since it works at file-system level the hbase cluster does not have to be online.

To copy a snapshot called MySnapshot to an HBase cluster srv2 (hdfs://srv2:8082/hbase) using 16 mappers:

$ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySnapshot -copy-to hdfs://srv2:8082/hbase -mappers 16

1.9. Capacity Planning

1.9.1. Storage

A common question for HBase administrators is estimating how much storage will be required for an HBase cluster. There are several aspects to consider, the most important of which is what data will be loaded into the cluster. Start with a solid understanding of how HBase handles data internally (KeyValue).

1.9.1.1. KeyValue

HBase storage will be dominated by KeyValues. See ??? and ??? for how HBase stores data internally.

It is critical to understand that there is a KeyValue instance for every attribute stored in a row, and that the rowkey length, ColumnFamily name length, and attribute lengths will drive the size of the database more than any other factor.
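
As a rough, illustrative sketch (ignoring compression and data block encoding): each KeyValue carries roughly 20 bytes of fixed overhead (key length, value length, row length, family length, timestamp, and key type) in addition to the rowkey, family, qualifier, and value bytes. A cell with a 10-byte rowkey, a 1-byte family name, a 2-byte qualifier, and an 8-byte value therefore occupies roughly

    20 + 10 + 1 + 2 + 8 = 41 bytes

per stored version, and about 3 x 41 = 123 bytes of raw disk once the default HDFS replication factor of 3 is applied (see Section 1.9.1.3, “HDFS Block Replication”). Multiply by the expected number of cells (rows x attributes x versions retained) for a first-order storage estimate.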

1.9.1.2. StoreFiles and Blocks

KeyValue instances are aggregated into blocks, and the blocksize is configurable on a per-ColumnFamily basis. Blocks are aggregated into StoreFiles. See ???.

1.9.1.3. HDFS Block Replication

Because HBase runs on top of HDFS, factor HDFS block replication into storage calculations.

1.9.2. Regions

Another common question for HBase administrators is determining the right number of regions per RegionServer. This affects both storage and hardware planning. See ???.

1.10. Table Rename

In versions 0.90.x of HBase and earlier, we had a simple script that would rename the hdfs table directory and then edit the .META. table, replacing all mentions of the old table name with the new. The script was called ./bin/rename_table.rb. The script was deprecated and removed, mostly because it was unmaintained and the operation it performed was brutal.

As of HBase 0.94.x, you can use the snapshot facility to rename a table. Here is how you would do it using the hbase shell:

hbase shell> disable 'tableName'
hbase shell> snapshot 'tableName', 'tableSnapshot'
hbase shell> clone_snapshot 'tableSnapshot', 'newTableName'
hbase shell> delete_snapshot 'tableSnapshot'
hbase shell> drop 'tableName'

or in code it would be as follows:

void rename(HBaseAdmin admin, String oldTableName, String newTableName)
    throws IOException, InterruptedException {
    String snapshotName = randomName();
    admin.disableTable(oldTableName);                  // block writes during the rename; also required to drop the table
    admin.snapshot(snapshotName, oldTableName);        // take a snapshot of the old table
    admin.cloneSnapshot(snapshotName, newTableName);   // clone it as the new table
    admin.deleteSnapshot(snapshotName);                // clean up the temporary snapshot
    admin.deleteTable(oldTableName);                   // drop the old table
}



[1] See this blog post for more details.
