This chapter is the Not-So-Quick start guide to Apache HBase configuration. It goes over system requirements, Hadoop setup, the different Apache HBase run modes, and the various configurations in HBase. Please read this chapter carefully. At a minimum, ensure that all of the requirements in Section 1.1, “Basic Prerequisites” have been satisfied. Failure to do so will cause you (and us) grief debugging strange errors and/or data loss.
Apache HBase uses the same configuration system as Apache Hadoop.
To configure a deploy, edit a file of environment variables
in conf/hbase-env.sh -- this configuration
is used mostly by the launcher shell scripts getting the cluster
off the ground -- and then add configuration to an XML file to
do things like override HBase defaults, tell HBase what Filesystem to
use, and the location of the ZooKeeper ensemble [1].
When running in distributed mode, after you make
an edit to an HBase configuration, make sure you copy the
content of the conf directory to
all nodes of the cluster. HBase will not do this for you.
Use rsync.
This section lists required services and some required system configuration.
Just like Hadoop, HBase requires at least Java 6 from Oracle. Java 7 should work and can even be faster than Java 6, but almost all testing to this point has been done on Java 6.
ssh must be installed and sshd must be running if you want to use Hadoop's scripts to manage remote Hadoop and HBase daemons. You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login"). If on Mac OS X, see the section SSH: Setting up Remote Desktop and Enabling Self-Login on the Hadoop wiki.
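One common way to set up passwordless login is sketched below (assuming an RSA key and the default key locations; adapt to your site's security policy):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # generate a key with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # authorize the key for login to this host
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                                   # should log in without prompting for a password
Copy the public key into authorized_keys on every other node as well.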
HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution must work in versions of HBase previous to 0.92.0 [2].
If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.
If this is insufficient, you can set
hbase.regionserver.dns.interface to indicate the
primary interface. This only works if your cluster configuration is
consistent and every host has the same network interface
configuration.
Another alternative is setting
hbase.regionserver.dns.nameserver to choose a
different nameserver than the system wide default.
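For example, a minimal hbase-site.xml sketch pinning the reporting interface (the interface name eth0 below is only an illustration):
<property>
  <name>hbase.regionserver.dns.interface</name>
  <value>eth0</value>
</property>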
HBase expects the loopback IP address to be 127.0.0.1. See Section 1.1.2.3, “Loopback IP”
The clocks on cluster members should be in basic alignment. Some skew is tolerable, but wild skew could generate odd behaviors. Run NTP on your cluster, or an equivalent.
If you are having problems querying data, or "weird" cluster operations, check system time!
Apache HBase is a database. It uses a lot of files all at the same time. The default ulimit -n -- i.e. the user file limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you to ???. You may also notice errors such as...
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)
You should also up the hbase user's nproc setting; under load, a low nproc setting could manifest as OutOfMemoryError [3] [4].
To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line of its logs. Ensure it is correct. [5]
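As a quick sanity check, compare what the shell reports for the user that runs HBase against the value printed at the top of the HBase log (a sketch; run these as the HBase user):
$ ulimit -n   # current file descriptor limit
$ ulimit -u   # current nproc (process) limit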
If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add
a line like:
hadoop - nofile 32768
Replace hadoop with whatever user is running
Hadoop and HBase. If you have separate users, you will need 2
entries, one for each user. In the same file set nproc hard and soft
limits. For example:
hadoop soft/hard nproc 32000
In the file /etc/pam.d/common-session add
as the last line in the file:
session required pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be
applied.
Don't forget to log out and back in again for the changes to take effect!
Apache HBase has been little tested running on Windows. Running a production install of HBase on top of Windows is not recommended.
If you are running HBase on Windows, you must install Cygwin to have a *nix-like environment for the shell scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up latest fixes figured by Windows users.
Selecting a Hadoop version is critical for your HBase deployment. The table below shows some information about which versions of Hadoop are supported by various HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support
Table 1.1. Hadoop version support matrix
| | HBase-0.92.x | HBase-0.94.x | HBase-0.95 |
|---|---|---|---|
| Hadoop-0.20.205 | S | X | X |
| Hadoop-0.22.x | S | X | X |
| Hadoop-1.0.0-1.0.2[a] | S | S | X |
| Hadoop-1.0.3+ | S | S | S |
| Hadoop-1.1.x | NT | S | S |
| Hadoop-0.23.x | X | S | NT |
| Hadoop-2.x | X | S | S |
[a] HBase requires hadoop 1.0.3 at a minimum; there is an issue where we cannot find KerberosUtil compiling against earlier versions of Hadoop.
Where: S = supported and tested, X = not supported, NT = it should run, but not tested enough.
Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its lib directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is critical that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations, but often everything just looks like it is hung up.
HBase 0.92 and 0.94 versions can work with Hadoop versions 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see the top level pom.xml).
Apache HBase 0.96.0 requires Apache Hadoop 1.x at a minimum, and it can run equally well on hadoop-2.0. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop [6].
HBase will lose data unless it is running on an HDFS that has a durable
sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync
[7]. Sync has to be explicitly enabled by setting
dfs.support.append equal
to true on both the client side -- in hbase-site.xml
-- and on the server side in hdfs-site.xml (the sync
facility HBase needs is a subset of the append code path).
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>
You will have to restart your cluster after making this edit. Ignore the chicken-little
comment you'll find in the hdfs-default.xml in the
description for the dfs.support.append configuration.
Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version. If you want to read more about how to setup Secure HBase, see ???.
A Hadoop HDFS datanode has an upper bound on the number of
files that it will serve at any one time. The upper bound parameter is
called xcievers (yes, this is misspelled). Again,
before doing any loading, make sure you have configured Hadoop's
conf/hdfs-site.xml, setting the
xcievers value to at least the following:
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
Be sure to restart your HDFS after making the above configuration.
Not having this configuration in place makes for strange-looking
failures. Eventually you'll see a complaint in the datanode logs
about the xcievers limit being exceeded, but on the run up to this, one
manifestation is complaints about missing blocks. For example:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
java.io.IOException: No live nodes contain current block. Will get new
block locations from namenode and retry...
[8]
See also ???
HBase has two run modes: Section 1.2.1, “Standalone HBase” and Section 1.2.2, “Distributed”. Out of the box, HBase runs in
standalone mode. To set up a distributed deploy, you will need to
configure HBase by editing files in the HBase conf
directory.
Whatever your mode, you will need to edit
conf/hbase-env.sh to tell HBase which
java to use. In this file you set HBase environment
variables such as the heapsize and other options for the
JVM, the preferred location for log files,
etc. Set JAVA_HOME to point at the root of your
java install.
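A minimal sketch of the relevant line in conf/hbase-env.sh (the JDK path below is only an example; point it at your own install):
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-6-sun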
This is the default mode. Standalone mode is what is described in the ??? section. In standalone mode, HBase does not use HDFS -- it uses the local filesystem instead -- and it runs all HBase daemons and a local ZooKeeper all up in the same JVM. ZooKeeper binds to a well-known port so clients may talk to HBase.
Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a. pseudo-distributed -- and fully-distributed where the daemons are spread across all nodes in the cluster [9].
Distributed modes require an instance of the Hadoop Distributed File System (HDFS). See the Hadoop requirements and instructions for how to set up a HDFS. Before proceeding, ensure you have an appropriate, working HDFS.
Below we describe the different distributed setups. Starting, verification and exploration of your install, whether a pseudo-distributed or fully-distributed configuration is described in a section that follows, Section 1.2.3, “Running and Confirming Your Installation”. The same verification script applies to both deploy types.
A pseudo-distributed mode is simply a distributed mode run on a single host. Use this configuration for testing and prototyping on HBase. Do not use this configuration for production nor for evaluating HBase performance.
First, setup your HDFS in pseudo-distributed mode.
Next, configure HBase. Below is an example conf/hbase-site.xml.
This is the file into
which you add local customizations and overrides for
Section 1.3.1.1, “HBase Default Configuration” and Section 1.2.2.2.3, “HDFS Client Configuration”.
Note that the hbase.rootdir property points to the
local HDFS instance.
Now skip to Section 1.2.3, “Running and Confirming Your Installation” for how to start and verify your pseudo-distributed install. [10]
Let HBase create the hbase.rootdir
directory. If you don't, you'll get a warning saying HBase needs a
migration run because the directory is missing files expected by
HBase (it'll create them if you let it).
Below is a sample pseudo-distributed file for the node h-24-30.sfo.stumble.net.
hbase-site.xml
<configuration>
...
<property>
<name>hbase.rootdir</name>
<value>hdfs://h-24-30.sfo.stumble.net:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>h-24-30.sfo.stumble.net</value>
</property>
...
</configuration>
To start up the initial HBase cluster...
% bin/start-hbase.sh
To start up an extra backup master on the same server, run...
% bin/local-master-backup.sh start 1
... the '1' means use ports 60001 & 60011, and this backup master's logfile will be at logs/hbase-${USER}-1-master-${HOSTNAME}.log.
To start up multiple backup masters, run...
% bin/local-master-backup.sh start 2 3
You can start up to 9 backup masters (10 total).
To start up more regionservers...
% bin/local-regionservers.sh start 1
where '1' means use ports 60201 & 60301 and its logfile will be at logs/hbase-${USER}-1-regionserver-${HOSTNAME}.log.
To add 4 more regionservers in addition to the one you just started, run...
% bin/local-regionservers.sh start 2 3 4 5
This supports up to 99 extra regionservers (100 total).
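The same scripts accept a stop argument for shutting these extra local daemons back down (a sketch, assuming the offsets used above):
% bin/local-regionservers.sh stop 1
% bin/local-master-backup.sh stop 1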
For running a fully-distributed operation on more than one
host, make the following configurations. In
hbase-site.xml, add the property
hbase.cluster.distributed and set it to
true and point the HBase
hbase.rootdir at the appropriate HDFS NameNode
and location in HDFS where you would like HBase to write data. For
example, if your namenode were running at namenode.example.org on
port 8020 and you wanted to home your HBase in HDFS at
/hbase, make the following
configuration.
<configuration>
...
<property>
<name>hbase.rootdir</name>
<value>hdfs://namenode.example.org:8020/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
...
</configuration>
In addition, a fully-distributed mode requires that you
modify conf/regionservers. The
Section 1.4.1.2, “regionservers” file
lists all hosts that you would have running
HRegionServers, one host per line (This
file in HBase is like the Hadoop slaves
file). All servers listed in this file will be started and stopped
when HBase cluster start or stop is run.
See section ??? for ZooKeeper setup for HBase.
Of note, if you have made HDFS client configuration on your Hadoop cluster -- i.e. configuration you want HDFS clients to use as opposed to server-side configurations -- HBase will not see this configuration unless you do one of the following:
Add a pointer to your HADOOP_CONF_DIR
to the HBASE_CLASSPATH environment variable
in hbase-env.sh.
Add a copy of hdfs-site.xml (or
hadoop-site.xml) or, better, symlinks,
under ${HBASE_HOME}/conf, or
if only a small set of HDFS client configurations, add
them to hbase-site.xml.
An example of such an HDFS client configuration is
dfs.replication. If for example, you want to
run with a replication factor of 5, hbase will create files with
the default of 3 unless you do the above to make the configuration
available to HBase.
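For instance, a minimal hbase-site.xml sketch carrying that client-side setting (the value 5 simply mirrors the example above):
<property>
  <name>dfs.replication</name>
  <value>5</value>
</property>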
Make sure HDFS is running first. Start and stop the Hadoop HDFS
daemons by running bin/start-dfs.sh over in the
HADOOP_HOME directory. You can ensure it started
properly by testing the put and
get of files into the Hadoop filesystem. HBase does
not normally use the mapreduce daemons. These do not need to be
started.
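A quick smoke test along those lines (the paths are only examples):
$ ${HADOOP_HOME}/bin/hadoop fs -put /etc/hosts /tmp/hosts-test
$ ${HADOOP_HOME}/bin/hadoop fs -cat /tmp/hosts-test
$ ${HADOOP_HOME}/bin/hadoop fs -rm /tmp/hosts-test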
If you are managing your own ZooKeeper, start it and confirm it's running; otherwise, HBase will start up ZooKeeper for you as part of its start process.
Start HBase with the following command:
bin/start-hbase.sh
Run the above from the HBASE_HOME directory.
You should now have a running HBase instance. HBase logs can be
found in the logs subdirectory. Check them out
especially if HBase had trouble starting.
HBase also puts up a UI listing vital attributes. By default it's
deployed on the Master host at port 60010 (HBase RegionServers listen
on port 60020 by default and put up an informational http server at
60030). If the Master were running on a host named
master.example.org on the default port, to see the
Master's homepage you'd point your browser at
http://master.example.org:60010.
Once HBase has started, see the ??? for how to create tables, add data, scan your insertions, and finally disable and drop your tables.
To stop HBase after exiting the HBase shell enter
$ ./bin/stop-hbase.sh
stopping hbase...............
Shutdown can take a moment to complete. It can take longer if your cluster is comprised of many machines. If you are running a distributed operation, be sure to wait until HBase has shut down completely before stopping the Hadoop daemons.
Just as in Hadoop where you add site-specific HDFS configuration
to the hdfs-site.xml file,
for HBase, site specific customizations go into
the file conf/hbase-site.xml.
For the list of configurable properties, see
Section 1.3.1.1, “HBase Default Configuration”
below or view the raw hbase-default.xml
source file in the HBase source code at
src/main/resources.
Not all configuration options make it out to
hbase-default.xml. Configuration
options thought so rare that no one would change them can exist only
in code; the only way to turn up such configurations is
via a reading of the source code itself.
Currently, changes here will require a cluster restart for HBase to notice the change.
The documentation below is generated using the default hbase configuration file,
hbase-default.xml, as source.
hbase.tmp.dir: Temporary directory on the local filesystem. Change this setting to point to a location more permanent than '/tmp', the usual resolve for java.io.tmpdir, as the '/tmp' directory is cleared on machine restart.
Default: ${java.io.tmpdir}/hbase-${user.name}
hbase.rootdir: The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase' where the HDFS instance's namenode is running at namenode.example.org on port 9000, set this value to: hdfs://namenode.example.org:9000/hbase. By default, we write to whatever ${hbase.tmp.dir} is set to -- usually /tmp -- so change this configuration or else all data will be lost on machine restart.
Default: file://${hbase.tmp.dir}/hbase
hbase.cluster.distributed: The mode the cluster will be in. Possible values are false for standalone mode and true for distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one JVM.
Default: false
hbase.zookeeper.quorum: Comma separated list of servers in the ZooKeeper ensemble (This config. should have been named hbase.zookeeper.ensemble). For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which hbase will start/stop ZooKeeper on as part of cluster start/stop. Client-side, we will take this list of ensemble members and put it together with the hbase.zookeeper.clientPort config. and pass it into the zookeeper constructor as the connectString parameter.
Default: localhost
hbase.local.dir: Directory on the local filesystem to be used as local storage.
Default: ${hbase.tmp.dir}/local/
hbase.master.port: The port the HBase Master should bind to.
Default: 60000
hbase.master.info.port: The port for the HBase Master web UI. Set to -1 if you do not want a UI instance run.
Default: 60010
hbase.master.info.bindAddress: The bind address for the HBase Master web UI.
Default: 0.0.0.0
hbase.master.logcleaner.plugins: A comma-separated list of LogCleanerDelegate invoked by the LogsCleaner service. These WAL/HLog cleaners are called in order, so put the HLog cleaner that prunes the most HLog files in front. To implement your own LogCleanerDelegate, just put it in HBase's classpath and add the fully qualified class name here. Always add the above default log cleaners in the list.
Default: org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
hbase.master.logcleaner.ttl: Maximum time an HLog can stay in the .oldlogdir directory, after which it will be cleaned by a Master thread.
Default: 600000
hbase.master.hfilecleaner.plugins: A comma-separated list of HFileCleanerDelegate invoked by the HFileCleaner service. These HFiles cleaners are called in order, so put the cleaner that prunes the most files in front. To implement your own HFileCleanerDelegate, just put it in HBase's classpath and add the fully qualified class name here. Always add the above default log cleaners in the list as they will be overwritten in hbase-site.xml.
Default: org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
hbase.master.catalog.timeout: Timeout value for the Catalog Janitor from the master to META.
Default: 600000
hbase.master.dns.interface: The name of the Network Interface from which a master should report its IP address.
Default: default
hbase.master.dns.nameserver: The host name or IP address of the name server (DNS) which a master should use to determine the host name used for communication and display purposes.
Default: default
hbase.regionserver.port: The port the HBase RegionServer binds to.
Default: 60020
hbase.regionserver.info.port: The port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to run.
Default: 60030
hbase.regionserver.info.bindAddress: The address for the HBase RegionServer web UI.
Default: 0.0.0.0
hbase.regionserver.info.port.auto: Whether or not the Master or RegionServer UI should search for a port to bind to. Enables automatic port search if hbase.regionserver.info.port is already in use. Useful for testing, turned off by default.
Default: false
hbase.regionserver.handler.count: Count of RPC Listener instances spun up on RegionServers. Same property is used by the Master for count of master handlers.
Default: 30
hbase.regionserver.msginterval: Interval between messages from the RegionServer to Master in milliseconds.
Default: 3000
hbase.regionserver.optionallogflushinterval: Sync the HLog to the HDFS after this interval if it has not accumulated enough entries to trigger a sync. Units: milliseconds.
Default: 1000
hbase.regionserver.regionSplitLimit: Limit for the number of regions after which no more region splitting should take place. This is not a hard limit for the number of regions but acts as a guideline for the regionserver to stop splitting after a certain limit. Default is MAX_INT; i.e. do not block splitting.
Default: 2147483647
hbase.regionserver.logroll.period: Period at which we will roll the commit log regardless of how many edits it has.
Default: 3600000
hbase.regionserver.logroll.errors.tolerated: The number of consecutive WAL close errors we will allow before triggering a server abort. A setting of 0 will cause the region server to abort if closing the current WAL writer fails during log rolling. Even a small value (2 or 3) will allow a region server to ride over transient HDFS errors.
Default: 2
hbase.regionserver.hlog.reader.impl: The HLog file reader implementation.
Default: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader
hbase.regionserver.hlog.writer.impl: The HLog file writer implementation.
Default: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter
hbase.regionserver.global.memstore.upperLimit: Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap. Updates are blocked and flushes are forced until size of all memstores in a region server hits hbase.regionserver.global.memstore.lowerLimit.
Default: 0.4
hbase.regionserver.global.memstore.lowerLimit: Maximum size of all memstores in a region server before flushes are forced. Defaults to 38% of heap. This value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.
Default: 0.38
hbase.regionserver.optionalcacheflushinterval: Maximum amount of time an edit lives in memory before being automatically flushed. Default 1 hour. Set it to 0 to disable automatic flushing.
Default: 3600000
hbase.regionserver.catalog.timeout: Timeout value for the Catalog Janitor from the regionserver to META.
Default: 600000
hbase.regionserver.dns.interface: The name of the Network Interface from which a region server should report its IP address.
Default: default
hbase.regionserver.dns.nameserver: The host name or IP address of the name server (DNS) which a region server should use to determine the host name used by the master for communication and display purposes.
Default: default
zookeeper.session.timeout: ZooKeeper session timeout in milliseconds. It is used in two different ways. First, this value is used in the ZK client that HBase uses to connect to the ensemble. It is also used by HBase when it starts a ZK server and it is passed as the 'maxSessionTimeout'. See http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions. For example, if a HBase region server connects to a ZK ensemble that's also managed by HBase, then the session timeout will be the one specified by this configuration. But, a region server that connects to an ensemble managed with a different configuration will be subject to that ensemble's maxSessionTimeout. So, even though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than this and it will take precedence. The current default that ZK ships with is 40 seconds, which is lower than HBase's.
Default: 90000
zookeeper.znode.parent: Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file paths are configured with a relative path, so they will all go under this directory unless changed.
Default: /hbase
zookeeper.znode.rootserver: Path to ZNode holding root region location. This is written by the master and read by clients and region servers. If a relative path is given, the parent folder will be ${zookeeper.znode.parent}. By default, this means the root location is stored at /hbase/root-region-server.
Default: root-region-server
zookeeper.znode.acl.parent: Root ZNode for access control lists.
Default: acl
hbase.zookeeper.dns.interface: The name of the Network Interface from which a ZooKeeper server should report its IP address.
Default: default
hbase.zookeeper.dns.nameserver: The host name or IP address of the name server (DNS) which a ZooKeeper server should use to determine the host name used by the master for communication and display purposes.
Default: default
hbase.zookeeper.peerport: Port used by ZooKeeper peers to talk to each other. See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.
Default: 2888
hbase.zookeeper.leaderport: Port used by ZooKeeper for leader election. See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.
Default: 3888
hbase.zookeeper.useMulti: Instructs HBase to make use of ZooKeeper's multi-update functionality. This allows certain ZooKeeper operations to complete more quickly and prevents some issues with rare Replication failure scenarios (see the release note of HBASE-2611 for an example). IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
Default: false
hbase.config.read.zookeeper.config: Set to true to allow HBaseConfiguration to read the zoo.cfg file for ZooKeeper properties. Switching this to true is not recommended, since the functionality of reading ZK properties from a zoo.cfg file has been deprecated.
Default: false
hbase.zookeeper.property.initLimit: Property from ZooKeeper's config zoo.cfg. The number of ticks that the initial synchronization phase can take.
Default: 10
hbase.zookeeper.property.syncLimit: Property from ZooKeeper's config zoo.cfg. The number of ticks that can pass between sending a request and getting an acknowledgment.
Default: 5
hbase.zookeeper.property.dataDir: Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.
Default: ${hbase.tmp.dir}/zookeeper
hbase.zookeeper.property.clientPort: Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.
Default: 2181
hbase.zookeeper.property.maxClientCnxns: Property from ZooKeeper's config zoo.cfg. Limit on number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble. Set high to avoid zk connection issues running standalone and pseudo-distributed.
Default: 300
hbase.client.write.buffer: Default size of the HTable client write buffer in bytes. A bigger buffer takes more memory -- on both the client and server side since server instantiates the passed write buffer to process it -- but a larger buffer size reduces the number of RPCs made. For an estimate of server-side memory-used, evaluate hbase.client.write.buffer * hbase.regionserver.handler.count
Default: 2097152
hbase.client.pause: General client pause value. Used mostly as value to wait before running a retry of a failed get, region lookup, etc. See hbase.client.retries.number for description of how we backoff from this initial pause amount and how this pause works w/ retries.
Default: 100
hbase.client.retries.number: Maximum retries. Used as maximum for all retryable operations such as the getting of a cell's value, starting a row update, etc. Retry interval is a rough function based on hbase.client.pause. At first we retry at this interval but then with backoff, we pretty quickly reach retrying every ten seconds. See HConstants#RETRY_BACKOFF for how the backup ramps up. Change this setting and hbase.client.pause to suit your workload.
Default: 35
hbase.client.scanner.caching: Number of rows that will be fetched when calling next on a scanner if it is not served from (local, client) memory. Higher caching values will enable faster scanners but will eat up more memory and some calls of next may take longer and longer times when the cache is empty. Do not set this value such that the time between invocations is greater than the scanner timeout; i.e. hbase.client.scanner.timeout.period
Default: 100
hbase.client.keyvalue.maxsize: Specifies the combined maximum allowed size of a KeyValue instance. This is to set an upper boundary for a single entry saved in a storage file. Since they cannot be split, it helps to avoid a region that cannot be split any further because the data is too large. It seems wise to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.
Default: 10485760
hbase.client.scanner.timeout.period: Client scanner lease period in milliseconds.
Default: 60000
hbase.bulkload.retries.number: Maximum retries. This is the maximum number of times atomic bulk loads are attempted in the face of splitting operations. 0 means never give up.
Default: 0
hbase.balancer.period: Period at which the region balancer runs in the Master.
Default: 300000
hbase.regions.slop: Rebalance if any regionserver has average + (average * slop) regions.
Default: 0.2
hbase.server.thread.wakefrequency: Time to sleep in between searches for work (in milliseconds). Used as sleep interval by service threads such as log roller.
Default: 10000
hbase.server.versionfile.writeattempts: How many times to retry attempting to write a version file before just aborting. Each attempt is separated by the hbase.server.thread.wakefrequency milliseconds.
Default: 3
hbase.hregion.memstore.flush.size: Memstore will be flushed to disk if size of the memstore exceeds this number of bytes. Value is checked by a thread that runs every hbase.server.thread.wakefrequency.
Default: 134217728
hbase.hregion.preclose.flush.size: If the memstores in a region are this size or larger when we go to close, run a "pre-flush" to clear out memstores before we put up the region closed flag and take the region offline. On close, a flush is run under the close flag to empty memory. During this time the region is offline and we are not taking on any writes. If the memstore content is large, this flush could take a long time to complete. The preflush is meant to clean out the bulk of the memstore before putting up the close flag and taking the region offline so the flush that runs under the close flag has little to do.
Default: 5242880
hbase.hregion.memstore.block.multiplier: Block updates if memstore has hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size bytes. Useful for preventing runaway memstore during spikes in update traffic. Without an upper-bound, memstore fills such that when it flushes the resultant flush files take a long time to compact or split, or worse, we OOME.
Default: 2
hbase.hregion.memstore.mslab.enabled: Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps.
Default: true
hbase.hregion.max.filesize: Maximum HStoreFile size. If any one of a column families' HStoreFiles has grown to exceed this value, the hosting HRegion is split in two.
Default: 10737418240
hbase.hregion.majorcompaction: The time (in milliseconds) between 'major' compactions of all HStoreFiles in a region. Default: Set to 7 days. Major compactions tend to happen exactly when you need them least so enable them such that they run at off-peak for your deploy; or, since this setting is on a periodicity that is unlikely to match your loading, run the compactions via an external invocation out of a cron job or some such.
Default: 604800000
hbase.hregion.majorcompaction.jitter: Jitter outer bound for major compactions. On each regionserver, we multiply the hbase.hregion.majorcompaction interval by some random fraction that is inside the bounds of this maximum. We then add this + or - product to when the next major compaction is to run. The idea is that major compactions do not happen on every regionserver at exactly the same time. The smaller this number, the closer the compactions come together.
Default: 0.50
hbase.hstore.compactionThreshold: If more than this number of HStoreFiles in any one HStore (one HStoreFile is written per flush of memstore) then a compaction is run to rewrite all HStoreFiles files as one. Larger numbers put off compaction but when it runs, it takes longer to complete.
Default: 3
hbase.hstore.blockingStoreFiles: If more than this number of StoreFiles in any one Store (one StoreFile is written per flush of MemStore) then updates are blocked for this HRegion until a compaction is completed, or until hbase.hstore.blockingWaitTime has been exceeded.
Default: 10
hbase.hstore.blockingWaitTime: The time an HRegion will block updates for after hitting the StoreFile limit defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the HRegion will stop blocking updates even if a compaction has not been completed.
Default: 90000
hbase.hstore.compaction.max: Max number of HStoreFiles to compact per 'minor' compaction.
Default: 10
hbase.storescanner.parallel.seek.enable: Enables StoreFileScanner parallel-seeking in StoreScanner, a feature which can reduce response latency under special conditions.
Default: false
hbase.storescanner.parallel.seek.threads: The default thread pool size if the parallel-seeking feature is enabled.
Default: 10
hfile.block.cache.size: Percentage of maximum heap (-Xmx setting) to allocate to block cache used by HFile/StoreFile. Default of 0.4 means allocate 40%. Set to 0 to disable but it's not recommended; you need at least enough cache to hold the storefile indices.
Default: 0.4
hfile.block.index.cacheonwrite: Allows non-root multi-level index blocks to be put into the block cache at the time the index is being written.
Default: false
hfile.index.block.max.size: When the size of a leaf-level, intermediate-level, or root-level index block in a multi-level block index grows to this size, the block is written out and a new block is started.
Default: 131072
hfile.format.version: The HFile format version to use for new files. Set this to 1 to test backwards-compatibility. The default value of this option should be consistent with FixedFileTrailer.MAX_VERSION.
Default: 2
hfile.block.bloom.cacheonwrite: Enables cache-on-write for inline blocks of a compound Bloom filter.
Default: false
io.storefile.bloom.block.size: The size in bytes of a single block ("chunk") of a compound Bloom filter. This size is approximate, because Bloom blocks can only be inserted at data block boundaries, and the number of keys per data block varies.
Default: 131072
hbase.rs.cacheblocksonwrite: Whether an HFile block should be added to the block cache when the block is finished.
Default: false
hbase.rpc.server.engine: Implementation of org.apache.hadoop.hbase.ipc.RpcServerEngine to be used for server RPC call marshalling.
Default: org.apache.hadoop.hbase.ipc.ProtobufRpcServerEngine
hbase.rpc.timeout: This is for the RPC layer to define how long HBase client applications take for a remote call to time out. It uses pings to check connections but will eventually throw a TimeoutException.
Default: 60000
hbase.rpc.shortoperation.timeout: This is another version of "hbase.rpc.timeout". For RPC operations within the cluster, we rely on this configuration to set a short timeout limit for short operations. For example, a short rpc timeout for a region server trying to report to the active master can benefit a quicker master failover process.
Default: 10000
hbase.ipc.client.tcpnodelay: Set no delay on rpc socket connections. See http://docs.oracle.com/javase/1.5.0/docs/api/java/net/Socket.html#getTcpNoDelay()
Default: true
hbase.master.keytab.file: Full path to the kerberos keytab file to use for logging in the configured HMaster server principal.
Default:
hbase.master.kerberos.principal: Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HMaster process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance.
Default:
hbase.regionserver.keytab.file: Full path to the kerberos keytab file to use for logging in the configured HRegionServer server principal.
Default:
hbase.regionserver.kerberos.principal: Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HRegionServer process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance. An entry for this principal must exist in the file specified in hbase.regionserver.keytab.file
Default:
hadoop.policy.file: The policy configuration file used by RPC servers to make authorization decisions on client requests. Only used when HBase security is enabled.
Default: hbase-policy.xml
hbase.superuser: List of users or groups (comma-separated), who are allowed full privileges, regardless of stored ACLs, across the cluster. Only used when HBase security is enabled.
Default:
hbase.auth.key.update.interval: The update interval for master key for authentication tokens in servers in milliseconds. Only used when HBase security is enabled.
Default: 86400000
hbase.auth.token.max.lifetime: The maximum lifetime in milliseconds after which an authentication token expires. Only used when HBase security is enabled.
Default: 604800000
hbase.ipc.client.fallback-to-simple-auth-allowed: When a client is configured to attempt a secure connection, but attempts to connect to an insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure) authentication. This setting controls whether or not the client will accept this instruction from the server. When false (the default), the client will not allow the fallback to SIMPLE authentication, and will abort the connection.
Default: false
hbase.coprocessor.region.classes: A comma-separated list of Coprocessors that are loaded by default on all tables. For any override coprocessor method, these classes will be called in order. After implementing your own Coprocessor, just put it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor.
Default:
hbase.rest.port: The port for the HBase REST server.
Default: 8080
hbase.rest.readonly: Defines the mode the REST server will be started in. Possible values are: false: All HTTP methods are permitted - GET/PUT/POST/DELETE. true: Only the GET method is permitted.
Default: false
hbase.rest.threads.max: The maximum number of threads of the REST server thread pool. Threads in the pool are reused to process REST requests. This controls the maximum number of requests processed concurrently. It may help to control the memory used by the REST server to avoid OOM issues. If the thread pool is full, incoming requests will be queued up and wait for some free threads.
Default: 100
hbase.rest.threads.min: The minimum number of threads of the REST server thread pool. The thread pool always has at least this number of threads so the REST server is ready to serve incoming requests.
Default: 2
hbase.defaults.for.version.skip: Set to true to skip the 'hbase.defaults.for.version' check. Setting this to true can be useful in contexts other than the other side of a maven generation; i.e. running in an ide. You'll want to set this boolean to true to avoid seeing the RuntimeException complaint: "hbase-default.xml file seems to be for an old version of HBase (\${hbase.version}), this version is X.X.X-SNAPSHOT"
Default: false
hbase.coprocessor.master.classes: A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented coprocessor methods, the listed classes will be called in order. After implementing your own MasterObserver, just put it in HBase's classpath and add the fully qualified class name here.
Default:
hbase.coprocessor.abortonerror: Set to true to cause the hosting server (master or regionserver) to abort if a coprocessor throws a Throwable object that is not IOException or a subclass of IOException. Setting it to true might be useful in development environments where one wants to terminate the server as soon as possible to simplify coprocessor failure analysis.
Default: false
hbase.online.schema.update.enable: Set true to enable online schema changes.
Default: true
hbase.table.lock.enable: Set to true to enable locking the table in zookeeper for schema change operations. Table locking from the master prevents concurrent schema modifications from corrupting table state.
Default: true
hbase.thrift.minWorkerThreads: The "core size" of the thread pool. New threads are created on every connection until this many threads are created.
Default: 16
hbase.thrift.maxWorkerThreads: The maximum size of the thread pool. When the pending request queue overflows, new threads are created until their number reaches this number. After that, the server starts dropping connections.
Default: 1000
hbase.thrift.maxQueuedRequests: The maximum number of pending Thrift connections waiting in the queue. If there are no idle threads in the pool, the server queues requests. Only when the queue overflows, new threads are added, up to hbase.thrift.maxQueuedRequests threads.
Default: 1000
hbase.thrift.htablepool.size.max: The upper bound for the table pool used in the Thrift gateway's server. Since this is per table name, we assume a single table and so with 1000 default worker threads max this is set to a matching number. For other workloads this number can be adjusted as needed.
Default: 1000
hbase.offheapcache.percentage: The amount of off heap space to be allocated towards the experimental off heap cache. If you desire the cache to be disabled, simply set this value to 0.
Default: 0
hbase.data.umask.enable: Enable, if true, that file permissions should be assigned to the files written by the regionserver.
Default: false
hbase.data.umask: File permissions that should be used to write data files when hbase.data.umask.enable is true.
Default: 000
hbase.metrics.showTableName: Whether to include the prefix "tbl.tablename" in per-column family metrics. If true, for each metric M, per-cf metrics will be reported for tbl.T.cf.CF.M, if false, per-cf metrics will be aggregated by column-family across tables, and reported for cf.CF.M. In both cases, the aggregated metric M across tables and cfs will be reported.
Default: true
hbase.metrics.exposeOperationTimes: Whether to report metrics about time taken performing an operation on the region server. Get, Put, Delete, Increment, and Append can all have their times exposed through Hadoop metrics per CF and per region.
Default: true
hbase.snapshot.enabled: Set to true to allow snapshots to be taken / restored / cloned.
Default: true
hbase.server.compactchecker.interval.multiplier: The number that determines how often we scan to see if compaction is necessary. Normally, compactions are done after some events (such as memstore flush), but if a region didn't receive a lot of writes for some time, or due to different compaction policies, it may be necessary to check it periodically. The interval between checks is hbase.server.compactchecker.interval.multiplier multiplied by hbase.server.thread.wakefrequency.
Default: 1000
hbase.lease.recovery.timeout: How long we wait on dfs lease recovery in total before giving up.
Default: 900000
hbase.lease.recovery.dfs.timeout: How long between dfs recover lease invocations. Should be larger than the sum of the time it takes for the namenode to issue a block recovery command as part of datanode heartbeating (dfs.heartbeat.interval) and the time it takes for the primary datanode performing block recovery to time out on a dead datanode (usually dfs.socket.timeout). See the end of HBASE-8389 for more.
Default: 64000
hbase.regionserver.checksum.verify: If set to true, HBase will read data and then verify checksums for hfile blocks. Checksum verification inside HDFS will be switched off. If the hbase-checksum verification fails, then it will switch back to using HDFS checksums.
Default: true
hbase.hstore.bytes.per.checksum: Number of bytes in a newly created checksum chunk for HBase-level checksums in hfile blocks.
Default: 16384
hbase.hstore.checksum.algorithm: Name of an algorithm that is used to compute checksums. Possible values are NULL, CRC32, CRC32C.
Default: CRC32
Set HBase environment variables in this file.
Examples include options to pass the JVM on start of
an HBase daemon such as heap size and garbage collector configs.
You can also set configurations for HBase log directories,
niceness, ssh options, where to locate process pid files,
etc. Open the file at
conf/hbase-env.sh and peruse its content.
Each option is fairly well documented. Add your own environment
variables here if you want them read by HBase daemons on startup.
Changes here will require a cluster restart for HBase to notice the change.
Edit this file to change the rate at which HBase files are rolled and to change the level at which HBase logs messages.
Changes here will require a cluster restart for HBase to notice the change though log levels can be changed for particular daemons via the HBase UI.
If you are running HBase in standalone mode, you don't need to configure anything for your client to work, provided that client and server are all on the same machine.
Since the HBase Master may move around, clients bootstrap by looking to ZooKeeper for
current critical locations. ZooKeeper is where all these values are kept. Thus clients
require the location of the ZooKeeper ensemble before they can do anything else.
Usually the ensemble location is kept out in hbase-site.xml and
is picked up by the client from the CLASSPATH.
If you are configuring an IDE to run a HBase client, you should
include the conf/ directory on your classpath so
hbase-site.xml settings can be found (or
add src/test/resources to pick up the hbase-site.xml
used by tests).
Minimally, a client of HBase needs several libraries in its CLASSPATH when connecting to a cluster, including:
commons-configuration (commons-configuration-1.6.jar)
commons-lang (commons-lang-2.5.jar)
commons-logging (commons-logging-1.1.1.jar)
hadoop-core (hadoop-core-1.0.0.jar)
hbase (hbase-0.92.0.jar)
log4j (log4j-1.2.16.jar)
slf4j-api (slf4j-api-1.5.8.jar)
slf4j-log4j (slf4j-log4j12-1.5.8.jar)
zookeeper (zookeeper-3.4.2.jar)
An example basic hbase-site.xml for client only
might look as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>example1,example2,example3</value>
<description>Comma separated list of servers in the ZooKeeper ensemble.
</description>
</property>
</configuration>
The configuration used by a Java client is kept
in an HBaseConfiguration instance.
The factory method on HBaseConfiguration, HBaseConfiguration.create();,
on invocation, will read in the content of the first hbase-site.xml found on
the client's CLASSPATH, if one is present
(Invocation will also factor in any hbase-default.xml found;
an hbase-default.xml ships inside the hbase.X.X.X.jar).
It is also possible to specify configuration directly without having to read from a
hbase-site.xml. For example, to set the ZooKeeper
ensemble for the cluster programmatically do as follows:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost"); // Here we are running zookeeper locally
If multiple ZooKeeper instances make up your ZooKeeper ensemble,
they may be specified in a comma-separated list (just as in the hbase-site.xml file).
This populated Configuration instance can then be passed to an
HTable,
and so on.
Here is an example basic configuration for a distributed ten
node cluster. The nodes are named example0,
example1, etc., through node
example9 in this example. The HBase Master and the
HDFS namenode are running on the node example0.
RegionServers run on nodes
example1-example9. A 3-node
ZooKeeper ensemble runs on example1,
example2, and example3 on the
default ports. ZooKeeper data is persisted to the directory
/export/zookeeper. Below we show what the main
configuration files -- hbase-site.xml,
regionservers, and
hbase-env.sh -- found in the HBase
conf directory might look like.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>example1,example2,example3</value>
<description>Comma separated list of servers in the ZooKeeper ensemble.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/export/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://example0:8020/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
</configuration>
In this file you list the nodes that will run RegionServers.
In our case, these nodes are example1-example9.
example1
example2
example3
example4
example5
example6
example7
example8
example9
Below we use a diff to show the differences
from default in the hbase-env.sh file. Here we
are setting the HBase heap to be 4G instead of the default
1G.
$ git diff hbase-env.sh
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
index e70ebc6..96f8c27 100644
--- a/conf/hbase-env.sh
+++ b/conf/hbase-env.sh
@@ -31,7 +31,7 @@ export JAVA_HOME=/usr/lib//jvm/java-6-sun/
# export HBASE_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
-# export HBASE_HEAPSIZE=1000
+export HBASE_HEAPSIZE=4096
# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
Use rsync to copy the content of the
conf directory to all nodes of the
cluster.
Below we list the important configurations. We've divided this section into required configuration and worth-a-look recommended configs.
Review the Section 1.1.2, “Operating System” and Section 1.1.3, “Hadoop” sections.
If a cluster has a lot of regions, it is possible that an eager-beaver
regionserver which checks in soon after master start, while all the rest of the
cluster's regionservers are lagging, will be assigned all
regions. If there are lots of regions, this first server could buckle under the
load. To prevent the above scenario from happening, up the
hbase.master.wait.on.regionservers.mintostart from its
default value of 1. See
HBASE-6389 Modify the conditions to ensure that Master waits for sufficient number of Region Servers before starting region assignments
for more detail.
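A minimal hbase-site.xml sketch raising that floor (the value 3 is only an illustration; pick something sensible for your cluster size):
<property>
  <name>hbase.master.wait.on.regionservers.mintostart</name>
  <value>3</value>
</property>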
The default timeout is 90 seconds (specified in milliseconds; see zookeeper.session.timeout above). This means that if a server crashes, it will be 90 seconds before the Master notices the crash and starts recovery. You might like to tune the timeout down to a minute or even less so the Master notices failures sooner. Before changing this value, be sure you have your JVM garbage collection configuration under control; otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out your RegionServer (you might be fine with this -- you probably want recovery to start on the server if a RegionServer has been in GC for a long period of time).
To change this configuration, edit hbase-site.xml,
copy the changed file around the cluster and restart.
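For example, a sketch lowering the session timeout to one minute (60000 ms is only an illustration):
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>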
We set this value high to save our having to field noob questions up on the mailing lists asking why a RegionServer went down during a massive import. The usual cause is that their JVM is untuned and they are running into long GC pauses. Our thinking is that while users are getting familiar with HBase, we'd save them having to know all of its intricacies. Later when they've built some confidence, then they can play with configuration such as this.
See ???.
This is the "...number of volumes that are allowed to fail before a datanode stops offering service. By default
any volume failure will cause a datanode to shutdown" from the hdfs-default.xml
description. If you have more than three or four disks, you might want to set this to 1, or, if you have
many disks, to two or more.
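The hdfs-default.xml text quoted above describes dfs.datanode.failed.volumes.tolerated; a minimal hdfs-site.xml sketch (the value 1 is only an example):
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>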
This setting defines the number of threads that are kept open to answer incoming requests to user tables. The default of 10 is rather low in order to prevent users from killing their region servers when using large write buffers with a high number of concurrent clients. The rule of thumb is to keep this number low when the payload per request approaches the MB (big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs, deletes).
It is safe to set that number to the maximum number of incoming clients if their payload is small, the typical example being a cluster that serves a website since puts aren't typically buffered and most of the operations are gets.
The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts that are currently happening in a region server may impose too much pressure on its memory, or even trigger an OutOfMemoryError. A region server running on low memory will trigger its JVM's garbage collector to run more frequently up to a point where GC pauses become noticeable (the reason being that all the memory used to keep all the requests' payloads cannot be trashed, no matter how hard the garbage collector tries). After some time, the overall cluster throughput is affected since every request that hits that region server will take longer, which exacerbates the problem even more.
You can get a sense of whether you have too few or too many handlers by ??? on an individual RegionServer then tailing its logs (queued requests consume memory).
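The setting discussed here is hbase.regionserver.handler.count; a hbase-site.xml sketch for raising it on a small-payload workload (the value 100 is only an illustration):
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>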
HBase ships with a reasonable, conservative configuration that will work on nearly all machine types that people might want to test with. If you have larger machines -- if, say, HBase has an 8G or larger heap -- you might find the following configuration options helpful. TODO.
You should consider enabling ColumnFamily compression. There are several options that are near-frictionless and in almost all cases boost performance by reducing the size of StoreFiles and thus reducing I/O.
See ??? for more information.
Consider going to larger regions to cut down on the total number of regions on your cluster. Generally, fewer Regions to manage makes for a smoother-running cluster (You can always later manually split the big Regions should one prove hot and you want to spread the request load over the cluster). A lower number of regions is preferred, generally in the range of 20 to low-hundreds per RegionServer. Adjust the regionsize as appropriate to achieve this number.
For the 0.90.x codebase, the upper bound of regionsize is about 4GB, with a default of 256MB. For the 0.92.x codebase, due to the HFile v2 change, much larger regionsizes can be supported (e.g., 20GB).
You may need to experiment with this setting based on your hardware configuration and application needs.
Adjust hbase.hregion.max.filesize in your hbase-site.xml.
RegionSize can also be set on a per-table basis via
HTableDescriptor.
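A minimal hbase-site.xml sketch for running with larger regions might look like the following (10GB, expressed in bytes, is just an example value; pick a size appropriate to your data and hardware):

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>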
Typically you want to keep your region count low on HBase for numerous reasons. Usually right around 100 regions per RegionServer has yielded the best results. Here are some of the reasons for keeping the region count low:
MSLAB requires 2MB per memstore (that's 2MB per family per region). 1000 regions that have 2 families each is 3.9GB of heap used, and it's not even storing data yet. NB: the 2MB value is configurable.
If you fill all the regions at roughly the same rate, global memstore pressure forces tiny flushes when you have too many regions, which in turn generates compactions. Rewriting the same data tens of times is the last thing you want. As an example, consider filling 1000 regions (with one family) equally, with a lower bound for global memstore usage of 5GB (the region server would have a big heap). Once it reaches 5GB it will force flush the biggest region; at that point almost all regions should have about 5MB of data, so it flushes that amount. 5MB inserted later, it flushes another region that now has a bit over 5MB of data, and so on. A basic formula for the number of regions to have per region server: heap * upper global memstore limit = amount of heap devoted to memstore; then divide the amount of heap devoted to memstore by (number of regions per RS * CFs). This gives you the rough memstore size per region if everything is being written to. A more accurate formula divides by (number of actively written regions per RS * CFs) instead; this can allow a higher region count from the write perspective if you know how many regions you will be writing to at one time. See the worked example after this list.
The master, as it stands, is allergic to tons of regions, and will take a lot of time assigning them and moving them around in batches. The reason is that it's heavy on ZK usage, and it's not very async at the moment (this could really be improved -- and has been improved a bunch in HBase 0.96).
In older versions of HBase (pre-v2 hfile, 0.90 and previous), tons of regions on a few RSs can cause the store file index to rise, raising heap usage, and can create memory pressure or OOMEs on the RSs.
Another issue is the effect of the number of regions on MapReduce jobs; keeping 5 regions per RS would be too few for a job, whereas 1000 would generate too many maps.
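As a worked example of the rough formula above (the heap size, memstore fraction, region count, and family count are all illustrative numbers, not recommendations):

\text{memstore size} \approx \frac{\text{heap} \times \text{upper global memstore limit}}{\text{regions per RS} \times \text{CFs per region}} = \frac{16\,\text{GB} \times 0.4}{100 \times 2} \approx 32\,\text{MB per memstore}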
Rather than let HBase auto-split your Regions, manage the splitting manually
[11].
With growing amounts of data, splits will continually be needed. Since
you always know exactly what regions you have, long-term debugging and
profiling are much easier with manual splits. It is hard to trace the logs to
understand region-level problems if regions keep splitting and getting renamed.
Data offlining bugs + unknown number of split regions == oh crap! If an
HLog or StoreFile
was mistakenly unprocessed by HBase due to a weird bug and
you notice it a day or so later, you can be assured that the regions
specified in these files are the same as the current regions and you have
fewer headaches trying to restore/replay your data.
You can finely tune your compaction algorithm. With roughly uniform data
growth, it's easy to cause split / compaction storms as the regions all
roughly hit the same data size at the same time. With manual splits, you can
let staggered, time-based major compactions spread out your network IO load.
How do I turn off automatic splitting? Automatic splitting is determined by the configuration value
hbase.hregion.max.filesize. It is not recommended that you set this
to Long.MAX_VALUE in case you forget about manual splits. A suggested setting
is 100GB, which would result in > 1hr major compactions if reached.
What's the optimal number of pre-split regions to create?
Mileage will vary depending upon your application.
You could start low with 10 pre-split regions / server and watch as data grows
over time. It's better to err on the side of too few regions and rolling split later.
A more complicated answer is that this depends upon the largest storefile
in your region. With a growing data size, this will get larger over time. You
want the largest region to be just big enough that the Store compact
selection algorithm only compacts it due to a timed major. If you don't, your
cluster can be prone to compaction storms as the algorithm decides to run
major compactions on a large series of regions all at once. Note that
compaction storms are due to the uniform data growth, not the manual split
decision.
If you pre-split your regions too thin, you can increase the major compaction
interval by configuring HConstants.MAJOR_COMPACTION_PERIOD. If your data size
grows too large, use the (post-0.90.0 HBase) org.apache.hadoop.hbase.util.RegionSplitter
script to perform a network IO safe rolling split
of all regions.
A common administrative technique is to manage major compactions manually, rather than letting
HBase do it. By default, HConstants.MAJOR_COMPACTION_PERIOD is one day and major compactions
may kick in when you least desire it - especially on a busy system. To turn off automatic major compactions set
the value to 0.
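As a sketch, assuming HConstants.MAJOR_COMPACTION_PERIOD corresponds to the hbase.hregion.majorcompaction property, turning off automatic major compactions in hbase-site.xml would look something like:

<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>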
It is important to stress that major compactions are absolutely necessary for StoreFile cleanup; the only variable is when they occur. They can be administered through the HBase shell, or via HBaseAdmin.
For more information about compactions and the compaction file selection process, see ???
Speculative Execution of MapReduce tasks is on by default, and for HBase clusters it is generally advised to turn off
Speculative Execution at a system-level unless you need it for a specific case, where it can be configured per-job.
Set the properties mapred.map.tasks.speculative.execution and
mapred.reduce.tasks.speculative.execution to false.
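For example, in mapred-site.xml (or in a per-job configuration) the two properties would be set like so:

<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>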
The balancer is a periodic operation which is run on the master to redistribute regions on the cluster. It is configured via
hbase.balancer.period and defaults to 300000 (5 minutes).
See ??? for more information on the LoadBalancer.
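For instance, to run the balancer every ten minutes instead of every five, you might set the following in hbase-site.xml (600000 milliseconds is just an example value):

<property>
  <name>hbase.balancer.period</name>
  <value>600000</value>
</property>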
Do not turn off block cache (You'd do it by setting hbase.block.cache.size to zero).
Currently we do not do well if you do this because the regionserver will spend all its time loading hfile
indices over and over again. If your working set is such that the block cache does you no good, at least
size the block cache such that hfile indices will stay up in the cache (you can get a rough idea
of the size you need by surveying RegionServer UIs; you'll see the index block size accounted for near the
top of the webpage).
If an occasional delay of 40ms or so is seen in operations against HBase, try the Nagle's setting. For example, see the user mailing list thread, Inconsistent scan performance with caching set to 1, and the issue cited therein where setting tcpnodelay improved scan speeds. You might also see the graphs at the tail of HBASE-7008 Set scanner caching to a better default, where our Lars Hofhansl tries various data sizes with Nagle's on and off, measuring the effect.
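As a hedged sketch -- the exact property names can vary by version, so verify against your release -- disabling Nagle's algorithm on both the client and server side might look like this in hbase-site.xml:

<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>
<property>
  <name>ipc.server.tcpnodelay</name>
  <value>true</value>
</property>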
See the Devaraj Das and Nicolas Liochon blog post Introduction to HBase Mean Time to Recover (MTTR) for a brief introduction. The issue HBASE-8354 forces Namenode into loop with lease recovery requests is messy but has a bunch of good discussion toward the end on low timeouts and how to effect faster recovery, including citation of fixes added to HDFS. Read the Varun Sharma comments.
[1] Be careful editing XML. Make sure you close all elements. Run your file through xmllint or similar to ensure well-formedness of your document after an edit session.
[2] The hadoop-dns-checker tool can be used to verify DNS is working correctly on the cluster. The project README file provides detailed instructions on usage.
[3] See Jack Levin's major hdfs issues note up on the user list.
[4] The need to up system limits for a database is not peculiar to Apache HBase. See, for example, the section Setting Shell Limits for the Oracle User in Short Guide to install Oracle 10 on Linux.
[5] A useful read on setting configuration on your Hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?
[7] The Cloudera blog post An update on Apache Hadoop 1.0 by Charles Zedlewski has a nice exposition on how all the Hadoop versions relate. It's worth checking out if you are having trouble making sense of the Hadoop version morass.
[8] See Hadoop HDFS: Deceived by Xciever for an informative rant on xceivering.
[9] The pseudo-distributed vs fully-distributed nomenclature comes from Hadoop.
[10] See Section 1.2.2.1.2, “Pseudo-distributed Extras” for notes on how to start extra Masters and RegionServers when running pseudo-distributed.
[11] What follows is taken from the javadoc at the head of
the org.apache.hadoop.hbase.util.RegionSplitter tool
added to HBase post-0.90.0 release.