Chapter 1. Configuration

Table of Contents

1.1. Java
1.2. Operating System
1.2.1. ssh
1.2.2. DNS
1.2.3. NTP
1.2.4. ulimit and nproc
1.2.5. Windows
1.3. Hadoop
1.3.1. Hadoop Security
1.3.2. dfs.datanode.max.xcievers
1.4. HBase run modes: Standalone and Distributed
1.4.1. Standalone HBase
1.4.2. Distributed
1.4.3. Running and Confirming Your Installation
1.5. ZooKeeper
1.5.1. Using existing ZooKeeper ensemble
1.6. Configuration Files
1.6.1. hbase-site.xml and hbase-default.xml
1.6.2. hbase-env.sh
1.6.3. log4j.properties
1.6.4. Client configuration and dependencies connecting to an HBase cluster
1.7. Example Configurations
1.7.1. Basic Distributed HBase Install
1.8. The Important Configurations
1.8.1. Required Configurations
1.8.2. Recommended Configurations
1.8.3. Other Configurations
1.9. Bloom Filter Configuration
1.9.1. io.hfile.bloom.enabled global kill switch
1.9.2. io.hfile.bloom.error.rate
1.9.3. io.hfile.bloom.max.fold

This chapter is the Not-So-Quick start guide to HBase configuration.

Please read this chapter carefully and ensure that all requirements have been satisfied. Failure to do so will cause you (and us) grief debugging strange errors and/or data loss.

HBase uses the same configuration system as Hadoop. To configure a deploy, edit a file of environment variables in conf/hbase-env.sh -- this configuration is used mostly by the launcher shell scripts getting the cluster off the ground -- and then add configuration to an XML file to do things like override HBase defaults, tell HBase what Filesystem to use, and the location of the ZooKeeper ensemble [1] .

When running in distributed mode, after you make an edit to an HBase configuration, make sure you copy the content of the conf directory to all nodes of the cluster. HBase will not do this for you. Use rsync.

1.1. Java

Just like Hadoop, HBase requires Java 6 from Oracle. Usually you'll want to use the latest version available, except the problematic u18 (u24 is the latest version as of this writing).

1.2. Operating System

1.2.1. ssh

ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons. You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login").

1.2.2. DNS

HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution should work.

If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.

If this is insufficient, you can set hbase.regionserver.dns.interface to indicate the primary interface. This only works if your cluster configuration is consistent and every host has the same network interface configuration.

Another alternative is setting hbase.regionserver.dns.nameserver to choose a different nameserver than the system wide default.
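
For example, a minimal hbase-site.xml sketch pinning RegionServers to a particular interface and nameserver (the interface name eth1 and the nameserver host below are illustrative; substitute your own):

  <property>
    <name>hbase.regionserver.dns.interface</name>
    <value>eth1</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>ns1.example.com</value>
  </property>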

1.2.3. NTP

The clocks on cluster members should be in basic alignment. Some skew is tolerable but wild skew could generate odd behaviors. Run NTP on your cluster, or an equivalent.

If you are having problems querying data, or "weird" cluster operations, check system time!
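
As a quick sanity check, you can compare clocks across nodes from a single host (the hostnames below are illustrative):

      $ for host in rs1.example.com rs2.example.com rs3.example.com; do ssh $host date; done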

1.2.4.  ulimit and nproc

HBase is a database. It uses a lot of files all at the same time. The default ulimit -n -- i.e. user file limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you into trouble. You may also notice errors such as...

      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
      

Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)

You should also up the hbase users' nproc setting; under load, a low-nproc setting could manifest as OutOfMemoryError [2] [3].

To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line in its logs. Ensure it is correct. [4]
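
To verify what the HBase process will actually see, check the limits as the user that runs HBase and then confirm against the top of the HBase log (the user and log file name below are illustrative):

      $ su - hadoop                # or whatever user runs HBase
      $ ulimit -n                  # current file descriptor limit
      $ ulimit -u                  # current nproc limit
      $ head -1 logs/hbase-hadoop-master-example0.log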

1.2.4.1. ulimit on Ubuntu

If you are on Ubuntu you will need to make the following changes:

In the file /etc/security/limits.conf add a line like:

hadoop  -       nofile  32768

Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:

hadoop soft nproc 32000
hadoop hard nproc 32000

In the file /etc/pam.d/common-session add as the last line in the file:

session required  pam_limits.so

Otherwise the changes in /etc/security/limits.conf won't be applied.

Don't forget to log out and back in again for the changes to take effect!

1.2.5. Windows

HBase has had little testing running on Windows. Running a production install of HBase on top of Windows is not recommended.

If you are running HBase on Windows, you must install Cygwin to have a *nix-like environment for the shell scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up the latest fixes figured out by Windows users.

1.3. Hadoop

This version of HBase will only run on Hadoop 0.20.x. It will not run on Hadoop 0.21.x (but may run on 0.22.x/0.23.x). HBase will lose data unless it is running on an HDFS that has a durable sync. Hadoop 0.20.2, Hadoop 0.20.203.0, and Hadoop 0.20.204.0 DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version have a durable sync. You have to explicitly enable it though, by setting dfs.support.append equal to true on both the client side -- in hbase-site.xml, though it should already be on in your hbase-default.xml file -- and on the server side in hdfs-site.xml (you will have to restart your cluster after setting this configuration). Ignore the chicken-little comment you'll find in hdfs-site.xml in the description for this configuration; it says it is not enabled because there are "... bugs in the 'append code' and is not supported in any production cluster." This is not true (I'm sure there are bugs, but the append code has been running in production at large scale deploys and is on by default in the offerings of Hadoop by commercial vendors) [5] [6][7].
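
For example, the client-side half of the configuration is a sketch like the following in hbase-site.xml (the same property must also be set to true in the server's hdfs-site.xml):

      <property>
        <name>dfs.support.append</name>
        <value>true</value>
      </property>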

Or use the Cloudera or MapR distributions. Cloudera's CDH3 is Apache Hadoop 0.20.x plus patches, including all of the branch-0.20-append additions needed for a durable sync. Use the most recent released version of CDH3.

MapR includes a commercial reimplementation of HDFS. It has a durable sync as well as some other interesting features that are not yet in Apache Hadoop. Their M3 product is free to use and unlimited.

Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its lib directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is critical that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations but often everything looks like it is hung up.
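
A sketch of the swap follows; the exact jar file names depend on the Hadoop and HBase versions in play, so treat them as illustrative:

      $ rm ${HBASE_HOME}/lib/hadoop-core-*.jar
      $ cp ${HADOOP_HOME}/hadoop-core-0.20.205.0.jar ${HBASE_HOME}/lib/
      # Repeat (or rsync the lib directory) on every node running HBase.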

1.3.1. Hadoop Security

HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features -- e.g. Y! 0.20S or CDH3B3 -- as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version.

1.3.2. dfs.datanode.max.xcievers

An Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time. The upper bound parameter is called xcievers (yes, this is misspelled). Again, before doing any loading, make sure you have configured Hadoop's conf/hdfs-site.xml setting the xceivers value to at least the following:

      <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
      </property>
      

Be sure to restart your HDFS after making the above configuration.

Not having this configuration in place makes for strange looking failures. Eventually you'll see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run up to this one manifestation is a complaint about missing blocks. For example: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [8]

1.4. HBase run modes: Standalone and Distributed

HBase has two run modes: Section 1.4.1, “Standalone HBase” and Section 1.4.2, “Distributed”. Out of the box, HBase runs in standalone mode. To set up a distributed deploy, you will need to configure HBase by editing files in the HBase conf directory.

Whatever your mode, you will need to edit conf/hbase-env.sh to tell HBase which java to use. In this file you set HBase environment variables such as the heapsize and other options for the JVM, the preferred location for log files, etc. Set JAVA_HOME to point at the root of your java install.

1.4.1. Standalone HBase

This is the default mode. Standalone mode is what is described in the quickstart section. In standalone mode, HBase does not use HDFS -- it uses the local filesystem instead -- and it runs all HBase daemons and a local ZooKeeper all up in the same JVM. ZooKeeper binds to a well-known port (2181 by default) so clients may talk to HBase.

1.4.2. Distributed

Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a. pseudo-distributed -- and fully-distributed, where the daemons are spread across all nodes in the cluster [9].

Distributed modes require an instance of the Hadoop Distributed File System (HDFS). See the Hadoop requirements and instructions for how to set up an HDFS. Before proceeding, ensure you have an appropriate, working HDFS.

Below we describe the different distributed setups. Starting, verification and exploration of your install, whether a pseudo-distributed or fully-distributed configuration is described in a section that follows, Section 1.4.3, “Running and Confirming Your Installation”. The same verification script applies to both deploy types.

1.4.2.1. Pseudo-distributed

A pseudo-distributed mode is simply a distributed mode run on a single host. Use this configuration for testing and prototyping on HBase. Do not use it for production or for evaluating HBase performance.

Once you have confirmed your HDFS setup, edit conf/hbase-site.xml. This is the file into which you add local customizations and overrides for the HBase defaults and for Section 1.4.2.2.3, “HDFS Client Configuration”. Point HBase at the running Hadoop HDFS instance by setting the hbase.rootdir property. This property points HBase at the Hadoop filesystem instance to use. For example, adding the properties below to your hbase-site.xml says that HBase should use the /hbase directory in the HDFS whose namenode is at port 8020 on your local machine, and that it should run with one replica only (recommended for pseudo-distributed mode):

<configuration>
  ...
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.
    </description>
  </property>
  ...
</configuration>

Note

Let HBase create the hbase.rootdir directory. If you don't, you'll get a warning saying HBase needs a migration run because the directory is missing files expected by HBase (it'll create them if you let it).

Note

Above we bind to localhost. This means that a remote client cannot connect. Amend accordingly if you want to connect from a remote location.

Now skip to Section 1.4.3, “Running and Confirming Your Installation” for how to start and verify your pseudo-distributed install. [10]

1.4.2.2. Fully-distributed

For running a fully-distributed operation on more than one host, make the following configurations. In hbase-site.xml, add the property hbase.cluster.distributed and set it to true and point the HBase hbase.rootdir at the appropriate HDFS NameNode and location in HDFS where you would like HBase to write data. For example, if your namenode were running at namenode.example.org on port 8020 and you wanted to home your HBase in HDFS at /hbase, make the following configuration.

<configuration>
  ...
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.org:8020/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  ...
</configuration>
1.4.2.2.1. regionservers

In addition, a fully-distributed mode requires that you modify conf/regionservers. The Section 1.7.1.2, “regionservers” file lists all hosts that you would have running HRegionServers, one host per line (this file in HBase is like the Hadoop slaves file). All servers listed in this file will be started and stopped when HBase cluster start or stop is run.

1.4.2.2.2. ZooKeeper and HBase

See section Section 1.5, “ZooKeeper” for ZooKeeper setup for HBase.

1.4.2.2.3. HDFS Client Configuration

Of note, if you have made HDFS client configuration on your Hadoop cluster -- i.e. configuration you want HDFS clients to use as opposed to server-side configurations -- HBase will not see this configuration unless you do one of the following:

  • Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-env.sh.

  • Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under ${HBASE_HOME}/conf, or

  • if only a small set of HDFS client configurations, add them to hbase-site.xml.

An example of such an HDFS client configuration is dfs.replication. If, for example, you want to run with a replication factor of 5, HBase will create files with the default of 3 unless you do the above to make the configuration available to HBase.
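
For example, the first option above amounts to a one-line addition to conf/hbase-env.sh (the /etc/hadoop/conf path is an illustrative HADOOP_CONF_DIR; use your own):

      export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/etc/hadoop/conf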

1.4.3. Running and Confirming Your Installation

Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by running bin/start-dfs.sh over in the HADOOP_HOME directory. You can ensure it started properly by testing the put and get of files into the Hadoop filesystem. HBase does not normally use the MapReduce daemons. These do not need to be started.
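
For example, a quick smoke test of HDFS might look as follows (the paths are illustrative):

      $ ${HADOOP_HOME}/bin/hadoop fs -put /etc/hosts /tmp/hosts-test
      $ ${HADOOP_HOME}/bin/hadoop fs -cat /tmp/hosts-test     # should print the file back
      $ ${HADOOP_HOME}/bin/hadoop fs -rm /tmp/hosts-test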

If you are managing your own ZooKeeper, start it and confirm it's running; otherwise, HBase will start up ZooKeeper for you as part of its start process.

Start HBase with the following command:

bin/start-hbase.sh
Run the above from the HBASE_HOME directory.

You should now have a running HBase instance. HBase logs can be found in the logs subdirectory. Check them out especially if HBase had trouble starting.

HBase also puts up a UI listing vital attributes. By default it's deployed on the Master host at port 60010 (HBase RegionServers listen on port 60020 by default and put up an informational http server at 60030). If the Master were running on a host named master.example.org on the default port, to see the Master's homepage you'd point your browser at http://master.example.org:60010.

Once HBase has started, see the shell exercises section for how to create tables, add data, scan your insertions, and finally disable and drop your tables.

To stop HBase after exiting the HBase shell enter

$ ./bin/stop-hbase.sh
stopping hbase...............

Shutdown can take a moment to complete. It can take longer if your cluster comprises many machines. If you are running a distributed operation, be sure to wait until HBase has shut down completely before stopping the Hadoop daemons.

1.5. ZooKeeper

A distributed HBase depends on a running ZooKeeper cluster. All participating nodes and clients need to be able to access the running ZooKeeper ensemble. HBase by default manages a ZooKeeper "cluster" for you. It will start and stop the ZooKeeper ensemble as part of the HBase start/stop process. You can also manage the ZooKeeper ensemble independent of HBase and just point HBase at the cluster it should use. To toggle HBase management of ZooKeeper, use the HBASE_MANAGES_ZK variable in conf/hbase-env.sh. This variable, which defaults to true, tells HBase whether to start/stop the ZooKeeper ensemble servers as part of HBase start/stop.

When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration using its native zoo.cfg file, or, the easier option is to just specify ZooKeeper options directly in conf/hbase-site.xml. A ZooKeeper configuration option can be set as a property in the HBase hbase-site.xml XML configuration file by prefacing the ZooKeeper option name with hbase.zookeeper.property. For example, the clientPort setting in ZooKeeper can be changed by setting the hbase.zookeeper.property.clientPort property. For all default values used by HBase, including ZooKeeper configuration, see Section 1.6.1.1, “HBase Default Configuration”. Look for the hbase.zookeeper.property prefix. [11]

You must at least list the ensemble servers in hbase-site.xml using the hbase.zookeeper.quorum property. This property defaults to a single ensemble member at localhost which is not suitable for a fully distributed HBase. (It binds to the local machine only and remote clients will not be able to connect).

How many ZooKeepers should I run?

You can run a ZooKeeper ensemble that comprises 1 node only but in production it is recommended that you run a ZooKeeper ensemble of 3, 5 or 7 machines; the more members an ensemble has, the more tolerant the ensemble is of host failures. Also, run an odd number of machines: since a quorum is a strict majority, an even number of members gives no better failure tolerance than the next lower odd number. Give each ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk (a dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper servers on separate machines from RegionServers (DataNodes and TaskTrackers).

For example, to have HBase manage a ZooKeeper quorum on nodes rs{1,2,3,4,5}.example.com, bound to port 2222 (the default is 2181), ensure HBASE_MANAGES_ZK is commented out or set to true in conf/hbase-env.sh and then edit conf/hbase-site.xml and set hbase.zookeeper.property.clientPort and hbase.zookeeper.quorum. You should also set hbase.zookeeper.property.dataDir to other than the default, as the default has ZooKeeper persist data under /tmp which is often cleared on system restart. In the example below we have ZooKeeper persist to /usr/local/zookeeper.

  <configuration>
    ...
    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2222</value>
      <description>Property from ZooKeeper's config zoo.cfg.
      The port at which the clients will connect.
      </description>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com</value>
      <description>Comma separated list of servers in the ZooKeeper Quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed modes
      of operation. For a fully-distributed setup, this should be set to a full
      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
      this is the list of servers which we will start/stop ZooKeeper on.
      </description>
    </property>
    <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/usr/local/zookeeper</value>
      <description>Property from ZooKeeper's config zoo.cfg.
      The directory where the snapshot is stored.
      </description>
    </property>
    ...
  </configuration>

1.5.1. Using existing ZooKeeper ensemble

To point HBase at an existing ZooKeeper cluster, one that is not managed by HBase, set HBASE_MANAGES_ZK in conf/hbase-env.sh to false:

  ...
  # Tell HBase whether it should manage its own instance of Zookeeper or not.
  export HBASE_MANAGES_ZK=false

Next set ensemble locations and client port, if non-standard, in hbase-site.xml, or add a suitably configured zoo.cfg to HBase's CLASSPATH. HBase will prefer the configuration found in zoo.cfg over any settings in hbase-site.xml.
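
A minimal zoo.cfg sketch for a three-member ensemble might look as follows (the hostnames and dataDir are illustrative); remember that if a zoo.cfg is on the CLASSPATH, HBase prefers it over hbase-site.xml:

      tickTime=2000
      dataDir=/usr/local/zookeeper
      clientPort=2181
      initLimit=10
      syncLimit=5
      server.1=rs1.example.com:2888:3888
      server.2=rs2.example.com:2888:3888
      server.3=rs3.example.com:2888:3888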

When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the regular start/stop scripts. If you would like to run ZooKeeper yourself, independent of HBase start/stop, you would do the following

${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper

Note that you can use HBase in this manner to spin up a ZooKeeper cluster, unrelated to HBase. Just make sure to set HBASE_MANAGES_ZK to false if you want it to stay up across HBase restarts so that when HBase shuts down, it doesn't take ZooKeeper down with it.

For more information about running a distinct ZooKeeper cluster, see the ZooKeeper Getting Started Guide. Additionally, see the ZooKeeper Wiki or the ZooKeeper documentation for more information on ZooKeeper sizing.

1.6. Configuration Files

1.6.1. hbase-site.xml and hbase-default.xml

Just as in Hadoop where you add site-specific HDFS configuration to the hdfs-site.xml file, for HBase, site specific customizations go into the file conf/hbase-site.xml. For the list of configurable properties, see Section 1.6.1.1, “HBase Default Configuration” below or view the raw hbase-default.xml source file in the HBase source code at src/main/resources.

Not all configuration options make it out to hbase-default.xml. Configuration that it is thought rare anyone would change exists only in code; the only way to turn up such configurations is by reading the source code itself.

Currently, changes here will require a cluster restart for HBase to notice the change.

1.6.1.1. HBase Default Configuration

HBase Default Configuration

The documentation below is generated using the default hbase configuration file, hbase-default.xml, as source.

hbase.rootdir

The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase' where the HDFS instance's namenode is running at namenode.example.org on port 9000, set this value to: hdfs://namenode.example.org:9000/hbase. By default HBase writes into /tmp. Change this configuration else all data will be lost on machine restart.

Default: file:///tmp/hbase-${user.name}/hbase

hbase.master.port

The port the HBase Master should bind to.

Default: 60000

hbase.cluster.distributed

The mode the cluster will be in. Possible values are false for standalone mode and true for distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one JVM.

Default: false

hbase.tmp.dir

Temporary directory on the local filesystem. Change this setting to point to a location more permanent than '/tmp' (The '/tmp' directory is often cleared on machine restart).

Default: /tmp/hbase-${user.name}

hbase.master.info.port

The port for the HBase Master web UI. Set to -1 if you do not want a UI instance run.

Default: 60010

hbase.master.info.bindAddress

The bind address for the HBase Master web UI

Default: 0.0.0.0

hbase.client.write.buffer

Default size of the HTable client write buffer in bytes. A bigger buffer takes more memory -- on both the client and server side since the server instantiates the passed write buffer to process it -- but a larger buffer size reduces the number of RPCs made. For an estimate of server-side memory used, evaluate hbase.client.write.buffer * hbase.regionserver.handler.count.

Default: 2097152

hbase.regionserver.port

The port the HBase RegionServer binds to.

Default: 60020

hbase.regionserver.info.port

The port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to run.

Default: 60030

hbase.regionserver.info.port.auto

Whether or not the Master or RegionServer UI should search for a port to bind to. Enables automatic port search if hbase.regionserver.info.port is already in use. Useful for testing, turned off by default.

Default: false

hbase.regionserver.info.bindAddress

The address for the HBase RegionServer web UI

Default: 0.0.0.0

hbase.regionserver.class

The RegionServer interface to use. Used by the client when opening a proxy to a remote region server.

Default: org.apache.hadoop.hbase.ipc.HRegionInterface

hbase.client.pause

General client pause value. Used mostly as value to wait before running a retry of a failed get, region lookup, etc.

Default: 1000

hbase.client.retries.number

Maximum retries. Used as maximum for all retryable operations such as fetching of the root region from root region server, getting a cell's value, starting a row update, etc. Default: 10.

Default: 10

hbase.bulkload.retries.number

Maximum retries. This is the maximum number of iterations that atomic bulk loads are attempted in the face of splitting operations. 0 means never give up. Default: 0.

Default: 0

hbase.client.scanner.caching

Number of rows that will be fetched when calling next on a scanner if it is not served from (local, client) memory. Higher caching values will enable faster scanners but will eat up more memory and some calls of next may take longer and longer times when the cache is empty. Do not set this value such that the time between invocations is greater than the scanner timeout; i.e. hbase.regionserver.lease.period

Default: 1

hbase.client.keyvalue.maxsize

Specifies the combined maximum allowed size of a KeyValue instance. This is to set an upper boundary for a single entry saved in a storage file. Since they cannot be split, it helps avoid a region that cannot be split any further because the data is too large. It seems wise to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.

Default: 10485760

hbase.regionserver.lease.period

HRegion server lease period in milliseconds. Default is 60 seconds. Clients must report in within this period else they are considered dead.

Default: 60000

hbase.regionserver.handler.count

Count of RPC Listener instances spun up on RegionServers. Same property is used by the Master for count of master handlers. Default is 10.

Default: 10

hbase.regionserver.msginterval

Interval between messages from the RegionServer to Master in milliseconds.

Default: 3000

hbase.regionserver.optionallogflushinterval

Sync the HLog to the HDFS after this interval if it has not accumulated enough entries to trigger a sync. Default 1 second. Units: milliseconds.

Default: 1000

hbase.regionserver.regionSplitLimit

Limit for the number of regions after which no more region splitting should take place. This is not a hard limit for the number of regions but acts as a guideline for the regionserver to stop splitting after a certain limit. Default is set to MAX_INT; i.e. do not block splitting.

Default: 2147483647

hbase.regionserver.logroll.period

Period at which we will roll the commit log regardless of how many edits it has.

Default: 3600000

hbase.regionserver.logroll.errors.tolerated

The number of consecutive WAL close errors we will allow before triggering a server abort. A setting of 0 will cause the region server to abort if closing the current WAL writer fails during log rolling. Even a small value (2 or 3) will allow a region server to ride over transient HDFS errors.

Default: 2

hbase.regionserver.hlog.reader.impl

The HLog file reader implementation.

Default: org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader

hbase.regionserver.hlog.writer.impl

The HLog file writer implementation.

Default: org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter

hbase.regionserver.nbreservationblocks

The number of reservoir blocks of memory released on OOME so we can clean up properly before server shutdown.

Default: 4

hbase.zookeeper.dns.interface

The name of the Network Interface from which a ZooKeeper server should report its IP address.

Default: default

hbase.zookeeper.dns.nameserver

The host name or IP address of the name server (DNS) which a ZooKeeper server should use to determine the host name used by the master for communication and display purposes.

Default: default

hbase.regionserver.dns.interface

The name of the Network Interface from which a region server should report its IP address.

Default: default

hbase.regionserver.dns.nameserver

The host name or IP address of the name server (DNS) which a region server should use to determine the host name used by the master for communication and display purposes.

Default: default

hbase.master.dns.interface

The name of the Network Interface from which a master should report its IP address.

Default: default

hbase.master.dns.nameserver

The host name or IP address of the name server (DNS) which a master should use to determine the host name used for communication and display purposes.

Default: default

hbase.balancer.period

Period at which the region balancer runs in the Master.

Default: 300000

hbase.regions.slop

Rebalance if any regionserver has average + (average * slop) regions. Default is 20% slop.

Default: 0.2

hbase.master.logcleaner.ttl

Maximum time a HLog can stay in the .oldlogdir directory, after which it will be cleaned by a Master thread.

Default: 600000

hbase.master.logcleaner.plugins

A comma-separated list of LogCleanerDelegates invoked by the LogsCleaner service. These WAL/HLog cleaners are called in order, so put the HLog cleaner that prunes the most HLog files in front. To implement your own LogCleanerDelegate, just put it in HBase's classpath and add the fully qualified class name here. Always add the above default log cleaners in the list.

Default: org.apache.hadoop.hbase.master.TimeToLiveLogCleaner

hbase.regionserver.global.memstore.upperLimit

Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap

Default: 0.4

hbase.regionserver.global.memstore.lowerLimit

When memstores are being forced to flush to make room in memory, keep flushing until we hit this mark. Defaults to 35% of heap. This value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.

Default: 0.35

hbase.server.thread.wakefrequency

Time to sleep in between searches for work (in milliseconds). Used as sleep interval by service threads such as log roller.

Default: 10000

hbase.server.versionfile.writeattempts

How many times to retry attempting to write a version file before just aborting. Each attempt is separated by the hbase.server.thread.wakefrequency milliseconds.

Default: 3

hbase.hregion.memstore.flush.size

Memstore will be flushed to disk if size of the memstore exceeds this number of bytes. Value is checked by a thread that runs every hbase.server.thread.wakefrequency.

Default: 134217728

hbase.hregion.preclose.flush.size

If the memstores in a region are this size or larger when we go to close, run a "pre-flush" to clear out memstores before we put up the region closed flag and take the region offline. On close, a flush is run under the close flag to empty memory. During this time the region is offline and we are not taking on any writes. If the memstore content is large, this flush could take a long time to complete. The preflush is meant to clean out the bulk of the memstore before putting up the close flag and taking the region offline so the flush that runs under the close flag has little to do.

Default: 5242880

hbase.hregion.memstore.block.multiplier

Block updates if memstore has hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size bytes. Useful for preventing runaway memstore during spikes in update traffic. Without an upper-bound, memstore fills such that when it flushes, the resultant flush files take a long time to compact or split, or worse, we OOME.

Default: 2

hbase.hregion.memstore.mslab.enabled

Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps.

Default: true

hbase.hregion.max.filesize

Maximum HStoreFile size. If any one of a column families' HStoreFiles has grown to exceed this value, the hosting HRegion is split in two. Default: 1G.

Default: 1073741824

hbase.hstore.compactionThreshold

If more than this number of HStoreFiles in any one HStore (one HStoreFile is written per flush of memstore) then a compaction is run to rewrite all HStoreFiles files as one. Larger numbers put off compaction but when it runs, it takes longer to complete.

Default: 3

hbase.hstore.blockingStoreFiles

If more than this number of StoreFiles in any one Store (one StoreFile is written per flush of MemStore) then updates are blocked for this HRegion until a compaction is completed, or until hbase.hstore.blockingWaitTime has been exceeded.

Default: 7

hbase.hstore.blockingWaitTime

The time an HRegion will block updates for after hitting the StoreFile limit defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the HRegion will stop blocking updates even if a compaction has not been completed. Default: 90 seconds.

Default: 90000

hbase.hstore.compaction.max

Max number of HStoreFiles to compact per 'minor' compaction.

Default: 10

hbase.hregion.majorcompaction

The time (in milliseconds) between 'major' compactions of all HStoreFiles in a region. Default: 1 day. Set to 0 to disable automated major compactions.

Default: 86400000

hbase.mapreduce.hfileoutputformat.blocksize

The mapreduce HFileOutputFormat writes storefiles/hfiles. This is the minimum hfile blocksize to emit. Usually in hbase, writing hfiles, the blocksize is gotten from the table schema (HColumnDescriptor) but in the mapreduce outputformat context, we don't have access to the schema so we get the blocksize from the Configuration. The smaller you make the blocksize, the bigger your index and the less you fetch on a random-access. Set the blocksize down if you have small cells and want faster random-access of individual cells.

Default: 65536

hfile.block.cache.size

Percentage of maximum heap (-Xmx setting) to allocate to block cache used by HFile/StoreFile. Default of 0.25 means allocate 25%. Set to 0 to disable but it's not recommended.

Default: 0.25

hbase.hash.type

The hashing algorithm for use in HashFunction. Two values are supported now: murmur (MurmurHash) and jenkins (JenkinsHash). Used by bloom filters.

Default: murmur

hfile.block.index.cacheonwrite

Whether to put non-root multi-level index blocks into the block cache at the time the index is being written.

Default: false

hfile.index.block.max.size

When the size of a leaf-level, intermediate-level, or root-level index block in a multi-level block index grows to this size, the block is written out and a new block is started.

Default: 131072

hfile.format.version

The HFile format version to use for new files. Set this to 1 to test backwards-compatibility. The default value of this option should be consistent with FixedFileTrailer.MAX_VERSION.

Default: 2

io.storefile.bloom.block.size

The size in bytes of a single block ("chunk") of a compound Bloom filter. This size is approximate, because Bloom blocks can only be inserted at data block boundaries, and the number of keys per data block varies.

Default: 131072

io.storefile.bloom.cacheonwrite

Enables cache-on-write for inline blocks of a compound Bloom filter.

Default: false

hbase.rs.cacheblocksonwrite

Whether an HFile block should be added to the block cache when the block is finished.

Default: false

hbase.rpc.engine

Implementation of org.apache.hadoop.hbase.ipc.RpcEngine to be used for client / server RPC call marshalling.

Default: org.apache.hadoop.hbase.ipc.WritableRpcEngine

hbase.master.keytab.file

Full path to the kerberos keytab file to use for logging in the configured HMaster server principal.

Default:

hbase.master.kerberos.principal

Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HMaster process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance.

Default:

hbase.regionserver.keytab.file

Full path to the kerberos keytab file to use for logging in the configured HRegionServer server principal.

Default:

hbase.regionserver.kerberos.principal

Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HRegionServer process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance. An entry for this principal must exist in the file specified in hbase.regionserver.keytab.file

Default:

hadoop.policy.file

The policy configuration file used by RPC servers to make authorization decisions on client requests. Only used when HBase security is enabled.

Default: hbase-policy.xml

hbase.superuser

List of users or groups (comma-separated), who are allowed full privileges, regardless of stored ACLs, across the cluster. Only used when HBase security is enabled.

Default:

hbase.auth.key.update.interval

The update interval for master key for authentication tokens in servers in milliseconds. Only used when HBase security is enabled.

Default: 86400000

hbase.auth.token.max.lifetime

The maximum lifetime in milliseconds after which an authentication token expires. Only used when HBase security is enabled.

Default: 604800000

zookeeper.session.timeout

ZooKeeper session timeout. HBase passes this to the zk quorum as suggested maximum time for a session (This setting becomes zookeeper's 'maxSessionTimeout'). See http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions "The client sends a requested timeout, the server responds with the timeout that it can give the client. " In milliseconds.

Default: 180000

zookeeper.znode.parent

Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file paths are configured with a relative path, so they will all go under this directory unless changed.

Default: /hbase

zookeeper.znode.rootserver

Path to ZNode holding root region location. This is written by the master and read by clients and region servers. If a relative path is given, the parent folder will be ${zookeeper.znode.parent}. By default, this means the root location is stored at /hbase/root-region-server.

Default: root-region-server

zookeeper.znode.acl.parent

Root ZNode for access control lists.

Default: acl

hbase.coprocessor.region.classes

A comma-separated list of Coprocessors that are loaded by default on all tables. For any override coprocessor method, these classes will be called in order. After implementing your own Coprocessor, just put it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor.

Default:

hbase.coprocessor.master.classes

A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented coprocessor methods, the listed classes will be called in order. After implementing your own MasterObserver, just put it in HBase's classpath and add the fully qualified class name here.

Default:

hbase.zookeeper.quorum

Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.

Default: localhost

hbase.zookeeper.peerport

Port used by ZooKeeper peers to talk to each other. See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.

Default: 2888

hbase.zookeeper.leaderport

Port used by ZooKeeper for leader election. See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper for more information.

Default: 3888

hbase.zookeeper.property.initLimit

Property from ZooKeeper's config zoo.cfg. The number of ticks that the initial synchronization phase can take.

Default: 10

hbase.zookeeper.property.syncLimit

Property from ZooKeeper's config zoo.cfg. The number of ticks that can pass between sending a request and getting an acknowledgment.

Default: 5

hbase.zookeeper.property.dataDir

Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.

Default: ${hbase.tmp.dir}/zookeeper

hbase.zookeeper.property.clientPort

Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.

Default: 2181

hbase.zookeeper.property.maxClientCnxns

Property from ZooKeeper's config zoo.cfg. Limit on number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble. Set high to avoid zk connection issues running standalone and pseudo-distributed.

Default: 300

hbase.rest.port

The port for the HBase REST server.

Default: 8080

hbase.rest.readonly

Defines the mode the REST server will be started in. Possible values are: false: All HTTP methods are permitted - GET/PUT/POST/DELETE. true: Only the GET method is permitted.

Default: false

hbase.defaults.for.version.skip

Set to true to skip the 'hbase.defaults.for.version' check. Setting this to true can be useful in contexts other than the other side of a maven generation; i.e. running in an IDE. You'll want to set this boolean to true to avoid seeing the RuntimeException complaint: "hbase-default.xml file seems to be for an old version of HBase (@@@VERSION@@@), this version is X.X.X-SNAPSHOT"

Default: false

hbase.coprocessor.abortonerror

Set to true to cause the hosting server (master or regionserver) to abort if a coprocessor throws a Throwable object that is not IOException or a subclass of IOException. Setting it to true might be useful in development environments where one wants to terminate the server as soon as possible to simplify coprocessor failure analysis.

Default: false

hbase.online.schema.update.enable

Set true to enable online schema changes. This is an experimental feature. There are known issues modifying table schemas at the same time a region split is happening so your table needs to be quiescent or else you have to be running with splits disabled.

Default: false

dfs.support.append

Does HDFS allow appends to files? This is an HDFS configuration, set here so the HDFS client will do append support. You must ensure that this configuration is true on the server side too when running HBase (you will have to restart your cluster after setting it).

Default: true

hbase.offheapcache.percentage

The amount of off heap space to be allocated towards the experimental off heap cache. If you desire the cache to be disabled, simply set this value to 0.

Default: 0

1.6.2. hbase-env.sh

Set HBase environment variables in this file. Examples include options to pass the JVM on start of an HBase daemon, such as heap size and garbage collector configs. You can also set configurations for HBase log directories, niceness, ssh options, where to locate process pid files, etc. Open the file at conf/hbase-env.sh and peruse its content. Each option is fairly well documented. Add your own environment variables here if you want them read by HBase daemons on startup.

Changes here will require a cluster restart for HBase to notice the change.

1.6.3. log4j.properties

Edit this file to change the rate at which HBase log files are rolled and to change the level at which HBase logs messages.

Changes here will require a cluster restart for HBase to notice the change though log levels can be changed for particular daemons via the HBase UI.

1.6.4. Client configuration and dependencies connecting to an HBase cluster

Since the HBase Master may move around, clients bootstrap by looking to ZooKeeper for current critical locations. ZooKeeper is where all these values are kept. Thus clients require the location of the ZooKeeper ensemble before they can do anything else. Usually the ensemble location is kept out in hbase-site.xml and is picked up by the client from the CLASSPATH.

If you are configuring an IDE to run a HBase client, you should include the conf/ directory on your classpath so hbase-site.xml settings can be found (or add src/test/resources to pick up the hbase-site.xml used by tests).

Minimally, a client of HBase needs the hbase, hadoop, log4j, commons-logging, commons-lang, and ZooKeeper jars in its CLASSPATH when connecting to a cluster.

An example basic hbase-site.xml for client only might look as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>example1,example2,example3</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    </description>
  </property>
</configuration>

1.6.4.1. Java client configuration

The configuration used by a Java client is kept in an HBaseConfiguration instance. The factory method on HBaseConfiguration, HBaseConfiguration.create(), on invocation, will read in the content of the first hbase-site.xml found on the client's CLASSPATH, if one is present (invocation will also factor in any hbase-default.xml found; an hbase-default.xml ships inside the hbase.X.X.X.jar). It is also possible to specify configuration directly without having to read from a hbase-site.xml. For example, to set the ZooKeeper ensemble for the cluster programmatically, do as follows:

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally

If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the hbase-site.xml file). This populated Configuration instance can then be passed to an HTable, and so on.
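
Putting it together, a minimal client sketch might look as follows (the table name 'mytable' and the column 'cf:qual' are illustrative and assume a pre-existing table):

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "example1,example2,example3");
HTable table = new HTable(config, "mytable");       // table must already exist
Put put = new Put(Bytes.toBytes("row1"));           // row key
put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
table.put(put);
table.close();

The sketch assumes the usual client imports: org.apache.hadoop.conf.Configuration, org.apache.hadoop.hbase.HBaseConfiguration, org.apache.hadoop.hbase.client.HTable, org.apache.hadoop.hbase.client.Put, and org.apache.hadoop.hbase.util.Bytes.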

1.7. Example Configurations

1.7.1. Basic Distributed HBase Install

Here is an example basic configuration for a distributed ten node cluster. The nodes are named example0, example1, etc., through node example9 in this example. The HBase Master and the HDFS namenode are running on the node example0. RegionServers run on nodes example1-example9. A 3-node ZooKeeper ensemble runs on example1, example2, and example3 on the default ports. ZooKeeper data is persisted to the directory /export/zookeeper. Below we show what the main configuration files -- hbase-site.xml, regionservers, and hbase-env.sh -- found in the HBase conf directory might look like.

1.7.1.1. hbase-site.xml


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>example1,example2,example3</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/export/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://example0:8020/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>

    

1.7.1.2. regionservers

In this file you list the nodes that will run RegionServers. In our case we run RegionServers on all but the head node example0, which is carrying the HBase Master and the HDFS namenode.

    example1
    example2
    example3
    example4
    example5
    example6
    example7
    example8
    example9
    

1.7.1.3. hbase-env.sh

Below we use a diff to show the differences from default in the hbase-env.sh file. Here we are setting the HBase heap to be 4G instead of the default 1G.

    
$ git diff hbase-env.sh
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
index e70ebc6..96f8c27 100644
--- a/conf/hbase-env.sh
+++ b/conf/hbase-env.sh
@@ -31,7 +31,7 @@ export JAVA_HOME=/usr/lib//jvm/java-6-sun/
 # export HBASE_CLASSPATH=
 
 # The maximum amount of heap to use, in MB. Default is 1000.
-# export HBASE_HEAPSIZE=1000
+export HBASE_HEAPSIZE=4096
 
 # Extra Java runtime options.
 # Below are what we set by default.  May only work with SUN JVM.

    

Use rsync to copy the content of the conf directory to all nodes of the cluster.

1.8. The Important Configurations

Below we list the important configurations. We've divided this section into required configurations and worth-a-look recommended configs.

1.8.1. Required Configurations

Review the Section 1.2, “Operating System” and Section 1.3, “Hadoop” sections.

1.8.2. Recommended Configurations

1.8.2.1. zookeeper.session.timeout

The default timeout is three minutes (specified in milliseconds). This means that if a server crashes, it will be three minutes before the Master notices the crash and starts recovery. You might like to tune the timeout down to a minute or even less so the Master notices failures sooner. Before changing this value, be sure you have your JVM garbage collection configuration under control; otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out your RegionServer (you might be fine with this -- you probably want recovery to start on the server if a RegionServer has been in GC for a long period of time).

To change this configuration, edit hbase-site.xml, copy the changed file around the cluster and restart.
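
For example, a sketch dropping the timeout to one minute in hbase-site.xml (tune the value to your own GC reality):

<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>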

We set this value high to save our having to field noob questions up on the mailing lists asking why a RegionServer went down during a massive import. The usual cause is that their JVM is untuned and they are running into long GC pauses. Our thinking is that while users are getting familiar with HBase, we'd save them having to know all of its intricacies. Later when they've built some confidence, then they can play with configuration such as this.

1.8.2.2. Number of ZooKeeper Instances

See Section 1.5, “ZooKeeper”.

1.8.2.3. hbase.regionserver.handler.count

This setting defines the number of threads that are kept open to answer incoming requests to user tables. The default of 10 is rather low in order to prevent users from killing their region servers when using large write buffers with a high number of concurrent clients. The rule of thumb is to keep this number low when the payload per request approaches the MB (big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs, deletes).

It is safe to set that number to the maximum number of incoming clients if their payload is small, the typical example being a cluster that serves a website since puts aren't typically buffered and most of the operations are gets.

The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts that are currently happening in a region server may impose too much pressure on its memory, or even trigger an OutOfMemoryError. A region server running on low memory will trigger its JVM's garbage collector to run more frequently up to a point where GC pauses become noticeable (the reason being that all the memory used to keep all the requests' payloads cannot be trashed, no matter how hard the garbage collector tries). After some time, the overall cluster throughput is affected since every request that hits that region server will take longer, which exacerbates the problem even more.
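
For example, a sketch raising the handler count for a small-payload, read-mostly workload (the value 30 is illustrative; tune to your own traffic):

<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>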

1.8.2.4. Configuration for large memory machines

HBase ships with a reasonable, conservative configuration that will work on nearly all machine types that people might want to test with. If you have larger machines -- for example, an 8G or larger heap for HBase -- you might find the following configuration options helpful. TODO.

1.8.2.5. Compression

You should consider enabling ColumnFamily compression. There are several options that are near-frictionless and in most all cases boost performance by reducing the size of StoreFiles and thus reducing I/O.

See the compression appendix for more information.

1.8.2.6. Bigger Regions

Consider going to larger regions to cut down on the total number of regions on your cluster. Generally, fewer regions to manage makes for a smoother running cluster (you can always later manually split the big regions should one prove hot and you want to spread the request load over the cluster). The default region size is governed by hbase.hregion.max.filesize (1G in the default configuration above). You could run with larger regions; some run with 4G or even more. Adjust hbase.hregion.max.filesize in your hbase-site.xml.
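
For example, a sketch setting 4G regions in hbase-site.xml (4294967296 bytes = 4G):

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>4294967296</value>
</property>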

1.8.2.7. Managed Splitting

Rather than let HBase auto-split your Regions, manage the splitting manually [12]. With growing amounts of data, splits will continually be needed. Since you always know exactly what regions you have, long-term debugging and profiling is much easier with manual splits. It is hard to trace the logs to understand region level problems if it keeps splitting and getting renamed. Data offlining bugs + unknown number of split regions == oh crap! If an HLog or StoreFile was mistakenly unprocessed by HBase due to a weird bug and you notice it a day or so later, you can be assured that the regions specified in these files are the same as the current regions and you have fewer headaches trying to restore/replay your data. You can finely tune your compaction algorithm. With roughly uniform data growth, it's easy to cause split / compaction storms as the regions all roughly hit the same data size at the same time. With manual splits, you can let staggered, time-based major compactions spread out your network IO load.

How do I turn off automatic splitting? Automatic splitting is determined by the configuration value hbase.hregion.max.filesize. It is not recommended that you set this to Long.MAX_VALUE in case you forget about manual splits. A suggested setting is 100GB, which would result in > 1hr major compactions if reached.

What's the optimal number of pre-split regions to create? Mileage will vary depending upon your application. You could start low with 10 pre-split regions per server and watch as data grows over time. It's better to err on the side of too few regions and rolling split later. A more complicated answer is that this depends upon the largest storefile in your region. With a growing data size, this will get larger over time. You want the largest region to be just big enough that the Store compact selection algorithm only compacts it due to a timed major. If you don't, your cluster can be prone to compaction storms as the algorithm decides to run major compactions on a large series of regions all at once. Note that compaction storms are due to the uniform data growth, not the manual split decision.

If you pre-split your regions too thin, you can increase the major compaction interval by configuring HConstants.MAJOR_COMPACTION_PERIOD. If your data size grows too large, use the (post-0.90.0 HBase) org.apache.hadoop.hbase.util.RegionSplitter script to perform a network IO safe rolling split of all regions.
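
The RegionSplitter tool is run through the hbase script; invoked with no arguments it should print its usage, which is the safest way to discover the options for your particular version:

      $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.util.RegionSplitter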

1.8.2.8. Managed Compactions

A common administrative technique is to manage major compactions manually, rather than letting HBase do it. By default, HConstants.MAJOR_COMPACTION_PERIOD is one day and major compactions may kick in when you least desire it - especially on a busy system. To "turn off" automatic major compactions set the value to Long.MAX_VALUE.

It is important to stress that major compactions are absolutely necessary for StoreFile cleanup; the only variable is when they occur. They can be administered through the HBase shell, or via HBaseAdmin.
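
For example, kicking off a major compaction by hand from the HBase shell (the table name is illustrative):

      $ ./bin/hbase shell
      hbase> major_compact 'mytable'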

1.8.3. Other Configurations

1.8.3.1. Balancer

The balancer is a periodic operation run on the Master to redistribute regions on the cluster. It is configured via hbase.balancer.period and defaults to 300000 (5 minutes).

See the LoadBalancer documentation for more information.

1.9. Bloom Filter Configuration

1.9.1. io.hfile.bloom.enabled global kill switch

io.hfile.bloom.enabled in Configuration serves as the kill switch in case something goes wrong. Default = true.

1.9.2. io.hfile.bloom.error.rate

io.hfile.bloom.error.rate = average false positive rate. Default = 1%. Decrease rate by ½ (e.g. to .5%) == +1 bit per bloom entry.

1.9.3. io.hfile.bloom.max.fold

io.hfile.bloom.max.fold = guaranteed minimum fold rate. Most people should leave this alone. Default = 7, or can collapse to at least 1/128th of original size. See the Development Process section of the document BloomFilters in HBase for more on what this option means.



[1] Be careful editing XML. Make sure you close all elements. Run your file through xmllint or similar to ensure well-formedness of your document after an edit session.

[2] See Jack Levin's major hdfs issues note up on the user list.

[3] The requirement that a database requires upping of system limits is not peculiar to HBase. See for example the section Setting Shell Limits for the Oracle User in Short Guide to install Oracle 10 on Linux.

[4] A useful read setting config on you hadoop cluster is Aaron Kimballs' Configuration Parameters: What can you just ignore?

[5] Until recently only the branch-0.20-append branch had a working sync but no official release was ever made from this branch. You had to build it yourself. Michael Noll wrote a detailed blog, Building an Hadoop 0.20.x version for HBase 0.90.2, on how to build an Hadoop from branch-0.20-append. Recommended.

[6] Praveen Kumar has written a complimentary article, Building Hadoop and HBase for HBase Maven application development.

[7] dfs.support.append

[8] See Hadoop HDFS: Deceived by Xciever for an informative rant on xceivering.

[9] The pseudo-distributed vs fully-distributed nomenclature comes from Hadoop.

[10] See Pseudo-distributed mode extras for notes on how to start extra Masters and RegionServers when running pseudo-distributed.

[11] For the full list of ZooKeeper configurations, see ZooKeeper's zoo.cfg. HBase does not ship with a zoo.cfg so you will need to browse the conf directory in an appropriate ZooKeeper download.

[12] What follows is taken from the javadoc at the head of the org.apache.hadoop.hbase.util.RegionSplitter tool added to HBase post-0.90.0 release.