| Interface Summary | |
|---|---|
| ReplicationListener | The replication listener interface can be implemented if a class needs to subscribe to events generated by the ReplicationTracker. |
| ReplicationPeers | This provides an interface for maintaining a set of peer clusters. |
| ReplicationQueues | This provides an interface for maintaining a region server's replication queues. |
| ReplicationQueuesClient | This provides an interface for clients of replication to view replication queues. |
| ReplicationTracker | This is the interface for a Replication Tracker. |
| Class Summary | |
|---|---|
| ReplicationFactory | A factory class for instantiating replication objects that deal with replication state. |
| ReplicationPeer | This class acts as a wrapper for all the objects used to identify and communicate with remote peers, and is responsible for responding to expired sessions and re-establishing the ZooKeeper connections. |
| ReplicationPeersZKImpl | This class provides an implementation of the ReplicationPeers interface using ZooKeeper. |
| ReplicationPeersZKImpl.PeerRegionServerListener | Tracks changes to the list of region servers in a peer's cluster. |
| ReplicationQueueInfo | This class is responsible for the parsing logic for a znode representing a queue. |
| ReplicationQueuesClientZKImpl | |
| ReplicationQueuesZKImpl | This class provides an implementation of the ReplicationQueues interface using ZooKeeper. |
| ReplicationStateZKBase | This is a base class for maintaining replication state in ZooKeeper. |
| ReplicationTrackerZKImpl | This class is a ZooKeeper implementation of the ReplicationTracker interface. |
This package is experimental-quality software and is only meant to be a base for future development. The current implementation offers the following features:
Before trying out replication, make sure to review the following requirements:
The following steps describe how to enable replication from one cluster to another.
Add the following to hbase-site.xml:

  <property>
    <name>hbase.replication</name>
    <value>true</value>
  </property>

Deploy the files, and then restart HBase if it was running.
Run add_peer in the HBase shell. This will show you the help to set up the replication stream between both clusters. If both clusters use the same ZooKeeper ensemble, you have to use a different zookeeper.znode.parent, since they can't write to the same folder.
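For illustration, a minimal invocation might look like the following; the peer id and ZooKeeper hostnames are placeholders, and the cluster key follows the "zookeeper quorum:client port:znode parent" format:

  add_peer '1', "zk1.example.org,zk2.example.org,zk3.example.org:2181:/hbase"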
Replication is enabled per column family by setting REPLICATION_SCOPE on the family, for example:

disable 'your_table'
alter 'your_table', {NAME => 'family_name', REPLICATION_SCOPE => '1'}
enable 'your_table'
Currently, a scope of 0 (the default) means that the family won't be replicated, and a scope of 1 means it will be. In the future, different scopes may be used for routing policies.
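For example, using the same placeholder table and family names as above, replication for a family can be switched off again by setting its scope back to 0:

disable 'your_table'
alter 'your_table', {NAME => 'family_name', REPLICATION_SCOPE => '0'}
enable 'your_table'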
You can list the currently configured peers in the HBase shell with list_peers (as of version 0.92).
The region server log on the master cluster should contain lines similar to the following:

Considering 1 rs, with ratio 0.1
Getting 1 rs from peer cluster # 0
Choosing peer 10.10.1.49:62020

In this case it indicates that 1 region server from the slave cluster was chosen for replication.
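As a quick check, assuming a default installation layout (the log directory and file name pattern below are illustrative, not taken from this page), you can search the region server logs for these lines:

grep "Choosing peer" ${HBASE_HOME}/logs/hbase-*-regionserver-*.log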
Verifying the replicated data on two clusters is easy to do in the shell when looking only at a few rows, but a systematic comparison requires more computing power. This is why the VerifyReplication MapReduce job was created; it has to be run on the master cluster and needs to be provided with a peer id (the one provided when establishing the replication stream) and a table name. Other options let you specify a time range and specific families. The job's short name is "verifyrep", and this is the name to pass when pointing "hadoop jar" at the HBase jar.
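A sketch of how such a run is typically launched; the paths, the VERSION placeholder, and the bracketed options are illustrative assumptions rather than values taken from this page:

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
  "${HADOOP_HOME}/bin/hadoop" jar "${HBASE_HOME}/hbase-VERSION.jar" verifyrep \
  [--starttime=X] [--stoptime=Y] [--families=A] <peerid> <tablename>

For example, to compare the full contents of 'your_table' against peer '1':

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
  "${HADOOP_HOME}/bin/hadoop" jar "${HBASE_HOME}/hbase-VERSION.jar" verifyrep 1 your_table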