org.apache.hadoop.hbase.regionserver.wal
Class HLog

java.lang.Object
  extended by org.apache.hadoop.hbase.regionserver.wal.HLog
All Implemented Interfaces:
org.apache.hadoop.fs.Syncable

public class HLog
extends Object
implements org.apache.hadoop.fs.Syncable

HLog stores all the edits to the HStore. It is the HBase write-ahead-log implementation. It performs logfile-rolling, so external callers are not aware that the underlying file is being rolled.

There is one HLog per RegionServer. All edits for all Regions carried by a particular RegionServer are entered first in the HLog.

Each HRegion is identified by a unique long int. HRegions do not need to declare themselves before using the HLog; they simply include their HRegion-id in the append or completeCacheFlush calls.

An HLog consists of multiple on-disk files, which have a chronological order. As data is flushed to other (better) on-disk structures, the log becomes obsolete. We can destroy all the log messages for a given HRegion-id up to the most-recent CACHEFLUSH message from that HRegion.

It's only practical to delete entire files. Thus, we delete an entire on-disk file F when all of the messages in F have a log-sequence-id that's older (smaller) than the most-recent CACHEFLUSH message for every HRegion that has a message in F.

Synchronized methods can never execute in parallel. However, between the start of a cache flush and the completion point, appends are allowed but log rolling is not. To prevent log rolling from taking place during this period, a separate reentrant lock is used.

To read an HLog, call getReader(org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration).
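
For example, a minimal sketch of reading a WAL file (the path below is hypothetical, and the sketch assumes the usual Configuration, FileSystem and Path imports and that the reader's next() returns null at end-of-file):

  Configuration conf = HBaseConfiguration.create();
  FileSystem fs = FileSystem.get(conf);
  // Hypothetical WAL file path; substitute a real file from the .logs directory.
  Path walFile = new Path("/hbase/.logs/host1.example.com,60020,1300000000000/hlog.1300000000001");

  HLog.Reader reader = HLog.getReader(fs, walFile, conf);
  try {
    HLog.Entry entry;
    while ((entry = reader.next()) != null) {   // next() is assumed to return null at EOF
      System.out.println(entry.getKey() + ": " + entry.getEdit());
    }
  } finally {
    reader.close();   // close the reader when done with it
  }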


Nested Class Summary
static class HLog.Entry
          Utility class that lets us keep track of an edit together with its key. Only used when splitting logs.
static interface HLog.Reader
           
static interface HLog.Writer
           
 
Field Summary
static long FIXED_OVERHEAD
           
static byte[] METAFAMILY
           
static boolean SPLIT_SKIP_ERRORS_DEFAULT
           
static String SPLITTING_EXT
          File Extension used while splitting an HLog into regions (HBASE-2312)
 
Constructor Summary
HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf)
          Constructor.
HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf, List<WALActionsListener> listeners, boolean failIfLogDirExists, String prefix)
          Create an edit log at the given dir location.
HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf, List<WALActionsListener> listeners, String prefix)
          Create an edit log at the given dir location.
 
Method Summary
 void abortCacheFlush(byte[] encodedRegionName)
          Abort a cache flush.
 void append(HRegionInfo info, byte[] tableName, WALEdit edits, long now, HTableDescriptor htd)
          Only used in tests.
 void append(HRegionInfo info, byte[] tableName, WALEdit edits, UUID clusterId, long now, HTableDescriptor htd)
          Append a set of edits to the log.
 void append(HRegionInfo regionInfo, HLogKey logKey, WALEdit logEdit, HTableDescriptor htd)
          Append an entry to the log.
 void close()
          Shut down the log.
 void closeAndDelete()
          Shut down the log and delete the log directory
 void completeCacheFlush(byte[] encodedRegionName, byte[] tableName, long logSeqId, boolean isMetaRegion)
          Complete the cache flush. Protected by cacheFlushLock.
protected  org.apache.hadoop.fs.Path computeFilename()
          This is a convenience method that computes a new filename using the current HLog file-number.
protected  org.apache.hadoop.fs.Path computeFilename(long filenum)
          This is a convenience method that computes a new filename with a given file-number.
static HLog.Writer createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf)
          Get a writer for the WAL.
protected  HLog.Writer createWriterInstance(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf)
          This method allows subclasses to inject different writers without having to extend other methods like rollWriter().
protected  void doWrite(HRegionInfo info, HLogKey logKey, WALEdit logEdit, HTableDescriptor htd)
           
 WALCoprocessorHost getCoprocessorHost()
           
protected  org.apache.hadoop.fs.Path getDir()
          Get the directory we are making logs in.
 long getFilenum()
           
static String getHLogDirectoryName(String serverName)
          Construct the HLog directory name
static Class<? extends HLogKey> getKeyClass(org.apache.hadoop.conf.Configuration conf)
           
static HLog.Reader getReader(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf)
          Get a reader for the WAL.
static org.apache.hadoop.fs.Path getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir)
           
 long getSequenceNumber()
           
static NavigableSet<org.apache.hadoop.fs.Path> getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path regiondir)
          Returns sorted set of edit files made by wal-log splitter, excluding files with '.temp' suffix.
static long getSyncOps()
           
static long getSyncTime()
           
static long getWriteOps()
           
static long getWriteTime()
           
 void hflush()
           
 void hsync()
           
 boolean isLowReplicationRollEnabled()
          Get LowReplication-Roller status
static boolean isMetaFamily(byte[] family)
           
static void main(String[] args)
          Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files.
protected  HLogKey makeKey(byte[] regionName, byte[] tableName, long seqnum, long now, UUID clusterId)
           
static org.apache.hadoop.fs.Path moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path edits)
          Move aside a bad edits file.
static HLogKey newKey(org.apache.hadoop.conf.Configuration conf)
           
 void registerWALActionsListener(WALActionsListener listener)
           
 byte[][] rollWriter()
          Roll the log writer.
 byte[][] rollWriter(boolean force)
          Roll the log writer.
 void setSequenceNumber(long newvalue)
          Called by HRegionServer when it opens a new region to ensure that log sequence numbers are always greater than the latest sequence number of the region being brought on-line.
 long startCacheFlush(byte[] encodedRegionName)
          By acquiring a log sequence ID, we can allow log messages to continue while we flush the cache.
 void sync()
           
 boolean unregisterWALActionsListener(WALActionsListener listener)
           
static boolean validateHLogFilename(String filename)
           
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

METAFAMILY

public static final byte[] METAFAMILY

SPLITTING_EXT

public static final String SPLITTING_EXT
File Extension used while splitting an HLog into regions (HBASE-2312)

See Also:
Constant Field Values

SPLIT_SKIP_ERRORS_DEFAULT

public static final boolean SPLIT_SKIP_ERRORS_DEFAULT
See Also:
Constant Field Values

FIXED_OVERHEAD

public static final long FIXED_OVERHEAD
Constructor Detail

HLog

public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf)
     throws IOException
Constructor.

Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
Throws:
IOException

HLog

public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf,
            List<WALActionsListener> listeners,
            String prefix)
     throws IOException
Create an edit log at the given dir location. You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.

Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
listeners - Listeners on WAL events. Listeners passed here will be registered before we do anything else; e.g. before the constructor calls rollWriter().
prefix - Should always be the hostname and port in a distributed environment; it will be URL encoded before being used. If prefix is null, "hlog" will be used.
Throws:
IOException

HLog

public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf,
            List<WALActionsListener> listeners,
            boolean failIfLogDirExists,
            String prefix)
     throws IOException
Create an edit log at the given dir location. You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.

Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
listeners - Listeners on WAL events. Listeners passed here will be registered before we do anything else; e.g. before the constructor calls rollWriter().
failIfLogDirExists - If true IOException will be thrown if dir already exists.
prefix - Should always be the hostname and port in a distributed environment; it will be URL encoded before being used. If prefix is null, "hlog" will be used.
Throws:
IOException
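
A sketch of constructing an HLog with this constructor; the directory paths and prefix are hypothetical, and an empty listener list is assumed to be acceptable:

  Configuration conf = HBaseConfiguration.create();
  FileSystem fs = FileSystem.get(conf);
  Path logDir = new Path("/hbase/.logs/host1.example.com,60020,1300000000000");   // hypothetical
  Path oldLogDir = new Path("/hbase/.oldlogs");                                   // hypothetical

  HLog wal = new HLog(fs, logDir, oldLogDir, conf,
      new ArrayList<WALActionsListener>(),   // registered before rollWriter() is called
      true,                                  // fail if logDir already exists
      "host1.example.com%2C60020");          // prefix; URL encoded, "hlog" if null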
Method Detail

getWriteOps

public static long getWriteOps()

getWriteTime

public static long getWriteTime()

getSyncOps

public static long getSyncOps()

getSyncTime

public static long getSyncTime()

registerWALActionsListener

public void registerWALActionsListener(WALActionsListener listener)

unregisterWALActionsListener

public boolean unregisterWALActionsListener(WALActionsListener listener)

getFilenum

public long getFilenum()
Returns:
Current state of the monotonically increasing file id.

setSequenceNumber

public void setSequenceNumber(long newvalue)
Called by HRegionServer when it opens a new region to ensure that log sequence numbers are always greater than the latest sequence number of the region being brought on-line.

Parameters:
newvalue - We'll set log edit/sequence number to this value if it is greater than the current value.

getSequenceNumber

public long getSequenceNumber()
Returns:
log sequence number

rollWriter

public byte[][] rollWriter()
                    throws FailedLogCloseException,
                           IOException
Roll the log writer. That is, start writing log messages to a new file. Because a log cannot be rolled during a cache flush, and a cache flush spans two method calls, a special lock needs to be obtained so that a cache flush cannot start when the log is being rolled and the log cannot be rolled during a cache flush.

Note that this method cannot be synchronized because of the following scenario: startCacheFlush runs, obtaining the cacheFlushLock; this method then starts, obtaining the lock on this but blocking while trying to obtain the cacheFlushLock; completeCacheFlush is then called and waits for the lock on this, so the cacheFlushLock is never released and the server deadlocks.

Returns:
If there are too many outstanding logs, flush the returned regions so that, the next time through, old logs can be cleaned. Returns null if there is nothing to flush. Names are encoded region names as returned by HRegionInfo.getEncodedName()
Throws:
FailedLogCloseException
IOException

rollWriter

public byte[][] rollWriter(boolean force)
                    throws FailedLogCloseException,
                           IOException
Roll the log writer. That is, start writing log messages to a new file. Because a log cannot be rolled during a cache flush, and a cache flush spans two method calls, a special lock needs to be obtained so that a cache flush cannot start when the log is being rolled and the log cannot be rolled during a cache flush.

Note that this method cannot be synchronized because of the following scenario: startCacheFlush runs, obtaining the cacheFlushLock; this method then starts, obtaining the lock on this but blocking while trying to obtain the cacheFlushLock; completeCacheFlush is then called and waits for the lock on this, so the cacheFlushLock is never released and the server deadlocks.

Parameters:
force - If true, force creation of a new writer even if no entries have been written to the current writer
Returns:
If there are too many outstanding logs, flush the returned regions so that, the next time through, old logs can be cleaned. Returns null if there is nothing to flush. Names are encoded region names as returned by HRegionInfo.getEncodedName()
Throws:
FailedLogCloseException
IOException
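
A sketch of acting on the return value; requestFlush() is a hypothetical helper, not part of this class:

  byte[][] regionsToFlush = wal.rollWriter(false);
  if (regionsToFlush != null) {
    for (byte[] encodedRegionName : regionsToFlush) {
      requestFlush(encodedRegionName);   // hypothetical: ask the server to flush this region
    }
  }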

createWriterInstance

protected HLog.Writer createWriterInstance(org.apache.hadoop.fs.FileSystem fs,
                                           org.apache.hadoop.fs.Path path,
                                           org.apache.hadoop.conf.Configuration conf)
                                    throws IOException
This method allows subclasses to inject different writers without having to extend other methods like rollWriter().

Parameters:
fs -
path -
conf -
Returns:
Writer instance
Throws:
IOException

getReader

public static HLog.Reader getReader(org.apache.hadoop.fs.FileSystem fs,
                                    org.apache.hadoop.fs.Path path,
                                    org.apache.hadoop.conf.Configuration conf)
                             throws IOException
Get a reader for the WAL.

Parameters:
fs -
path -
conf -
Returns:
A WAL reader. Close when done with it.
Throws:
IOException

createWriter

public static HLog.Writer createWriter(org.apache.hadoop.fs.FileSystem fs,
                                       org.apache.hadoop.fs.Path path,
                                       org.apache.hadoop.conf.Configuration conf)
                                throws IOException
Get a writer for the WAL.

Parameters:
fs -
path -
conf -
Returns:
A WAL writer. Close when done with it.
Throws:
IOException
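
A sketch that pairs getReader(...) and createWriter(...) to copy one WAL file into another; the paths are hypothetical, and the Writer is assumed to accept HLog.Entry instances via append and to expose sync():

  HLog.Reader in = HLog.getReader(fs, new Path("/hbase/.oldlogs/example-wal"), conf);
  HLog.Writer out = HLog.createWriter(fs, new Path("/tmp/example-wal-copy"), conf);
  try {
    HLog.Entry entry;
    while ((entry = in.next()) != null) {
      out.append(entry);   // assumed HLog.Writer.append(HLog.Entry) signature
    }
    out.sync();            // assumed HLog.Writer.sync() to flush pending writes
  } finally {
    in.close();
    out.close();
  }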

computeFilename

protected org.apache.hadoop.fs.Path computeFilename()
This is a convenience method that computes a new filename using the current HLog file-number.

Returns:
Path

computeFilename

protected org.apache.hadoop.fs.Path computeFilename(long filenum)
This is a convenience method that computes a new filename with a given file-number.

Parameters:
filenum - to use
Returns:
Path

closeAndDelete

public void closeAndDelete()
                    throws IOException
Shut down the log and delete the log directory

Throws:
IOException

close

public void close()
           throws IOException
Shut down the log.

Throws:
IOException

makeKey

protected HLogKey makeKey(byte[] regionName,
                          byte[] tableName,
                          long seqnum,
                          long now,
                          UUID clusterId)
Parameters:
now -
regionName -
tableName -
clusterId -
Returns:
New log key.

append

public void append(HRegionInfo regionInfo,
                   HLogKey logKey,
                   WALEdit logEdit,
                   HTableDescriptor htd)
            throws IOException
Append an entry to the log.

Parameters:
regionInfo -
logEdit -
logKey -
Throws:
IOException

append

public void append(HRegionInfo info,
                   byte[] tableName,
                   WALEdit edits,
                   long now,
                   HTableDescriptor htd)
            throws IOException
Only used in tests.

Parameters:
info -
tableName -
edits -
now -
htd -
Throws:
IOException

append

public void append(HRegionInfo info,
                   byte[] tableName,
                   WALEdit edits,
                   UUID clusterId,
                   long now,
                   HTableDescriptor htd)
            throws IOException
Append a set of edits to the log. Log edits are keyed by (encoded) regionName, rowname, and log-sequence-id. Later, if we sort by these keys, we obtain all the relevant edits for a given key-range of the HRegion (TODO). Any edits that do not have a matching COMPLETE_CACHEFLUSH message can be discarded.

Logs cannot be restarted once closed, or once the HLog process dies. Each time the HLog starts, it must create a new log. This means that other systems should process the log appropriately upon each startup (and prior to initializing HLog). The synchronized modifier prevents appends during the completion of a cache flush or for the duration of a log roll.

Parameters:
info -
tableName -
edits -
clusterId - The originating clusterId for this edit (for replication)
now -
Throws:
IOException
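
A sketch of appending an edit and forcing it to the filesystem; wal, regionInfo and htd are assumed to describe an online region, and HConstants.DEFAULT_CLUSTER_ID is assumed to be the clusterId used when replication is not involved:

  WALEdit edits = new WALEdit();
  edits.add(new KeyValue(Bytes.toBytes("row1"),
                         Bytes.toBytes("cf"),
                         Bytes.toBytes("q"),
                         Bytes.toBytes("value")));
  long now = System.currentTimeMillis();
  wal.append(regionInfo, htd.getName(), edits, HConstants.DEFAULT_CLUSTER_ID, now, htd);
  wal.sync();   // make the edit durable before acknowledging the client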

hsync

public void hsync()
           throws IOException
Throws:
IOException

hflush

public void hflush()
            throws IOException
Throws:
IOException

sync

public void sync()
          throws IOException
Specified by:
sync in interface org.apache.hadoop.fs.Syncable
Throws:
IOException

doWrite

protected void doWrite(HRegionInfo info,
                       HLogKey logKey,
                       WALEdit logEdit,
                       HTableDescriptor htd)
                throws IOException
Throws:
IOException

startCacheFlush

public long startCacheFlush(byte[] encodedRegionName)
By acquiring a log sequence ID, we can allow log messages to continue while we flush the cache. We also acquire a lock so that the log is not rolled between the start and completion of a cache flush; otherwise the log-seq-id for the flush would not appear in the correct logfile.

Ensuring that flushes and log rolls do not happen concurrently also allows us to temporarily put a log-seq-number in lastSeqWritten against the region being flushed that might not be the earliest in-memory log-seq-number for that region. By the time the flush is completed or aborted, and before the cacheFlushLock is released, lastSeqWritten is guaranteed to again hold the oldest in-memory edit's lsn for the region that was being flushed. In this method, by removing the entry in lastSeqWritten for the region being flushed, we ensure that the next edit inserted in this region will be correctly recorded by append(HRegionInfo, byte[], WALEdit, long, HTableDescriptor). The lsn of the earliest in-memory edit - which is now in the memstore snapshot - is saved temporarily in the lastSeqWritten map while the flush is active.

Returns:
sequence ID to pass to completeCacheFlush(byte[], byte[], long, boolean)
See Also:
completeCacheFlush(byte[], byte[], long, boolean), abortCacheFlush(byte[])

completeCacheFlush

public void completeCacheFlush(byte[] encodedRegionName,
                               byte[] tableName,
                               long logSeqId,
                               boolean isMetaRegion)
                        throws IOException
Complete the cache flush. Protected by cacheFlushLock.

Parameters:
encodedRegionName -
tableName -
logSeqId -
Throws:
IOException

abortCacheFlush

public void abortCacheFlush(byte[] encodedRegionName)
Abort a cache flush. Call if the flush fails. Note that the only recovery for an aborted flush currently is a restart of the regionserver so the snapshot content dropped by the failure gets restored to the memstore.
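
Together, startCacheFlush(byte[]), completeCacheFlush(byte[], byte[], long, boolean) and this method form the flush protocol. A sketch, where flushMemstoreSnapshot() is a hypothetical step that persists the memstore snapshot:

  long flushSeqId = wal.startCacheFlush(encodedRegionName);
  try {
    flushMemstoreSnapshot();   // hypothetical: write the snapshot out as store files
    wal.completeCacheFlush(encodedRegionName, tableName, flushSeqId, isMetaRegion);
  } catch (IOException e) {
    wal.abortCacheFlush(encodedRegionName);   // restore bookkeeping after a failed flush
    throw e;
  }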


isMetaFamily

public static boolean isMetaFamily(byte[] family)
Parameters:
family -
Returns:
true if the column is a meta column

isLowReplicationRollEnabled

public boolean isLowReplicationRollEnabled()
Get LowReplication-Roller status

Returns:
lowReplicationRollEnabled

getKeyClass

public static Class<? extends HLogKey> getKeyClass(org.apache.hadoop.conf.Configuration conf)

newKey

public static HLogKey newKey(org.apache.hadoop.conf.Configuration conf)
                      throws IOException
Throws:
IOException

getHLogDirectoryName

public static String getHLogDirectoryName(String serverName)
Construct the HLog directory name

Parameters:
serverName - Server name formatted as described in ServerName
Returns:
the HLog directory name

getDir

protected org.apache.hadoop.fs.Path getDir()
Get the directory we are making logs in.

Returns:
dir

validateHLogFilename

public static boolean validateHLogFilename(String filename)

getSplitEditFilesSorted

public static NavigableSet<org.apache.hadoop.fs.Path> getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs,
                                                                              org.apache.hadoop.fs.Path regiondir)
                                                                       throws IOException
Returns sorted set of edit files made by wal-log splitter, excluding files with '.temp' suffix.

Parameters:
fs -
regiondir -
Returns:
Files in passed regiondir as a sorted set.
Throws:
IOException

moveAsideBadEditsFile

public static org.apache.hadoop.fs.Path moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs,
                                                              org.apache.hadoop.fs.Path edits)
                                                       throws IOException
Move aside a bad edits file.

Parameters:
fs -
edits - Edits file to move aside.
Returns:
The name of the moved aside file.
Throws:
IOException

getRegionDirRecoveredEditsDir

public static org.apache.hadoop.fs.Path getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir)
Parameters:
regiondir - This regions directory in the filesystem.
Returns:
The directory that holds recovered edits files for the region regiondir
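
A sketch that uses the recovered-edits helpers together; fs and regiondir are assumed to be in scope, and replayEdits() is a hypothetical replay step:

  System.out.println("Recovered edits live under "
      + HLog.getRegionDirRecoveredEditsDir(regiondir));
  for (Path editsFile : HLog.getSplitEditFilesSorted(fs, regiondir)) {
    try {
      replayEdits(editsFile);   // hypothetical: replay this file into the region
    } catch (IOException e) {
      Path movedAside = HLog.moveAsideBadEditsFile(fs, editsFile);
      System.err.println("Bad edits file moved to " + movedAside);
    }
  }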

getCoprocessorHost

public WALCoprocessorHost getCoprocessorHost()
Returns:
Coprocessor host.

main

public static void main(String[] args)
                 throws IOException
Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files.

Parameters:
args -
Throws:
IOException

