org.apache.hadoop.hbase.regionserver
Class HStore

java.lang.Object
  org.apache.hadoop.hbase.regionserver.HStore

All Implemented Interfaces: HeapSize, Store, StoreConfigInformation

@InterfaceAudience.Private
public class HStore
extends Object
implements Store, HeapSize, StoreConfigInformation
A Store holds a column family in a Region. It holds a memstore and a set of zero or more StoreFiles, which stretch backwards over time.
There's no reason to consider append-logging at this level; all logging and locking is handled at the HRegion level. Store just provides services to manage sets of StoreFiles. One of the most important of those services is compaction, in which files are aggregated once they pass a configurable threshold.
The only thing having to do with logs that Store needs to deal with is the reconstructionLog. This is a segment of an HRegion's log that might NOT be present upon startup. If the param is NULL, there's nothing to do. If the param is non-NULL, we need to process the log to reconstruct a TreeMap that might not have been written to disk before the process died.
It's assumed that after this constructor returns, the reconstructionLog file will be deleted (by whoever has instantiated the Store).
Locking and transactions are handled at a higher level. This API should not be called directly but by an HRegion manager.
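To make the intended call pattern concrete, here is a minimal sketch of region-level code inspecting its stores. It assumes HRegion exposes a getStores()-style accessor returning one Store per column family (an assumption about this HBase version); on the Store side, only accessors documented on this page are used.

```java
import java.util.Map;

import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.Store;

// Hedged sketch: an HRegion-level manager (the only intended caller)
// consulting its stores. Assumes HRegion.getStores() returns one Store
// per column family.
public final class StoreInspection {
  static void inspect(HRegion region) {
    for (Map.Entry<byte[], Store> entry : region.getStores().entrySet()) {
      Store store = entry.getValue();
      System.out.println(store.getColumnFamilyName()
          + ": " + store.getStorefilesCount() + " storefile(s), "
          + store.getMemStoreSize() + " byte(s) in memstore");
    }
  }
}
```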
| Field Summary | |
|---|---|
| static String | BLOCKING_STOREFILES_KEY |
| static String | COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY |
| static long | DEEP_OVERHEAD |
| static int | DEFAULT_BLOCKING_STOREFILE_COUNT |
| static int | DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER |
| static long | FIXED_OVERHEAD |
| protected MemStore | memstore |

| Fields inherited from interface org.apache.hadoop.hbase.regionserver.Store |
|---|
| NO_PRIORITY, PRIORITY_USER |
| Constructor Summary | |
|---|---|
| protected | HStore(HRegion region, HColumnDescriptor family, org.apache.hadoop.conf.Configuration confParam) Constructor |
| Method Summary | |
|---|---|
| long | add(KeyValue kv) Adds a value to the memstore. |
| void | addChangedReaderObserver(ChangedReadersObserver o) |
| boolean | areWritesEnabled() |
| void | assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath) Throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid. |
| void | bulkLoadHFile(String srcPathStr, long seqNum) Should only be called from HRegion. |
| void | cancelRequestedCompaction(CompactionContext compaction) |
| boolean | canSplit() |
| com.google.common.collect.ImmutableCollection<StoreFile> | close() Close all the readers; we don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes. |
| List<StoreFile> | compact(CompactionContext compaction) Compact the StoreFiles. |
| void | compactRecentForTestingAssumingDefaultPolicy(int N) Tries to compact N recent files for testing. |
| protected void | completeCompaction(Collection<StoreFile> compactedFiles) |
| void | completeCompactionMarker(WALProtos.CompactionDescriptor compaction) Call to complete a compaction. |
| org.apache.hadoop.hbase.regionserver.StoreFlushContext | createFlushContext(long cacheFlushId) |
| StoreFile.Writer | createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) |
| protected long | delete(KeyValue kv) Adds a value to the memstore. |
| void | deleteChangedReaderObserver(ChangedReadersObserver o) |
| protected List<org.apache.hadoop.fs.Path> | flushCache(long logCacheFlushId, SortedSet<KeyValue> snapshot, TimeRangeTracker snapshotTimeRangeTracker, AtomicLong flushedSize, MonitoredTask status) Write out current snapshot. |
| long | getBlockingFileCount() The number of files required before flushes for this store will be blocked. |
| static int | getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf) Returns the configured bytesPerChecksum value. |
| CacheConfig | getCacheConfig() Used for tests. |
| static ChecksumType | getChecksumType(org.apache.hadoop.conf.Configuration conf) Returns the configured checksum algorithm. |
| static int | getCloseCheckInterval() |
| String | getColumnFamilyName() |
| long | getCompactionCheckMultiplier() |
| CompactionProgress | getCompactionProgress() Getter for the CompactionProgress object. |
| int | getCompactPriority() |
| KeyValue.KVComparator | getComparator() |
| RegionCoprocessorHost | getCoprocessorHost() |
| HFileDataBlockEncoder | getDataBlockEncoder() |
| HColumnDescriptor | getFamily() |
| org.apache.hadoop.fs.FileSystem | getFileSystem() |
| HRegion | getHRegion() |
| long | getLastCompactSize() |
| long | getMaxMemstoreTS() |
| long | getMemstoreFlushSize() TODO: remove after HBASE-7252 is fixed. |
| long | getMemStoreSize() |
| HRegionFileSystem | getRegionFileSystem() |
| HRegionInfo | getRegionInfo() |
| KeyValue | getRowKeyAtOrBefore(byte[] row) Find the key that matches row exactly, or the one that immediately precedes it. |
| ScanInfo | getScanInfo() |
| KeyValueScanner | getScanner(Scan scan, NavigableSet<byte[]> targetCols) Return a scanner for both the memstore and the HStore files. |
| List<KeyValueScanner> | getScanners(boolean cacheBlocks, boolean isGet, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow) Get all scanners with no filtering based on TTL (that happens further down the line). |
| long | getSize() |
| long | getSmallestReadPoint() |
| byte[] | getSplitPoint() Determines if the Store should be split. |
| Collection<StoreFile> | getStorefiles() |
| int | getStorefilesCount() |
| long | getStorefilesIndexSize() |
| long | getStorefilesSize() |
| long | getStoreFileTtl() |
| static org.apache.hadoop.fs.Path | getStoreHomedir(org.apache.hadoop.fs.Path tabledir, HRegionInfo hri, byte[] family) Deprecated. |
| static org.apache.hadoop.fs.Path | getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, byte[] family) Deprecated. |
| long | getStoreSizeUncompressed() |
| TableName | getTableName() |
| long | getTotalStaticBloomSize() Returns the total byte size of all Bloom filter bit arrays. |
| long | getTotalStaticIndexSize() Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes. |
| boolean | hasReferences() |
| boolean | hasTooManyStoreFiles() |
| long | heapSize() |
| boolean | isMajorCompaction() |
| boolean | needsCompaction() See if there are too many store files in this store. |
| CompactionContext | requestCompaction() |
| CompactionContext | requestCompaction(int priority, CompactionRequest baseRequest) |
| void | rollback(KeyValue kv) Removes a kv from the memstore. |
| boolean | throttleCompaction(long compactionSize) |
| long | timeOfOldestEdit() When was the last edit done in the memstore. |
| String | toString() |
| void | triggerMajorCompaction() |
| long | updateColumnValue(byte[] row, byte[] f, byte[] qualifier, long newValue) Used in tests. |
| long | upsert(Iterable<Cell> cells, long readpoint) Adds or replaces the specified KeyValues. |
| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait |
| Field Detail |
|---|
public static final String COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY
public static final String BLOCKING_STOREFILES_KEY
public static final int DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER
public static final int DEFAULT_BLOCKING_STOREFILE_COUNT
protected final MemStore memstore
public static final long FIXED_OVERHEAD
public static final long DEEP_OVERHEAD
| Constructor Detail |
|---|
protected HStore(HRegion region,
HColumnDescriptor family,
org.apache.hadoop.conf.Configuration confParam)
throws IOException
Parameters:
region -
family - HColumnDescriptor for this column
confParam - configuration object
Throws: IOException

| Method Detail |
|---|
public String getColumnFamilyName()
Specified by: getColumnFamilyName in interface Store

public TableName getTableName()
Specified by: getTableName in interface Store

public org.apache.hadoop.fs.FileSystem getFileSystem()
Specified by: getFileSystem in interface Store

public HRegionFileSystem getRegionFileSystem()

public long getStoreFileTtl()
Specified by: getStoreFileTtl in interface StoreConfigInformation

public long getMemstoreFlushSize()
TODO: remove after HBASE-7252 is fixed.
Specified by: getMemstoreFlushSize in interface StoreConfigInformation

public long getCompactionCheckMultiplier()
Specified by: getCompactionCheckMultiplier in interface StoreConfigInformation

public long getBlockingFileCount()
The number of files required before flushes for this store will be blocked.
Specified by: getBlockingFileCount in interface StoreConfigInformation
public static int getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf)
Returns the configured bytesPerChecksum value.
Parameters:
conf - The configuration

public static ChecksumType getChecksumType(org.apache.hadoop.conf.Configuration conf)
Returns the configured checksum algorithm.
Parameters:
conf - The configuration
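A short usage sketch for these two static helpers follows; the HBaseConfiguration and ChecksumType import paths are assumptions about this HBase version.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.util.ChecksumType;

// Sketch: resolving the effective checksum settings from a configuration;
// defaults apply when the relevant keys are unset.
public final class ChecksumSettings {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    int bytesPerChecksum = HStore.getBytesPerChecksum(conf);
    ChecksumType type = HStore.getChecksumType(conf);
    System.out.println("checksum " + type + " computed every "
        + bytesPerChecksum + " byte(s)");
  }
}
```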
public static int getCloseCheckInterval()

public HColumnDescriptor getFamily()
Specified by: getFamily in interface Store

public long getMaxMemstoreTS()
Specified by: getMaxMemstoreTS in interface Store
@Deprecated
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir,
                                                        HRegionInfo hri,
                                                        byte[] family)
Deprecated.
Parameters:
tabledir - Path to where the table is being stored
hri - HRegionInfo for the region
family - HColumnDescriptor describing the column family

@Deprecated
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir,
                                                        String encodedName,
                                                        byte[] family)
Deprecated.
Parameters:
tabledir - Path to where the table is being stored
encodedName - Encoded region name
family - HColumnDescriptor describing the column family
public HFileDataBlockEncoder getDataBlockEncoder()
Specified by: getDataBlockEncoder in interface Store

public long add(KeyValue kv)
Adds a value to the memstore.
Specified by: add in interface Store

public long timeOfOldestEdit()
When was the last edit done in the memstore.
Specified by: timeOfOldestEdit in interface Store

protected long delete(KeyValue kv)
Adds a value to the memstore.
Parameters:
kv -

public void rollback(KeyValue kv)
Removes a kv from the memstore.
Specified by: rollback in interface Store

public Collection<StoreFile> getStorefiles()
Specified by: getStorefiles in interface Store
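The add/rollback pairing above is easiest to see side by side. The following is a hedged sketch, not an HRegion excerpt; the walAppendSucceeded flag is a stand-in for the logging step that HRegion actually performs around these calls.

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch of the add/rollback pairing; real callers live inside
// HRegion, which owns logging and locking.
final class MemstoreEditSketch {
  static void applyEdit(HStore store, boolean walAppendSucceeded) {
    KeyValue kv = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("value"));
    long result = store.add(kv);      // long return value, per the signature above
    if (!walAppendSucceeded) {
      store.rollback(kv);             // remove the kv from the memstore again
    }
    System.out.println("add() returned " + result);
  }
}
```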
public void assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath)
                           throws IOException
Throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid.
Specified by: assertBulkLoadHFileOk in interface Store
Throws: IOException

public void bulkLoadHFile(String srcPathStr,
                          long seqNum)
                   throws IOException
This method should only be called from HRegion.
Specified by: bulkLoadHFile in interface Store
Parameters:
seqNum - sequence Id associated with the HFile
Throws: IOException
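Putting the two bulk-load methods together, here is a hedged sketch of the validate-then-load sequence (which, per the note above, belongs inside HRegion):

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;

// Hedged sketch: validate the HFile against this store's region, then load it.
final class BulkLoadSketch {
  static void bulkLoad(HStore store, String srcPathStr, long seqNum) throws IOException {
    // Throws WrongRegionException / InvalidHFileException on bad input.
    store.assertBulkLoadHFileOk(new Path(srcPathStr));
    store.bulkLoadHFile(srcPathStr, seqNum); // seqNum: sequence id associated with the HFile
  }
}
```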
public com.google.common.collect.ImmutableCollection<StoreFile> close()
                                                                throws IOException
Close all the readers. We don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes.
Specified by: close in interface Store
Returns: the StoreFiles that were previously being used.
Throws: IOException - on failure
protected List<org.apache.hadoop.fs.Path> flushCache(long logCacheFlushId,
                                                     SortedSet<KeyValue> snapshot,
                                                     TimeRangeTracker snapshotTimeRangeTracker,
                                                     AtomicLong flushedSize,
                                                     MonitoredTask status)
                                              throws IOException
Write out current snapshot. Presumes snapshot() has been called previously.
Parameters:
logCacheFlushId - flush sequence number
snapshot -
snapshotTimeRangeTracker -
flushedSize - The number of bytes flushed
status -
Throws: IOException
public StoreFile.Writer createWriterInTmp(long maxKeyCount,
                                          Compression.Algorithm compression,
                                          boolean isCompaction,
                                          boolean includeMVCCReadpoint)
                                   throws IOException
Specified by: createWriterInTmp in interface Store
Throws: IOException
public List<KeyValueScanner> getScanners(boolean cacheBlocks,
                                         boolean isGet,
                                         boolean isCompaction,
                                         ScanQueryMatcher matcher,
                                         byte[] startRow,
                                         byte[] stopRow)
                                  throws IOException
Get all scanners with no filtering based on TTL (that happens further down the line).
Specified by: getScanners in interface Store
Throws: IOException

public void addChangedReaderObserver(ChangedReadersObserver o)
Specified by: addChangedReaderObserver in interface Store

public void deleteChangedReaderObserver(ChangedReadersObserver o)
Specified by: deleteChangedReaderObserver in interface Store
public List<StoreFile> compact(CompactionContext compaction)
throws IOException
Compact the StoreFiles. While a compaction runs, the Store can work as usual, getting values from StoreFiles and writing new StoreFiles from the memstore. Existing StoreFiles are not destroyed until the new compacted StoreFile is completely written out to disk.
The compactLock prevents multiple simultaneous compactions. The structureLock prevents us from interfering with other write operations.
We don't want to hold the structureLock for the whole time, as a compact() can be lengthy and we want to allow cache-flushes during this period.
Compaction events should be idempotent, since there is no IO fencing for the region directory in HDFS; a region server might still try to complete a compaction after it has lost the region. That is why the following events are carefully ordered for a compaction:
1. Compaction writes new files under the region/.tmp directory (compaction output).
2. Compaction atomically moves the temporary file under the region directory.
3. Compaction appends a WAL edit containing the compaction input and output files, and forces a sync on the WAL.
4. Compaction deletes the input files from the region directory.
Failure conditions are handled like this:
- If the RS fails before 2, the compaction won't complete. Even if the RS lives on and finishes the compaction later, it will only write the new data file to the region directory. Since we already have this data, this is idempotent, but we will have a redundant copy of the data.
- If the RS fails between 2 and 3, the region will have a redundant copy of the data. The RS that failed won't be able to finish sync() for the WAL because of lease recovery on the WAL.
- If the RS fails after 3, the region server that opens the region will pick up the compaction marker from the WAL and replay it by removing the compaction input files. The failed RS can also attempt to delete those files, but the operation will be idempotent.
See HBASE-2231 for details.
Specified by: compact in interface Store
Parameters:
compaction - compaction details obtained from requestCompaction()
Throws: IOException
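The request/execute/cancel lifecycle implied by these methods can be sketched as below. This is a hedged sketch: the CompactionContext import path, and the assumption that requestCompaction() returns null when nothing is selected, are not stated on this page.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;

// Hedged sketch of the compaction lifecycle; the region server's
// compaction threads are the real drivers of these calls.
final class CompactionSketch {
  static void maybeCompact(HStore store) throws IOException {
    CompactionContext compaction = store.requestCompaction();
    if (compaction == null) {
      return; // assumption: null means nothing was selected for compaction
    }
    try {
      List<StoreFile> compacted = store.compact(compaction);
      System.out.println("compacted into "
          + (compacted == null ? 0 : compacted.size()) + " file(s)");
    } catch (IOException e) {
      store.cancelRequestedCompaction(compaction); // release the selection
      throw e;
    }
  }
}
```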
public void completeCompactionMarker(WALProtos.CompactionDescriptor compaction)
                              throws IOException
Call to complete a compaction.
Specified by: completeCompactionMarker in interface Store
Parameters:
compaction -
Throws: IOException

public void compactRecentForTestingAssumingDefaultPolicy(int N)
                                                  throws IOException
This method tries to compact N recent files for testing.
Parameters:
N - Number of files.
Throws: IOException

public boolean hasReferences()
Specified by: hasReferences in interface Store

public CompactionProgress getCompactionProgress()
Getter for the CompactionProgress object.
Specified by: getCompactionProgress in interface Store
public boolean isMajorCompaction()
                          throws IOException
Specified by: isMajorCompaction in interface Store
Throws: IOException

public CompactionContext requestCompaction()
                                    throws IOException
Specified by: requestCompaction in interface Store
Throws: IOException

public CompactionContext requestCompaction(int priority,
                                           CompactionRequest baseRequest)
                                    throws IOException
Specified by: requestCompaction in interface Store
Throws: IOException

public void cancelRequestedCompaction(CompactionContext compaction)
Specified by: cancelRequestedCompaction in interface Store

protected void completeCompaction(Collection<StoreFile> compactedFiles)
                           throws IOException
Throws: IOException
public KeyValue getRowKeyAtOrBefore(byte[] row)
                             throws IOException
Find the key that matches row exactly, or the one that immediately precedes it.
Specified by: getRowKeyAtOrBefore in interface Store
Parameters:
row - The row key of the targeted row.
Throws: IOException

public boolean canSplit()
Specified by: canSplit in interface Store

public byte[] getSplitPoint()
Determines if the Store should be split.
Specified by: getSplitPoint in interface Store

public long getLastCompactSize()
Specified by: getLastCompactSize in interface Store

public long getSize()
Specified by: getSize in interface Store

public void triggerMajorCompaction()
Specified by: triggerMajorCompaction in interface Store
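A split policy working against this API might combine canSplit() and getSplitPoint() as in this hedged sketch; the assumption that getSplitPoint() returns null when no point can be determined is not stated on this page.

```java
// Hedged sketch: ask whether the store permits a split, then for the row
// to split at.
final class SplitCheckSketch {
  static byte[] findSplitRow(HStore store) {
    if (!store.canSplit()) {
      return null; // e.g. the store still holds reference files
    }
    return store.getSplitPoint(); // assumed null when no split point exists
  }
}
```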
public KeyValueScanner getScanner(Scan scan,
                                  NavigableSet<byte[]> targetCols)
                           throws IOException
Return a scanner for both the memstore and the HStore files.
Specified by: getScanner in interface Store
Parameters:
scan - Scan to apply when scanning the stores
targetCols - columns to scan
Throws: IOException - on failure
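A hedged usage sketch for getScanner, selecting a single qualifier over a row range; the Scan constructor and Bytes.BYTES_COMPARATOR choices are assumptions about this HBase version.

```java
import java.io.IOException;
import java.util.NavigableSet;
import java.util.TreeSet;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch: a combined memstore + storefile scanner for one column.
// Real reads go through HRegion, which wraps the per-store scanners.
final class StoreScanSketch {
  static KeyValueScanner openScanner(HStore store) throws IOException {
    Scan scan = new Scan(Bytes.toBytes("startRow"), Bytes.toBytes("stopRow"));
    NavigableSet<byte[]> cols = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
    cols.add(Bytes.toBytes("q"));
    return store.getScanner(scan, cols);
  }
}
```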
public String toString()
Overrides: toString in class Object

public int getStorefilesCount()
Specified by: getStorefilesCount in interface Store

public long getStoreSizeUncompressed()
Specified by: getStoreSizeUncompressed in interface Store

public long getStorefilesSize()
Specified by: getStorefilesSize in interface Store

public long getStorefilesIndexSize()
Specified by: getStorefilesIndexSize in interface Store

public long getTotalStaticIndexSize()
Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes.
Specified by: getTotalStaticIndexSize in interface Store

public long getTotalStaticBloomSize()
Returns the total byte size of all Bloom filter bit arrays.
Specified by: getTotalStaticBloomSize in interface Store

public long getMemStoreSize()
Specified by: getMemStoreSize in interface Store

public int getCompactPriority()
Specified by: getCompactPriority in interface Store

public boolean throttleCompaction(long compactionSize)
Specified by: throttleCompaction in interface Store

public HRegion getHRegion()

public RegionCoprocessorHost getCoprocessorHost()
Specified by: getCoprocessorHost in interface Store

public HRegionInfo getRegionInfo()
Specified by: getRegionInfo in interface Store

public boolean areWritesEnabled()
Specified by: areWritesEnabled in interface Store

public long getSmallestReadPoint()
Specified by: getSmallestReadPoint in interface Store
public long updateColumnValue(byte[] row,
                              byte[] f,
                              byte[] qualifier,
                              long newValue)
                       throws IOException
Used in tests.
Parameters:
row - row to update
f - family to update
qualifier - qualifier to update
newValue - the new value to set into memstore
Throws: IOException
public long upsert(Iterable<Cell> cells,
                   long readpoint)
            throws IOException
Adds or replaces the specified KeyValues. For each KeyValue specified, if a cell with the same row, family, and qualifier exists in MemStore, it will be replaced. Otherwise, it will just be inserted to MemStore.
This operation is atomic on each KeyValue (row/family/qualifier) but not necessarily atomic across all of them.
Specified by: upsert in interface Store
Parameters:
readpoint - readpoint below which we can safely remove duplicate KVs
Throws: IOException
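A hedged sketch of upsert over a single cell. KeyValue is used as the Cell implementation, and taking the readpoint from getSmallestReadPoint() is one plausible choice, not something this page prescribes.

```java
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch: replace-or-insert one cell in the memstore. Atomic for
// this row/family/qualifier, but not across a larger batch.
final class UpsertSketch {
  static long upsertOne(HStore store) throws IOException {
    Cell cell = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("counter"), Bytes.toBytes(42L));
    return store.upsert(Collections.singletonList(cell),
        store.getSmallestReadPoint());
  }
}
```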
public org.apache.hadoop.hbase.regionserver.StoreFlushContext createFlushContext(long cacheFlushId)
Specified by: createFlushContext in interface Store

public boolean needsCompaction()
See if there are too many store files in this store.
Specified by: needsCompaction in interface Store

public CacheConfig getCacheConfig()
Used for tests.
Specified by: getCacheConfig in interface Store

public long heapSize()
Specified by: heapSize in interface HeapSize

public KeyValue.KVComparator getComparator()
Specified by: getComparator in interface Store

public ScanInfo getScanInfo()
Specified by: getScanInfo in interface Store

public boolean hasTooManyStoreFiles()
Specified by: hasTooManyStoreFiles in interface Store
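Finally, a hedged sketch of the kind of pressure check a region server's housekeeping code might run, using only accessors documented on this page:

```java
// Hedged sketch: report flush/compaction pressure for one store.
final class PressureCheckSketch {
  static void report(HStore store) {
    if (store.hasTooManyStoreFiles()) {
      System.out.println(store.getColumnFamilyName()
          + " exceeds the blocking file count (" + store.getBlockingFileCount() + ")");
    }
    if (store.needsCompaction()) {
      System.out.println("compaction needed, priority " + store.getCompactPriority());
    }
  }
}
```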