java.lang.Object
  org.apache.hadoop.hbase.regionserver.StoreFile
public class StoreFile
A Store data file. Stores usually have one or more of these files. They are produced by flushing the memstore to disk. To create one, call createWriter(FileSystem, Path, int, Configuration, CacheConfig) and append data. Be sure to add any metadata before calling close on the Writer (use the appendMetadata convenience methods). On close, a StoreFile is sitting in the Filesystem. To refer to it, create a StoreFile instance passing filesystem and path. To read, call createReader().

StoreFiles may also reference store files in another Store. The reason for this asymmetric pattern, where a different instance is used for writing than for reading, is that a store file is written once but read many times.
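The lifecycle above can be sketched in code. The following is a minimal, hypothetical example: the family directory path, the KeyValue being appended, the Writer methods getPath/append/appendMetadata, and the StoreFile constructor arguments are assumptions for illustration and are not documented on this page.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreFileLifecycleSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    CacheConfig cacheConf = new CacheConfig(conf);
    // Hypothetical column family directory.
    Path familyDir = new Path("/hbase/mytable/region/cf");

    // Write once: create a writer, append data, add metadata, then close.
    StoreFile.Writer writer = StoreFile.createWriter(
        fs, familyDir, StoreFile.DEFAULT_BLOCKSIZE_SMALL, conf, cacheConf);
    writer.append(new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("qual"), Bytes.toBytes("value")));
    // Metadata must be added before close; appendMetadata(maxSequenceId, majorCompaction)
    // is one of the convenience methods mentioned above (signature assumed here).
    writer.appendMetadata(1L, false);
    writer.close();

    // Read many: open a StoreFile over the finished file and create a Reader.
    // The constructor arguments below are assumed; the class description only
    // says "passing filesystem and path".
    StoreFile sf = new StoreFile(fs, writer.getPath(), conf, cacheConf,
        StoreFile.BloomType.NONE);
    StoreFile.Reader reader = sf.createReader();
    // ... serve reads through the reader ...
    sf.closeReader(true); // true: evict cached blocks on close
  }
}
```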
| Nested Class Summary | |
|---|---|
| static class | StoreFile.BloomType |
| static class | StoreFile.Reader: Reader for a StoreFile. |
| static class | StoreFile.Writer: A StoreFile writer. |
| Field Summary | |
|---|---|
| static byte[] | BULKLOAD_TASK_KEY: Meta key set when store file is a result of a bulk load |
| static byte[] | BULKLOAD_TIME_KEY |
| static int | DEFAULT_BLOCKSIZE_SMALL |
| static byte[] | MAJOR_COMPACTION_KEY: Major compaction flag in FileInfo |
| static byte[] | MAX_SEQ_ID_KEY: Max Sequence ID in FileInfo |
| static byte[] | TIMERANGE_KEY: Key for Timerange information in metadata |
| Method Summary | |
|---|---|
| void | closeReader(boolean evictOnClose) |
| static HDFSBlocksDistribution | computeHDFSBlockDistribution(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path p): Helper function to compute the HDFS block distribution of a given file. |
| StoreFile.Reader | createReader() |
| static StoreFile.Writer | createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int blocksize, Compression.Algorithm algorithm, KeyValue.KVComparator c, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf, StoreFile.BloomType bloomType, long maxKeyCount): Create a store file writer. |
| static StoreFile.Writer | createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int blocksize, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf): Get a store file writer. |
| void | deleteReader(): Delete this file. |
| long | getBulkLoadTimestamp(): Return the timestamp at which this bulk load file was generated. |
| HDFSBlocksDistribution | getHDFSBlockDistribution() |
| long | getMaxMemstoreTS() |
| static long | getMaxMemstoreTSInList(Collection<StoreFile> sfs): Return the largest memstoreTS found across all store files in the given list. |
| long | getMaxSequenceId() |
| static long | getMaxSequenceIdInList(Collection<StoreFile> sfs): Return the highest sequence ID found across all store files in the given list. |
| long | getModificationTimeStamp() |
| StoreFile.Reader | getReader() |
| static org.apache.hadoop.fs.Path | getUniqueFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) |
| static boolean | isReference(org.apache.hadoop.fs.Path p) |
| static boolean | isReference(org.apache.hadoop.fs.Path p, Matcher m) |
| static org.apache.hadoop.fs.Path | rename(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path tgt): Utility to help with rename. |
| void | setMaxMemstoreTS(long maxMemstoreTS) |
| String | toString() |
| String | toStringDetailed() |
| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait |
| Field Detail |
|---|
public static final byte[] MAX_SEQ_ID_KEY
public static final byte[] MAJOR_COMPACTION_KEY
public static final byte[] TIMERANGE_KEY
public static final int DEFAULT_BLOCKSIZE_SMALL
public static final byte[] BULKLOAD_TASK_KEY
public static final byte[] BULKLOAD_TIME_KEY
| Method Detail |
|---|
public long getMaxMemstoreTS()
public void setMaxMemstoreTS(long maxMemstoreTS)
public static boolean isReference(org.apache.hadoop.fs.Path p)
Parameters:
  p - Path to check.

public static boolean isReference(org.apache.hadoop.fs.Path p, Matcher m)
Parameters:
  p - Path to check.
  m - Matcher to use.
public long getMaxSequenceId()
public long getModificationTimeStamp()
public static long getMaxMemstoreTSInList(Collection<StoreFile> sfs)
public static long getMaxSequenceIdInList(Collection<StoreFile> sfs)
public long getBulkLoadTimestamp()
public HDFSBlocksDistribution getHDFSBlockDistribution()
public static HDFSBlocksDistribution computeHDFSBlockDistribution(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path p) throws IOException
Parameters:
  fs - The FileSystem
  p - The path of the file
Throws:
  IOException
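For example, a short sketch of computing the distribution for an existing file; the path is hypothetical, and fs is a FileSystem set up as in the lifecycle sketch near the top of this page.

```java
// Hypothetical path to an existing store file.
Path storeFilePath = new Path("/hbase/mytable/region/cf/2f8a1b0c9d4e");
HDFSBlocksDistribution dist =
    StoreFile.computeHDFSBlockDistribution(fs, storeFilePath);
// dist describes which hosts hold the HDFS blocks that make up the file,
// which is useful when reasoning about data locality for a region.
```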
public StoreFile.Reader createReader() throws IOException
Throws:
  IOException
public StoreFile.Reader getReader()
See Also:
  createReader()
public void closeReader(boolean evictOnClose) throws IOException
Parameters:
  evictOnClose -
Throws:
  IOException
public void deleteReader() throws IOException
Throws:
  IOException
public String toString()
Overrides:
  toString in class Object
public String toStringDetailed()
public static org.apache.hadoop.fs.Path rename(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path tgt) throws IOException
Parameters:
  fs -
  src -
  tgt -
Throws:
  IOException
public static StoreFile.Writer createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int blocksize, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf) throws IOException
Parameters:
  fs -
  dir - Path to family directory. Makes the directory if it doesn't exist. Creates a file with a unique name in this directory.
  blocksize - size per filesystem block
Throws:
  IOException
public static StoreFile.Writer createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int blocksize, Compression.Algorithm algorithm, KeyValue.KVComparator c, org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf, StoreFile.BloomType bloomType, long maxKeyCount) throws IOException
Parameters:
  fs -
  dir - Path to family directory. Makes the directory if it doesn't exist. Creates a file with a unique name in this directory.
  blocksize -
  algorithm - Pass null to get default.
  c - Pass null to get default.
  conf - HBase system configuration. Used with bloom filters.
  cacheConf - Cache configuration and reference.
  bloomType - column family setting for bloom filters
  maxKeyCount - estimated maximum number of keys we expect to add
Throws:
  IOException
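For example, a sketch of this long form with a row-level bloom filter. The block size, bloom type, and key-count estimate are illustrative values, and fs, familyDir, conf, and cacheConf are assumed to be set up as in the lifecycle sketch near the top of this page.

```java
// Sketch only: passing null for algorithm and c selects the defaults,
// as documented above.
StoreFile.Writer writer = StoreFile.createWriter(
    fs,                                // FileSystem
    familyDir,                         // family directory; created if it doesn't exist
    StoreFile.DEFAULT_BLOCKSIZE_SMALL, // blocksize
    null,                              // algorithm: null selects the default compression
    null,                              // c: null selects the default KVComparator
    conf,                              // HBase configuration (used with bloom filters)
    cacheConf,                         // cache configuration and reference
    StoreFile.BloomType.ROW,           // column family bloom filter setting
    50000L);                           // estimated maximum number of keys to add
```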
public static org.apache.hadoop.fs.Path getUniqueFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Parameters:
  fs -
  dir - Directory to create file in.
Returns:
  A unique file path within the passed dir.
Throws:
  IOException