java.lang.Object
  org.apache.hadoop.hbase.client.HTable

@InterfaceAudience.Public
@InterfaceStability.Stable
public class HTable
extends Object
implements HTableInterface
Used to communicate with a single HBase table.
This class is not thread safe for reads nor writes.
In case of writes (Put, Delete), the underlying write buffer can be corrupted if multiple threads contend over a single HTable instance.
In case of reads, some fields used by a Scan are shared among all threads, so a single HTable instance is not guaranteed to be safe for concurrent Gets or Scans either.
To access a table in a multi-threaded environment, please consider
using the HTablePool class to create your HTable instances.
Instances of HTable passed the same Configuration instance will
share connections to servers out on the cluster and to the zookeeper ensemble
as well as caches of region locations. This is usually a *good* thing and it
is recommended to reuse the same configuration object for all your tables.
This happens because they will all share the same underlying
HConnection instance. See HConnectionManager for more on
how this mechanism works.
HConnection will read most of the
configuration it needs from the passed Configuration on initial
construction. Thereafter, for settings such as
hbase.client.pause, hbase.client.retries.number,
and hbase.client.rpc.maxattempts updating their values in the
passed Configuration subsequent to HConnection construction
will go unnoticed. To run with changed values, make a new
HTable passing a new Configuration instance that has the
new configuration.
Note that this class implements the Closeable interface. When an
HTable instance is no longer required, it *should* be closed in order to ensure
that the underlying resources are promptly released. Please note that the close
method can throw java.io.IOException that must be handled.
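For example, a minimal usage sketch along these lines (the table name "mytable", column family "cf" and qualifier "q" are illustrative and assumed to exist already): reuse one Configuration for all HTable instances, and close the table in a finally block so the possible IOException from close() is handled.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HTableBasicExample {
  public static void main(String[] args) throws IOException {
    // One Configuration shared by all HTable instances so they reuse the same
    // underlying HConnection, zookeeper session and region location cache.
    Configuration conf = HBaseConfiguration.create();

    HTable table = new HTable(conf, "mytable"); // illustrative table name
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);
    } finally {
      table.close(); // releases buffers and resources; may throw IOException
    }
  }
}
```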
See Also: HBaseAdmin for create, drop, list, enable and disable of tables, HConnection, HConnectionManager

| Field Summary | |
|---|---|
| protected org.apache.hadoop.hbase.client.AsyncProcess<Object> | ap - The Async process for puts with autoflush set to false or multiputs |
| protected HConnection | connection |
| protected long | currentWriteBufferSize |
| protected int | scannerCaching |
| protected List<Row> | writeAsyncBuffer |
| Constructor Summary | |
|---|---|
| protected | HTable() - For internal testing. |
| | HTable(byte[] tableName, HConnection connection, ExecutorService pool) - Creates an object to access an HBase table. |
| | HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName) - Creates an object to access an HBase table. |
| | HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName, ExecutorService pool) - Creates an object to access an HBase table. |
| | HTable(org.apache.hadoop.conf.Configuration conf, String tableName) - Creates an object to access an HBase table. |
| | HTable(org.apache.hadoop.conf.Configuration conf, TableName tableName) - Creates an object to access an HBase table. |
| | HTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, ExecutorService pool) - Creates an object to access an HBase table. |
| | HTable(TableName tableName, HConnection connection, ExecutorService pool) - Creates an object to access an HBase table. |
| Method Summary | | |
|---|---|---|
| Result | append(Append append) | Appends values to one or more columns within a single row. |
| Object[] | batch(List<? extends Row> actions) | Same as HTableInterface.batch(List, Object[]), but returns an array of results instead of using a results parameter reference. |
| void | batch(List<? extends Row> actions, Object[] results) | Method that does a batch call on Deletes, Gets, Puts, Increments, Appends and RowMutations. |
| <R> Object[] | batchCallback(List<? extends Row> actions, Batch.Callback<R> callback) | Same as HTableInterface.batch(List), but with a callback. |
| <R> void | batchCallback(List<? extends Row> actions, Object[] results, Batch.Callback<R> callback) | Same as HTableInterface.batch(List, Object[]), but with a callback. |
| boolean | checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete) | Atomically checks if a row/family/qualifier value matches the expected value. |
| boolean | checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put) | Atomically checks if a row/family/qualifier value matches the expected value. |
| void | clearRegionCache() | Explicitly clears the region cache to fetch the latest value from META. |
| void | close() | Releases any resources held or pending changes in internal buffers. |
| CoprocessorRpcChannel | coprocessorService(byte[] row) | Creates and returns a RpcChannel instance connected to the table region containing the specified row. |
| <T extends com.google.protobuf.Service,R> Map<byte[],R> | coprocessorService(Class<T> service, byte[] startKey, byte[] endKey, Batch.Call<T,R> callable) | Creates an instance of the given Service subclass for each table region spanning the range from the startKey row to endKey row (inclusive), and invokes the passed Batch.Call.call(T) method with each Service instance. |
| <T extends com.google.protobuf.Service,R> void | coprocessorService(Class<T> service, byte[] startKey, byte[] endKey, Batch.Call<T,R> callable, Batch.Callback<R> callback) | Creates an instance of the given Service subclass for each table region spanning the range from the startKey row to endKey row (inclusive), and invokes the passed Batch.Call.call(T) method with each Service instance. |
| void | delete(Delete delete) | Deletes the specified cells/row. |
| void | delete(List<Delete> deletes) | Deletes the specified cells/rows in bulk. |
| boolean | exists(Get get) | Test for the existence of columns in the table, as specified by the Get. |
| Boolean[] | exists(List<Get> gets) | Test for the existence of columns in the table, as specified by the Gets. |
| void | flushCommits() | Executes all the buffered Put operations. |
| Result | get(Get get) | Extracts certain cells from a given row. |
| Result[] | get(List<Get> gets) | Extracts certain cells from the given rows, in batch. |
| org.apache.hadoop.conf.Configuration | getConfiguration() | Returns the Configuration object used by this instance. |
| HConnection | getConnection() | Deprecated. This method will be changed from public to package protected. |
| byte[][] | getEndKeys() | Gets the ending row key for every region in the currently open table. |
| TableName | getName() | Gets the fully qualified table name instance of this table. |
| int | getOperationTimeout() | |
| static boolean | getRegionCachePrefetch(byte[] tableName) | Check whether region cache prefetch is enabled or not for the table. |
| static boolean | getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, byte[] tableName) | Check whether region cache prefetch is enabled or not for the table. |
| static boolean | getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, TableName tableName) | |
| static boolean | getRegionCachePrefetch(TableName tableName) | |
| HRegionLocation | getRegionLocation(byte[] row) | Finds the region on which the given row is being served. |
| HRegionLocation | getRegionLocation(byte[] row, boolean reload) | Finds the region on which the given row is being served. |
| HRegionLocation | getRegionLocation(String row) | Find region location hosting passed row using cached info. |
| NavigableMap<HRegionInfo,ServerName> | getRegionLocations() | Gets all the regions and their address for this table. |
| List<HRegionLocation> | getRegionsInRange(byte[] startKey, byte[] endKey) | Get the corresponding regions for an arbitrary range of keys. |
| List<HRegionLocation> | getRegionsInRange(byte[] startKey, byte[] endKey, boolean reload) | Get the corresponding regions for an arbitrary range of keys. |
| Result | getRowOrBefore(byte[] row, byte[] family) | Return the row that matches row exactly, or the one that immediately precedes it. |
| ResultScanner | getScanner(byte[] family) | Gets a scanner on the current table for the given family. |
| ResultScanner | getScanner(byte[] family, byte[] qualifier) | Gets a scanner on the current table for the given family and qualifier. |
| ResultScanner | getScanner(Scan scan) | Returns a scanner on the current table as specified by the Scan object. |
| int | getScannerCaching() | Deprecated. Use Scan.setCaching(int) and Scan.getCaching() |
| Pair<byte[][],byte[][]> | getStartEndKeys() | Gets the starting and ending row keys for every region in the currently open table. |
| byte[][] | getStartKeys() | Gets the starting row key for every region in the currently open table. |
| HTableDescriptor | getTableDescriptor() | Gets the table descriptor for this table. |
| byte[] | getTableName() | Gets the name of this table. |
| List<Row> | getWriteBuffer() | Deprecated since 0.96. This is an internal buffer that should not be read from nor written to. |
| long | getWriteBufferSize() | Returns the maximum size in bytes of the write buffer for this HTable. |
| Result | increment(Increment increment) | Increments one or more columns within a single row. |
| long | incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount) | See HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability) |
| long | incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, Durability durability) | Atomically increments a column value. |
| boolean | isAutoFlush() | Tells whether or not 'auto-flush' is turned on. |
| static boolean | isTableEnabled(byte[] tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(byte[]) |
| static boolean | isTableEnabled(org.apache.hadoop.conf.Configuration conf, byte[] tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(byte[]) |
| static boolean | isTableEnabled(org.apache.hadoop.conf.Configuration conf, String tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(byte[]) |
| static boolean | isTableEnabled(org.apache.hadoop.conf.Configuration conf, TableName tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(org.apache.hadoop.hbase.TableName tableName) |
| static boolean | isTableEnabled(String tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(byte[]) |
| static boolean | isTableEnabled(TableName tableName) | Deprecated. Use HBaseAdmin.isTableEnabled(byte[]) |
| void | mutateRow(RowMutations rm) | Performs multiple mutations atomically on a single row. |
| void | processBatch(List<? extends Row> list, Object[] results) | Parameterized batch processing, allowing varying return types for different Row implementations. |
| <R> void | processBatchCallback(List<? extends Row> list, Object[] results, Batch.Callback<R> callback) | Process a mixed batch of Get, Put and Delete actions. |
| void | put(List<Put> puts) | Puts some data in the table, in batch. |
| void | put(Put put) | Puts some data in the table. |
| void | setAutoFlush(boolean autoFlush) | See setAutoFlush(boolean, boolean) |
| void | setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) | Turns 'auto-flush' on or off. |
| void | setOperationTimeout(int operationTimeout) | |
| static void | setRegionCachePrefetch(byte[] tableName, boolean enable) | Enable or disable region cache prefetch for the table. |
| static void | setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, byte[] tableName, boolean enable) | Enable or disable region cache prefetch for the table. |
| static void | setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, TableName tableName, boolean enable) | |
| static void | setRegionCachePrefetch(TableName tableName, boolean enable) | |
| void | setScannerCaching(int scannerCaching) | Deprecated. Use Scan.setCaching(int) |
| void | setWriteBufferSize(long writeBufferSize) | Sets the size of the buffer in bytes. |
| void | validatePut(Put put) | |
| Methods inherited from class java.lang.Object |
|---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
| Field Detail |
|---|
protected HConnection connection
protected List<Row> writeAsyncBuffer
protected long currentWriteBufferSize
protected int scannerCaching
protected org.apache.hadoop.hbase.client.AsyncProcess<Object> ap
| Constructor Detail |
|---|
public HTable(org.apache.hadoop.conf.Configuration conf,
String tableName)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same conf instance. Uses already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance. Recommended.
conf - Configuration object to use.
tableName - Name of the table.
IOException - if a remote or network exception occurs
public HTable(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same conf instance. Uses already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance. Recommended.
conf - Configuration object to use.
tableName - Name of the table.
IOException - if a remote or network exception occurs
public HTable(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same conf instance. Uses already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance. Recommended.
conf - Configuration object to use.
tableName - table name pojo
IOException - if a remote or network exception occurs
public HTable(org.apache.hadoop.conf.Configuration conf,
byte[] tableName,
ExecutorService pool)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same conf instance. Uses already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance.
Use this constructor when the ExecutorService is externally managed.
conf - Configuration object to use.
tableName - Name of the table.
pool - ExecutorService to be used.
IOException - if a remote or network exception occurs
public HTable(org.apache.hadoop.conf.Configuration conf,
TableName tableName,
ExecutorService pool)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same conf instance. Uses already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance.
Use this constructor when the ExecutorService is externally managed.
conf - Configuration object to use.
tableName - Name of the table.
pool - ExecutorService to be used.
IOException - if a remote or network exception occurs
public HTable(byte[] tableName,
HConnection connection,
ExecutorService pool)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same connection instance.
Use this constructor when the ExecutorService and HConnection instance are externally managed.
tableName - Name of the table.
connection - HConnection to be used.
pool - ExecutorService to be used.
IOException - if a remote or network exception occurs
public HTable(TableName tableName,
HConnection connection,
ExecutorService pool)
throws IOException
Creates an object to access an HBase table. Shares zookeeper connection and other resources with other HTable instances created with the same connection instance.
Use this constructor when the ExecutorService and HConnection instance are externally managed.
tableName - Name of the table.
connection - HConnection to be used.
pool - ExecutorService to be used.
IOException - if a remote or network exception occurs
protected HTable()
For internal testing.
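A sketch of the externally managed pattern used by the last two constructors above: the caller creates both the HConnection and the ExecutorService and shuts them down itself (the table name and pool size are illustrative).

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;

public class ManagedResourcesExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Both resources are created and released by the caller, not by HTable.
    HConnection connection = HConnectionManager.createConnection(conf);
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      HTable table = new HTable(TableName.valueOf("mytable"), connection, pool);
      try {
        // ... use the table ...
      } finally {
        table.close(); // does not close the externally managed connection or pool
      }
    } finally {
      connection.close();
      pool.shutdown();
    }
  }
}
```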
| Method Detail |
|---|
public org.apache.hadoop.conf.Configuration getConfiguration()
Returns the Configuration object used by this instance.
The reference returned is not a copy, so any change made to it will affect this instance.
getConfiguration in interface HTableInterface
@Deprecated
public static boolean isTableEnabled(String tableName)
throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
@Deprecated
public static boolean isTableEnabled(byte[] tableName)
throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
@Deprecated
public static boolean isTableEnabled(TableName tableName)
throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
@Deprecated
public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf,
String tableName)
throws IOException
HBaseAdmin.isTableEnabled(byte[])
conf - The Configuration object to use.
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
@Deprecated
public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
throws IOException
HBaseAdmin.isTableEnabled(byte[])
conf - The Configuration object to use.
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
@Deprecated
public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
throws IOException
HBaseAdmin.isTableEnabled(org.apache.hadoop.hbase.TableName tableName)
conf - The Configuration object to use.
tableName - Name of table to check.
true if table is online.
IOException - if a remote or network exception occurs
public HRegionLocation getRegionLocation(String row)
throws IOException
row - Row to find.
IOException - if a remote or network exception occurs
public HRegionLocation getRegionLocation(byte[] row)
throws IOException
row - Row to find.
IOException - if a remote or network exception occurs
public HRegionLocation getRegionLocation(byte[] row,
boolean reload)
throws IOException
row - Row to find.
reload - true to reload information or false to use cached information
IOException - if a remote or network exception occurs
public byte[] getTableName()
Gets the name of this table.
getTableName in interface HTableInterface
public TableName getName()
Gets the fully qualified table name instance of this table.
getName in interface HTableInterface
@Deprecated public HConnection getConnection()
Deprecated. This method will be changed from public to package protected.
Returns the HConnection instance used by this table.
@Deprecated public int getScannerCaching()
Deprecated. Use Scan.setCaching(int) and Scan.getCaching()
The default value comes from hbase.client.scanner.caching.
@Deprecated public List<Row> getWriteBuffer()
@Deprecated public void setScannerCaching(int scannerCaching)
Scan.setCaching(int)
This will override the value specified by
hbase.client.scanner.caching.
Increasing this value will reduce the amount of work needed each time
next() is called on a scanner, at the expense of memory use
(since more rows will need to be maintained in memory by the scanners).
scannerCaching - the number of rows a scanner will fetch at once.
public HTableDescriptor getTableDescriptor()
throws IOException
Gets the table descriptor for this table.
getTableDescriptor in interface HTableInterface
IOException - if a remote or network exception occurs.
public byte[][] getStartKeys()
throws IOException
This is mainly useful for the MapReduce integration.
IOException - if a remote or network exception occurs
public byte[][] getEndKeys()
throws IOException
This is mainly useful for the MapReduce integration.
IOException - if a remote or network exception occurs
public Pair<byte[][],byte[][]> getStartEndKeys()
throws IOException
This is mainly useful for the MapReduce integration.
IOException - if a remote or network exception occurs
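For example, a split-planning step might dump the region boundaries like this (a sketch; it assumes an already-open HTable instance is passed in):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class RegionKeysExample {
  // Prints the start/end row key of every region in the table.
  static void printRegionBoundaries(HTable table) throws IOException {
    Pair<byte[][], byte[][]> keys = table.getStartEndKeys();
    byte[][] startKeys = keys.getFirst();
    byte[][] endKeys = keys.getSecond();
    for (int i = 0; i < startKeys.length; i++) {
      // Empty arrays mark the first region's start key and the last region's end key.
      System.out.println("region " + i
          + " [" + Bytes.toStringBinary(startKeys[i])
          + ", " + Bytes.toStringBinary(endKeys[i]) + ")");
    }
  }
}
```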
public NavigableMap<HRegionInfo,ServerName> getRegionLocations()
throws IOException
This is mainly useful for the MapReduce integration.
IOException - if a remote or network exception occurs
public List<HRegionLocation> getRegionsInRange(byte[] startKey,
byte[] endKey)
throws IOException
startKey - Starting row in range, inclusive
endKey - Ending row in range, exclusive
IOException - if a remote or network exception occurs
public List<HRegionLocation> getRegionsInRange(byte[] startKey,
byte[] endKey,
boolean reload)
throws IOException
startKey - Starting row in range, inclusive
endKey - Ending row in range, exclusive
reload - true to reload information or false to use cached information
IOException - if a remote or network exception occurs
public Result getRowOrBefore(byte[] row,
byte[] family)
throws IOException
getRowOrBefore in interface HTableInterface
row - A row key.
family - Column family to include in the Result.
IOException - if a remote or network exception occurs.
public ResultScanner getScanner(Scan scan)
throws IOException
Returns a scanner on the current table as specified by the Scan object.
Note that the passed Scan's start row and caching properties may be changed.
getScanner in interface HTableInterface
scan - A configured Scan object.
IOException - if a remote or network exception occurs.
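A typical scan loop might look like the following sketch (the column family name "cf" and the caching value are illustrative); note that the scanner must be closed:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExample {
  // Scans one column family and prints each row key.
  static void scanFamily(HTable table) throws IOException {
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));
    scan.setCaching(100); // rows fetched per RPC; tune per workload
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        System.out.println(Bytes.toStringBinary(result.getRow()));
      }
    } finally {
      scanner.close(); // scanners hold server-side resources
    }
  }
}
```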
public ResultScanner getScanner(byte[] family)
throws IOException
getScanner in interface HTableInterface
family - The column family to scan.
IOException - if a remote or network exception occurs.
public ResultScanner getScanner(byte[] family,
byte[] qualifier)
throws IOException
getScanner in interface HTableInterface
family - The column family to scan.
qualifier - The column qualifier to scan.
IOException - if a remote or network exception occurs.
public Result get(Get get)
throws IOException
Extracts certain cells from a given row.
get in interface HTableInterface
get - The object that specifies what data to fetch and from which row.
Returns the data coming from the specified row, if it exists. If the row specified doesn't exist, the Result instance returned won't contain any KeyValue, as indicated by Result.isEmpty().
IOException - if a remote or network exception occurs.
public Result[] get(List<Get> gets)
throws IOException
Extracts certain cells from the given rows, in batch.
get in interface HTableInterface
gets - The objects that specify what data to fetch and from which rows.
Returns the data coming from the specified rows, if it exists. If a requested row does not exist, the Result instance returned for it won't contain any KeyValue, as indicated by Result.isEmpty(). If there are any failures even after retries, there will be a null in the results array for those Gets, AND an exception will be thrown.
IOException - if a remote or network exception occurs.
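A sketch of both forms of get, single and batched (row keys, family "cf" and qualifier "q" are illustrative):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetExample {
  // Fetches one cell from a single row, then the same column from several rows.
  static void readRows(HTable table) throws IOException {
    byte[] family = Bytes.toBytes("cf");    // illustrative family
    byte[] qualifier = Bytes.toBytes("q");  // illustrative qualifier

    Get single = new Get(Bytes.toBytes("row1"));
    single.addColumn(family, qualifier);
    Result result = table.get(single);
    if (!result.isEmpty()) {
      System.out.println(Bytes.toString(result.getValue(family, qualifier)));
    }

    List<Get> gets = new ArrayList<Get>();
    gets.add(new Get(Bytes.toBytes("row2")));
    gets.add(new Get(Bytes.toBytes("row3")));
    for (Result r : table.get(gets)) {
      System.out.println(r.isEmpty() ? "(no data)" : Bytes.toStringBinary(r.getRow()));
    }
  }
}
```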
public void batch(List<? extends Row> actions,
Object[] results)
throws InterruptedException,
IOException
Method that does a batch call on Deletes, Gets, Puts, Increments, Appends and RowMutations. The ordering of execution of the actions is not defined. Meaning if you do a Put and a Get in the same HTableInterface.batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[]) call, you will not necessarily be guaranteed that the Get returns what the Put had put.
batch in interface HTableInterface
actions - list of Get, Put, Delete, Increment, Append, RowMutations objects
results - Empty Object[], same size as actions. Provides access to partial results, in case an exception is thrown. A null in the result array means that the call for that action failed, even after retries
IOException
InterruptedException
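A sketch of a mixed batch call (row keys and the family "cf" are illustrative); the results array is sized to the action list and inspected afterwards for nulls:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchExample {
  // Issues a mixed batch; nulls in the results array mark actions that failed after retries.
  static void mixedBatch(HTable table) throws IOException, InterruptedException {
    byte[] family = Bytes.toBytes("cf"); // illustrative family

    List<Row> actions = new ArrayList<Row>();
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(family, Bytes.toBytes("q"), Bytes.toBytes("v"));
    actions.add(put);
    actions.add(new Get(Bytes.toBytes("row2")));
    actions.add(new Delete(Bytes.toBytes("row3")));

    Object[] results = new Object[actions.size()];
    table.batch(actions, results);
    for (int i = 0; i < results.length; i++) {
      System.out.println("action " + i + " -> " + (results[i] == null ? "failed" : results[i]));
    }
  }
}
```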
public Object[] batch(List<? extends Row> actions)
throws InterruptedException,
IOException
Same as HTableInterface.batch(List, Object[]), but returns an array of results instead of using a results parameter reference.
batch in interface HTableInterface
actions - list of Get, Put, Delete, Increment, Append, RowMutations objects
IOException
InterruptedException
public <R> void batchCallback(List<? extends Row> actions,
Object[] results,
Batch.Callback<R> callback)
throws IOException,
InterruptedException
Same as HTableInterface.batch(List, Object[]), but with a callback.
batchCallback in interface HTableInterface
IOException
InterruptedException
public <R> Object[] batchCallback(List<? extends Row> actions,
Batch.Callback<R> callback)
throws IOException,
InterruptedException
Same as HTableInterface.batch(List), but with a callback.
batchCallback in interface HTableInterface
IOException
InterruptedException
public void delete(Delete delete)
throws IOException
delete in interface HTableInterface
delete - The object that specifies what to delete.
IOException - if a remote or network exception occurs.
public void delete(List<Delete> deletes)
throws IOException
delete in interface HTableInterface
deletes - List of things to delete. List gets modified by this
method (in particular it gets re-ordered, so the order in which the elements
are inserted in the list gives no guarantee as to the order in which the
Deletes are executed).
IOException - if a remote or network exception occurs. In that case
the deletes argument will contain the Delete instances
that have not been successfully applied.
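A sketch of both delete forms (row keys, family "cf" and qualifier "q" are illustrative):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteExample {
  // Deletes a single column from one row, then two whole rows in bulk.
  static void deleteRows(HTable table) throws IOException {
    Delete one = new Delete(Bytes.toBytes("row1"));
    one.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("q")); // all versions of cf:q
    table.delete(one);

    List<Delete> bulk = new ArrayList<Delete>();
    bulk.add(new Delete(Bytes.toBytes("row2")));
    bulk.add(new Delete(Bytes.toBytes("row3")));
    table.delete(bulk); // on failure, the list keeps the Deletes that were not applied
  }
}
```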
public void put(Put put)
throws InterruptedIOException,
RetriesExhaustedWithDetailsException
If isAutoFlush is false, the update is buffered
until the internal buffer is full.
put in interface HTableInterface
put - The data to put.
InterruptedIOException
RetriesExhaustedWithDetailsException
public void put(List<Put> puts)
throws InterruptedIOException,
RetriesExhaustedWithDetailsException
If isAutoFlush is false, the update is buffered
until the internal buffer is full.
This can be used for group commit, or for submitting user defined batches. The writeBuffer will be periodically inspected while the List is processed, so depending on the List size the writeBuffer may flush not at all, or more than once.
put in interface HTableInterface
puts - The list of mutations to apply. The batch put is done by
aggregating the iteration of the Puts over the write buffer
at the client-side for a single RPC call.
InterruptedIOException
RetriesExhaustedWithDetailsException
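A sketch of a list put (row keys, family "cf" and qualifier "q" are illustrative); with auto-flush left at its default the buffered data is flushed as part of the call:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutListExample {
  // Writes several rows in one call; the client aggregates the Puts in the write buffer.
  static void writeRows(HTable table) throws IOException {
    byte[] family = Bytes.toBytes("cf"); // illustrative family
    List<Put> puts = new ArrayList<Put>();
    for (int i = 0; i < 10; i++) {
      Put put = new Put(Bytes.toBytes("row-" + i));
      put.add(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
      puts.add(put);
    }
    table.put(puts);
  }
}
```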
public void mutateRow(RowMutations rm)
throws IOException
Performs multiple mutations atomically on a single row. Currently Put and Delete are supported.
mutateRow in interface HTableInterface
rm - object that specifies the set of mutations to perform atomically
IOException
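A sketch that atomically adds one column and removes another on the same row (row key, family and qualifiers are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.util.Bytes;

public class MutateRowExample {
  // Atomically replaces one column with another on the same row.
  static void swapColumns(HTable table) throws IOException {
    byte[] row = Bytes.toBytes("row1");
    byte[] family = Bytes.toBytes("cf"); // illustrative family

    Put put = new Put(row);
    put.add(family, Bytes.toBytes("new"), Bytes.toBytes("value"));
    Delete delete = new Delete(row);
    delete.deleteColumns(family, Bytes.toBytes("old"));

    RowMutations mutations = new RowMutations(row);
    mutations.add(put);    // both mutations must target the same row
    mutations.add(delete);
    table.mutateRow(mutations);
  }
}
```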
public Result append(Append append)
throws IOException
Appends values to one or more columns within a single row. This operation does not appear atomic to readers. Appends are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
append in interface HTableInterface
append - object that specifies the columns and values to append
IOException - e
public Result increment(Increment increment)
throws IOException
Increments one or more columns within a single row. This operation does not appear atomic to readers. Increments are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
increment in interface HTableInterface
increment - object that specifies the columns and amounts to be used for the increment operations
IOException - e
public long incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount)
throws IOException
HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability)
The Durability is defaulted to Durability.SYNC_WAL.
incrementColumnValue in interface HTableInterface
row - The row that contains the cell to increment.
family - The column family of the cell to increment.
qualifier - The column qualifier of the cell to increment.
amount - The amount to increment the cell with (or decrement, if the amount is negative).
IOException - if a remote or network exception occurs.
public long incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount,
Durability durability)
throws IOException
Atomically increments a column value. If the column value already exists and is not a big-endian long, this could throw an exception. If the column value does not yet exist it is initialized to amount and written to the specified column.
Setting durability to Durability.SKIP_WAL means that in a fail scenario you will lose any increments that have not been flushed.
incrementColumnValue in interface HTableInterface
row - The row that contains the cell to increment.
family - The column family of the cell to increment.
qualifier - The column qualifier of the cell to increment.
amount - The amount to increment the cell with (or decrement, if the amount is negative).
durability - The persistence guarantee for this increment.
IOException - if a remote or network exception occurs.
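A sketch of both increment forms (row key, family and counter qualifiers are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementExample {
  // Bumps a single counter cell, then several counters on the same row in one RPC.
  static void bumpCounters(HTable table) throws IOException {
    byte[] row = Bytes.toBytes("counters"); // illustrative row
    byte[] family = Bytes.toBytes("cf");    // illustrative family

    long newValue = table.incrementColumnValue(row, family, Bytes.toBytes("hits"), 1L);
    System.out.println("hits = " + newValue);

    Increment increment = new Increment(row);
    increment.addColumn(family, Bytes.toBytes("hits"), 1L);
    increment.addColumn(family, Bytes.toBytes("bytes"), 512L);
    Result result = table.increment(increment);
    System.out.println("columns updated: " + result.size());
  }
}
```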
public boolean checkAndPut(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
Put put)
throws IOException
Atomically checks if a row/family/qualifier value matches the expected value. If it does, it adds the put.
checkAndPut in interface HTableInterface
row - to check
family - column family to check
qualifier - column qualifier to check
value - the expected value
put - data to put if check succeeds
IOException - e
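A sketch of an optimistic state transition built on checkAndPut (row key, family, qualifier and the state values are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutExample {
  // Writes the new state only if the row still holds the expected old state.
  static boolean transition(HTable table) throws IOException {
    byte[] row = Bytes.toBytes("job-42");  // illustrative row
    byte[] family = Bytes.toBytes("cf");   // illustrative family
    byte[] qualifier = Bytes.toBytes("state");

    Put put = new Put(row);
    put.add(family, qualifier, Bytes.toBytes("RUNNING"));
    // Returns false (and writes nothing) if another client changed the value first.
    return table.checkAndPut(row, family, qualifier, Bytes.toBytes("PENDING"), put);
  }
}
```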
public boolean checkAndDelete(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
Delete delete)
throws IOException
Atomically checks if a row/family/qualifier value matches the expected value. If it does, it adds the delete.
checkAndDelete in interface HTableInterface
row - to check
family - column family to check
qualifier - column qualifier to check
value - the expected value
delete - data to delete if check succeeds
IOException - e
public boolean exists(Get get)
throws IOException
Test for the existence of columns in the table, as specified by the Get. This will return true if the Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
exists in interface HTableInterface
get - the Get
IOException - e
public Boolean[] exists(List<Get> gets)
throws IOException
Test for the existence of columns in the table, as specified by the Gets. This will return an array of booleans. Each value will be true if the related Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
exists in interface HTableInterface
gets - the Gets
IOException - e
public void flushCommits()
throws InterruptedIOException,
RetriesExhaustedWithDetailsException
Executes all the buffered Put operations.
This method gets called once automatically for every Put or batch of Puts (when put(List<Put>) is used) when HTableInterface.isAutoFlush() is true.
flushCommits in interface HTableInterface
InterruptedIOException
RetriesExhaustedWithDetailsException
public <R> void processBatchCallback(List<? extends Row> list,
Object[] results,
Batch.Callback<R> callback)
throws IOException,
InterruptedException
Process a mixed batch of Get, Put and Delete actions.
list - The collection of actions.
results - An empty array, same size as list. If an exception is thrown, you can test here for partial results, and to determine which actions processed successfully.
IOException - if there are problems talking to META. Per-item
exceptions are stored in the results array.
InterruptedException
public void processBatch(List<? extends Row> list,
Object[] results)
throws IOException,
InterruptedException
Parameterized batch processing, allowing varying return types for different Row implementations.
IOException
InterruptedException
public void close()
throws IOException
Releases any resources held or pending changes in internal buffers.
close in interface Closeable
close in interface HTableInterface
IOException - if a remote or network exception occurs.
public void validatePut(Put put)
throws IllegalArgumentException
IllegalArgumentException
public boolean isAutoFlush()
Tells whether or not 'auto-flush' is turned on.
isAutoFlush in interface HTableInterface
Returns true if 'auto-flush' is enabled (default), meaning Put operations don't get buffered/delayed and are immediately executed.
public void setAutoFlush(boolean autoFlush)
See setAutoFlush(boolean, boolean)
setAutoFlush in interface HTableInterface
autoFlush - Whether or not to enable 'auto-flush'.
public void setAutoFlush(boolean autoFlush,
boolean clearBufferOnFail)
Turns 'auto-flush' on or off.
When enabled (default), Put operations don't get buffered/delayed
and are immediately executed. Failed operations are not retried. This is
slower but safer.
Turning off autoFlush means that multiple Puts will be
accepted before any RPC is actually sent to do the write operations. If the
application dies before pending writes get flushed to HBase, data will be
lost.
When you turn autoFlush off, you should also consider the
clearBufferOnFail option. By default, asynchronous Put
requests will be retried on failure until successful. However, this can
pollute the writeBuffer and slow down batching performance. Additionally,
you may want to issue a number of Put requests and call
flushCommits() as a barrier. In both use cases, consider setting
clearBufferOnFail to true to erase the buffer after flushCommits()
has been called, regardless of success.
setAutoFlush in interface HTableInterface
autoFlush - Whether or not to enable 'auto-flush'.
clearBufferOnFail - Whether to keep Put failures in the writeBuffer
See also: flushCommits()
public long getWriteBufferSize()
Returns the maximum size in bytes of the write buffer for this HTable.
The default value comes from the configuration parameter
hbase.client.write.buffer.
getWriteBufferSize in interface HTableInterface
public void setWriteBufferSize(long writeBufferSize)
throws IOException
If the new size is less than the current amount of data in the write buffer, the buffer gets flushed.
setWriteBufferSize in interface HTableInterface
writeBufferSize - The new write buffer size, in bytes.
IOException - if a remote or network exception occurs.
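A sketch of a bulk load that combines setAutoFlush(false, true), setWriteBufferSize and flushCommits() (the table contents, family "cf" and the 4 MB buffer size are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriteExample {
  // Buffers Puts on the client and sends them in batches instead of one RPC per Put.
  static void bufferedLoad(HTable table) throws IOException {
    table.setAutoFlush(false, true);           // buffer writes; clear buffer after a failed flush
    table.setWriteBufferSize(4 * 1024 * 1024); // flush roughly every 4 MB of pending data
    byte[] family = Bytes.toBytes("cf");       // illustrative family
    try {
      for (int i = 0; i < 100000; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.add(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
        table.put(put);                        // buffered; flushed when the buffer fills
      }
    } finally {
      table.flushCommits();                    // push whatever is still buffered
    }
  }
}
```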
public static void setRegionCachePrefetch(byte[] tableName,
boolean enable)
throws IOException
tableName - name of table to configure.
enable - Set to true to enable region cache prefetch. Or set to
false to disable it.
IOException
public static void setRegionCachePrefetch(TableName tableName,
boolean enable)
throws IOException
IOException
public static void setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
byte[] tableName,
boolean enable)
throws IOException
conf - The Configuration object to use.
tableName - name of table to configure.
enable - Set to true to enable region cache prefetch. Or set to
false to disable it.
IOException
public static void setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
TableName tableName,
boolean enable)
throws IOException
IOException
public static boolean getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
throws IOException
conf - The Configuration object to use.
tableName - name of table to check
IOException
public static boolean getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
throws IOException
IOException
public static boolean getRegionCachePrefetch(byte[] tableName)
throws IOException
tableName - name of table to check
IOException
public static boolean getRegionCachePrefetch(TableName tableName)
throws IOException
IOException
public void clearRegionCache()
Explicitly clears the region cache to fetch the latest value from META.
public CoprocessorRpcChannel coprocessorService(byte[] row)
Creates and returns a RpcChannel instance connected to the
table region containing the specified row. The row given does not actually have
to exist. Whichever region would contain the row based on start and end keys will
be used. Note that the row parameter is also not passed to the
coprocessor handler registered for this protocol, unless the row
is separately passed as an argument in the service request. The parameter
here is only used to locate the region used to handle the call.
The obtained RpcChannel instance can be used to access a published
coprocessor Service using standard protobuf service invocations:
CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey);
MyService.BlockingInterface service = MyService.newBlockingStub(channel);
MyCallRequest request = MyCallRequest.newBuilder()
...
.build();
MyCallResponse response = service.myCall(null, request);
coprocessorService in interface HTableInterface
row - The row key used to identify the remote region location
public <T extends com.google.protobuf.Service,R> Map<byte[],R> coprocessorService(Class<T> service,
byte[] startKey,
byte[] endKey,
Batch.Call<T,R> callable)
throws com.google.protobuf.ServiceException,
Throwable
Creates an instance of the given Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service
instance.
coprocessorService in interface HTableInterface
T - the Service subclass to connect to
R - Return type for the callable parameter's Batch.Call.call(T) method
service - the protocol buffer Service implementation to call
startKey - start region selection with region containing this row. If null, the selection will start with the first table region.
endKey - select regions up to and including the region containing this row. If null, selection will continue through the last table region.
callable - this instance's
Batch.Call.call(T)
method will be invoked once per table region, using the Service
instance connected to that region.
com.google.protobuf.ServiceException
Throwable
public <T extends com.google.protobuf.Service,R> void coprocessorService(Class<T> service,
byte[] startKey,
byte[] endKey,
Batch.Call<T,R> callable,
Batch.Callback<R> callback)
throws com.google.protobuf.ServiceException,
Throwable
Creates an instance of the given Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service instance.
The given
Batch.Callback.update(byte[], byte[], Object)
method will be called with the return value from each region's
Batch.Call.call(T) invocation.
coprocessorService in interface HTableInterface
T - the Service subclass to connect to
R - Return type for the callable parameter's Batch.Call.call(T) method
service - the protocol buffer Service implementation to call
startKey - start region selection with region containing this row. If null, the selection will start with the first table region.
endKey - select regions up to and including the region containing this row. If null, selection will continue through the last table region.
callable - this instance's
Batch.Call.call(T) method
will be invoked once per table region, using the Service instance
connected to that region.
com.google.protobuf.ServiceException
Throwable
public void setOperationTimeout(int operationTimeout)
public int getOperationTimeout()