Uses of Class
org.apache.hadoop.hbase.client.Scan

Packages that use Scan
org.apache.hadoop.hbase.avro Provides an HBase Avro service. 
org.apache.hadoop.hbase.catalog   
org.apache.hadoop.hbase.client Provides the HBase client. 
org.apache.hadoop.hbase.client.coprocessor Provides client classes for invoking Coprocessor RPC protocols. 
org.apache.hadoop.hbase.coprocessor Provides the coprocessor framework for extending HBase functionality on the server side. 
org.apache.hadoop.hbase.ipc Tools to help define network clients and servers. 
org.apache.hadoop.hbase.mapreduce Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
org.apache.hadoop.hbase.regionserver   
org.apache.hadoop.hbase.rest.client   
org.apache.hadoop.hbase.rest.model   
 

Uses of Scan in org.apache.hadoop.hbase.avro
 

Methods in org.apache.hadoop.hbase.avro that return Scan
static Scan AvroUtil.ascanToScan(AScan ascan)
           
 

Uses of Scan in org.apache.hadoop.hbase.catalog
 

Methods in org.apache.hadoop.hbase.catalog that return Scan
static Scan MetaReader.getScanForTableName(byte[] tableName)
          This method creates a Scan object that will only scan catalog rows that belong to the specified table.
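The returned Scan runs against the catalog table like any other scan. A minimal sketch, assuming a 0.92-era cluster where the catalog table is named .META. ("mytable" is illustrative):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.catalog.MetaReader;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CatalogScanExample {
    public static void listRegionRows() throws IOException {
      Configuration conf = HBaseConfiguration.create();
      // Scan only the catalog rows that belong to "mytable".
      Scan scan = MetaReader.getScanForTableName(Bytes.toBytes("mytable"));
      HTable meta = new HTable(conf, ".META.");
      ResultScanner scanner = meta.getScanner(scan);
      try {
        for (Result r : scanner) {
          // each Result is one region row of "mytable"
        }
      } finally {
        scanner.close();
        meta.close();
      }
    }
  }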
 

Uses of Scan in org.apache.hadoop.hbase.client
 

Methods in org.apache.hadoop.hbase.client that return Scan
 Scan Scan.addColumn(byte[] family, byte[] qualifier)
          Get the column from the specified family with the specified qualifier.
 Scan Scan.addFamily(byte[] family)
          Get all columns from the specified family.
protected  Scan HTable.ClientScanner.getScan()
           
protected  Scan ScannerCallable.getScan()
           
 Scan Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
          Sets the family map: a mapping of column families to sets of qualifiers.
 Scan Scan.setFilter(Filter filter)
          Apply the specified server-side filter when performing the Scan.
 Scan Scan.setMaxVersions()
          Get all available versions.
 Scan Scan.setMaxVersions(int maxVersions)
          Get up to the specified number of versions of each column.
 Scan Scan.setStartRow(byte[] startRow)
          Set the start row of the scan.
 Scan Scan.setStopRow(byte[] stopRow)
          Set the stop row.
 Scan Scan.setTimeRange(long minStamp, long maxStamp)
          Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
 Scan Scan.setTimeStamp(long timestamp)
          Get versions of columns with the specified timestamp.
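Each of these setters returns this Scan, so a scan specification can be composed fluently. A minimal sketch (family, qualifier, row keys, and filter are illustrative):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.PrefixFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class BuildScanExample {
    public static Scan buildScan() throws IOException {  // setTimeRange declares IOException
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))    // only this column
          .setStartRow(Bytes.toBytes("row-000"))                 // inclusive
          .setStopRow(Bytes.toBytes("row-999"))                  // exclusive
          .setMaxVersions(3)                                     // up to 3 versions per column
          .setFilter(new PrefixFilter(Bytes.toBytes("row-0")));  // applied server-side
      scan.setTimeRange(0L, Long.MAX_VALUE);                     // [minStamp, maxStamp)
      return scan;
    }
  }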
 

Methods in org.apache.hadoop.hbase.client with parameters of type Scan
 ResultScanner HTable.getScanner(Scan scan)
          Returns a scanner on the current table as specified by the Scan object.
 ResultScanner HTableInterface.getScanner(Scan scan)
          Returns a scanner on the current table as specified by the Scan object.
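A minimal sketch of driving a scan from the client (table, family, and qualifier names are illustrative); the returned ResultScanner is iterable and should always be closed:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  public class GetScannerExample {
    public static void scanTable(Scan scan) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "mytable");
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {  // one Result per row
          byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
          // process value
        }
      } finally {
        scanner.close();  // releases the server-side scanner resources
        table.close();
      }
    }
  }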
 

Constructors in org.apache.hadoop.hbase.client with parameters of type Scan
HTable.ClientScanner(Scan scan)
           
Scan(Scan scan)
          Creates a new instance of this class while copying all values.
ScannerCallable(HConnection connection, byte[] tableName, Scan scan)
           
 

Uses of Scan in org.apache.hadoop.hbase.client.coprocessor
 

Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type Scan
<R,S> double AggregationClient.avg(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          This is the client side interface/handle for calling the average method for a given cf-cq combination.
<R,S> R AggregationClient.max(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          It gives the maximum value of a column for a given column family for the given range.
<R,S> R AggregationClient.min(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          It gives the minimum value of a column for a given column family for the given range.
<R,S> long AggregationClient.rowCount(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          It gives the row count, by summing up the individual results obtained from regions.
<R,S> double AggregationClient.std(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          This is the client side interface/handle for calling the std method for a given cf-cq combination.
<R,S> S AggregationClient.sum(byte[] tableName, ColumnInterpreter<R,S> ci, Scan scan)
          It sums up the value returned from various regions.
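A minimal sketch of a client-side row count, assuming the AggregateImplementation coprocessor is loaded on the target table (table, family, and qualifier names are illustrative); note that these methods declare throws Throwable:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
  import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class RowCountExample {
    public static long countRows() throws Throwable {  // aggregation calls declare Throwable
      Configuration conf = HBaseConfiguration.create();
      AggregationClient aggClient = new AggregationClient(conf);
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));  // one cf-cq combination
      // LongColumnInterpreter reads cell values as 8-byte longs.
      return aggClient.rowCount(Bytes.toBytes("mytable"),
          new LongColumnInterpreter(), scan);
    }
  }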
 

Uses of Scan in org.apache.hadoop.hbase.coprocessor
 

Methods in org.apache.hadoop.hbase.coprocessor with parameters of type Scan
<T,S> Pair<S,Long> AggregateImplementation.getAvg(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> Pair<S,Long> AggregateProtocol.getAvg(ColumnInterpreter<T,S> ci, Scan scan)
          Gives a Pair with first object as Sum and second object as row count, computed for a given combination of column qualifier and column family in the given row range as defined in the Scan object.
<T,S> T AggregateImplementation.getMax(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> T AggregateProtocol.getMax(ColumnInterpreter<T,S> ci, Scan scan)
          Gives the maximum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object.
<T,S> T AggregateImplementation.getMin(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> T AggregateProtocol.getMin(ColumnInterpreter<T,S> ci, Scan scan)
          Gives the minimum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object.
<T,S> long AggregateImplementation.getRowNum(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> long AggregateProtocol.getRowNum(ColumnInterpreter<T,S> ci, Scan scan)
          Gives the row count for a given combination of column qualifier and column family, in the given row range as defined in the Scan object.
<T,S> Pair<List<S>,Long> AggregateImplementation.getStd(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> Pair<List<S>,Long> AggregateProtocol.getStd(ColumnInterpreter<T,S> ci, Scan scan)
          Gives a Pair with first object a List containing Sum and sum of squares, and the second object as row count.
<T,S> S AggregateImplementation.getSum(ColumnInterpreter<T,S> ci, Scan scan)
           
<T,S> S AggregateProtocol.getSum(ColumnInterpreter<T,S> ci, Scan scan)
          Gives the sum for a given combination of column qualifier and column family, in the given row range as defined in the Scan object.
 RegionScanner RegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
          Called after the client opens a new scanner.
 RegionScanner BaseRegionObserver.postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
           
 RegionScanner RegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan, RegionScanner s)
          Called before the client opens a new scanner.
 RegionScanner BaseRegionObserver.preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s)
           
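A minimal sketch of hooking scanner creation by extending BaseRegionObserver, so only the hooks of interest need overriding (the version cap is an illustrative policy, not a recommendation):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
  import org.apache.hadoop.hbase.regionserver.RegionScanner;

  public class VersionCappingObserver extends BaseRegionObserver {
    @Override
    public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e,
        Scan scan, RegionScanner s) throws IOException {
      scan.setMaxVersions(1);  // adjust the Scan before the region opens its scanner
      return s;                // returning s unchanged leaves scanner creation to the region
    }
  }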
 

Uses of Scan in org.apache.hadoop.hbase.ipc
 

Methods in org.apache.hadoop.hbase.ipc with parameters of type Scan
 long HRegionInterface.openScanner(byte[] regionName, Scan scan)
          Opens a remote scanner on the region, as specified by the Scan.
 

Uses of Scan in org.apache.hadoop.hbase.mapreduce
 

Methods in org.apache.hadoop.hbase.mapreduce that return Scan
 Scan TableInputFormatBase.getScan()
          Gets the scan that defines the details of the read, such as which columns to retrieve.
 

Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Scan
static void TableInputFormat.addColumns(Scan scan, byte[][] columns)
          Adds an array of columns specified using old format, family:qualifier.
static void IdentityTableMapper.initJob(String table, Scan scan, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void GroupingTableMapper.initJob(String table, Scan scan, String groupColumns, Class<? extends TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(byte[] table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
          Use this before submitting a TableMap job.
static void TableMapReduceUtil.initTableMapperJob(String table, Scan scan, Class<? extends TableMapper> mapper, Class<? extends org.apache.hadoop.io.WritableComparable> outputKeyClass, Class<? extends org.apache.hadoop.io.Writable> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
          Use this before submitting a TableMap job.
 void TableInputFormatBase.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to retrieve.
 void TableRecordReader.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to retrieve.
 void TableRecordReaderImpl.setScan(Scan scan)
          Sets the scan that defines the details of the read, such as which columns to retrieve.
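A minimal sketch of wiring a Scan into a TableMapper job (table and family names are illustrative; the caching values are tuning assumptions):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.mapreduce.Job;

  public class ScanJobExample {
    public static Job createJob() throws IOException {
      Configuration conf = HBaseConfiguration.create();
      Job job = new Job(conf, "scan-mytable");
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));  // read one whole family
      scan.setCaching(500);                 // fewer RPC round trips on long scans
      scan.setCacheBlocks(false);           // don't churn the block cache from MapReduce
      TableMapReduceUtil.initTableMapperJob("mytable", scan, IdentityTableMapper.class,
          ImmutableBytesWritable.class, Result.class, job);
      return job;
    }
  }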
 

Uses of Scan in org.apache.hadoop.hbase.regionserver
 

Methods in org.apache.hadoop.hbase.regionserver with parameters of type Scan
 RegionScanner HRegion.getScanner(Scan scan)
          Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
protected  RegionScanner HRegion.getScanner(Scan scan, List<KeyValueScanner> additionalScanners)
           
 org.apache.hadoop.hbase.regionserver.StoreScanner Store.getScanner(Scan scan, NavigableSet<byte[]> targetCols)
          Returns a scanner over both the memstore and the HStore files.
protected  RegionScanner HRegion.instantiateRegionScanner(Scan scan, List<KeyValueScanner> additionalScanners)
           
 long HRegionServer.openScanner(byte[] regionName, Scan scan)
           
 RegionScanner RegionCoprocessorHost.postScannerOpen(Scan scan, RegionScanner s)
           
 RegionScanner RegionCoprocessorHost.preScannerOpen(Scan scan)
           
 boolean MemStore.shouldSeek(Scan scan)
          Checks whether this memstore may contain the required keys.
 boolean StoreFile.Reader.shouldSeek(Scan scan, SortedSet<byte[]> columns)
           
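Server-side code iterates a RegionScanner row by row rather than through a ResultScanner. A minimal sketch, assuming an HRegion is already in hand (for example, from RegionCoprocessorEnvironment.getRegion() inside a coprocessor):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.regionserver.HRegion;
  import org.apache.hadoop.hbase.regionserver.RegionScanner;

  public class RegionScanExample {
    public static void scanRegion(HRegion region, Scan scan) throws IOException {
      RegionScanner rs = region.getScanner(scan);
      try {
        List<KeyValue> kvs = new ArrayList<KeyValue>();
        boolean more;
        do {
          kvs.clear();
          more = rs.next(kvs);  // fills kvs with the KeyValues of the next row
          // process kvs
        } while (more);
      } finally {
        rs.close();
      }
    }
  }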
 

Constructors in org.apache.hadoop.hbase.regionserver with parameters of type Scan
ScanQueryMatcher(Scan scan, byte[] family, NavigableSet<byte[]> columns, long ttl, KeyValue.KeyComparator rowComparator, int maxVersions)
           
ScanQueryMatcher(Scan scan, byte[] family, NavigableSet<byte[]> columns, long ttl, KeyValue.KeyComparator rowComparator, int minVersions, int maxVersions)
           
ScanQueryMatcher(Scan scan, byte[] family, NavigableSet<byte[]> columns, long ttl, KeyValue.KeyComparator rowComparator, int minVersions, int maxVersions, boolean retainDeletesInOutput, long readPointToUse)
          Constructs a ScanQueryMatcher for a Scan.
 

Uses of Scan in org.apache.hadoop.hbase.rest.client
 

Methods in org.apache.hadoop.hbase.rest.client with parameters of type Scan
 ResultScanner RemoteHTable.getScanner(Scan scan)
           
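RemoteHTable speaks to the HBase REST gateway, so scans look like ordinary client scans. A minimal sketch, assuming a REST gateway is reachable (host, port, and table name are illustrative):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.rest.client.Client;
  import org.apache.hadoop.hbase.rest.client.Cluster;
  import org.apache.hadoop.hbase.rest.client.RemoteHTable;

  public class RemoteScanExample {
    public static void scanOverRest(Scan scan) throws IOException {
      Client client = new Client(new Cluster().add("localhost", 8080));
      RemoteHTable table = new RemoteHTable(client, "mytable");
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {
          // process result
        }
      } finally {
        scanner.close();
        table.close();
      }
    }
  }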
 

Uses of Scan in org.apache.hadoop.hbase.rest.model
 

Methods in org.apache.hadoop.hbase.rest.model with parameters of type Scan
static ScannerModel ScannerModel.fromScan(Scan scan)
           
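A minimal sketch of converting a Scan into the model the REST interface serializes (the family name is illustrative; in this version fromScan declares a broad checked exception, which the sketch simply propagates):

  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.rest.model.ScannerModel;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ScannerModelExample {
    public static ScannerModel toModel() throws Exception {  // fromScan declares Exception
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));  // illustrative family
      ScannerModel model = ScannerModel.fromScan(scan);
      // model can now be serialized (e.g. to XML or protobuf) for the REST scanner resource
      return model;
    }
  }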
 



Copyright © 2012 The Apache Software Foundation. All Rights Reserved.