org.apache.hadoop.hbase.io.hfile
Class HFileBlock.FSReaderV1

java.lang.Object
  extended by org.apache.hadoop.hbase.io.hfile.HFileBlock.AbstractFSReader
      extended by org.apache.hadoop.hbase.io.hfile.HFileBlock.FSReaderV1
All Implemented Interfaces:
HFileBlock.FSReader
Enclosing class:
HFileBlock

public static class HFileBlock.FSReaderV1
extends HFileBlock.AbstractFSReader

Reads version 1 blocks from the file system. In version 1 blocks, everything, including the magic record, is compressed when compression is enabled; when no compression is used, everything is stored uncompressed. This reader returns blocks represented in the uniform version 2 format in memory.
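For orientation, a minimal construction sketch (not the actual HBase open path): the fs and path variables and the GZ compression choice are placeholders supplied by the caller, since in real code the algorithm is taken from the file's trailer.

  // Sketch only: fs, path, and the compression algorithm are assumed inputs.
  org.apache.hadoop.fs.FSDataInputStream istream = fs.open(path);
  long fileSize = fs.getFileStatus(path).getLen();
  HFileBlock.FSReader reader =
      new HFileBlock.FSReaderV1(istream, Compression.Algorithm.GZ, fileSize);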


Field Summary
 
Fields inherited from class org.apache.hadoop.hbase.io.hfile.HFileBlock.AbstractFSReader
compressAlgo, DEFAULT_BUFFER_SIZE, fileSize, istream
 
Constructor Summary
HFileBlock.FSReaderV1(org.apache.hadoop.fs.FSDataInputStream istream, Compression.Algorithm compressAlgo, long fileSize)
 
Method Summary
 HFileBlock readBlockData(long offset, long onDiskSizeWithMagic, int uncompressedSizeWithMagic, boolean pread)
          Read a version 1 block.
 
Methods inherited from class org.apache.hadoop.hbase.io.hfile.HFileBlock.AbstractFSReader
blockRange, createBufferedBoundedStream, decompress, readAtOffset
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

HFileBlock.FSReaderV1

public HFileBlock.FSReaderV1(org.apache.hadoop.fs.FSDataInputStream istream,
                             Compression.Algorithm compressAlgo,
                             long fileSize)
Method Detail

readBlockData

public HFileBlock readBlockData(long offset,
                                long onDiskSizeWithMagic,
                                int uncompressedSizeWithMagic,
                                boolean pread)
                         throws IOException
Read a version 1 block. There is no uncompressed header, and the block type (the magic record) is part of the compressed data. This implementation assumes that a bounded-range file input stream is needed to stop the decompressor from reading into the next block, because the decompressor simply consumes data without regard to whether it has reached the end of the compressed section. The block returned is still a version 2 block, and in particular, its first HFileBlock.HEADER_SIZE bytes contain a valid version 2 header.

Parameters:
offset - the offset of the block to read in the file
onDiskSizeWithMagic - the on-disk size of the version 1 block, including the magic record, which is part of the compressed data when compression is used
uncompressedSizeWithMagic - the uncompressed size of the version 1 block, including the magic record
pread - whether to use positional read instead of seek-and-read
Returns:
the newly read block
Throws:
IOException
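As a usage sketch, the reader can be invoked as shown below; the offset and size values are placeholders, since in real code they come from the version 1 block index, and the returned block is inspected through the version 2 API.

  // Sketch only: sizes are illustrative, normally obtained from the v1 block index.
  long offset = 0L;                      // where the block starts in the file
  long onDiskSizeWithMagic = 4096L;      // compressed size, incl. magic record (placeholder)
  int uncompressedSizeWithMagic = 8192;  // uncompressed size, incl. magic record (placeholder)
  HFileBlock block =
      reader.readBlockData(offset, onDiskSizeWithMagic, uncompressedSizeWithMagic, true);
  // The result is in version 2 format, so standard accessors apply.
  System.out.println(block.getBlockType());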


Copyright © 2012 The Apache Software Foundation. All Rights Reserved.