org.apache.pig.builtin
Class BinaryStorage

java.lang.Object
  extended by org.apache.pig.builtin.Utf8StorageConverter
      extended by org.apache.pig.builtin.BinaryStorage
All Implemented Interfaces:
LoadFunc, StoreFunc

public class BinaryStorage
extends Utf8StorageConverter
implements LoadFunc, StoreFunc

BinaryStorage is a simple, as-is serializer/deserializer pair. It is a LoadFunc which loads all of the given data from the given InputStream into a single Tuple, and a StoreFunc which writes out all input data as a single Tuple. BinaryStorage is intended for cases where input files are to be processed in their entirety, without any splitting or interpretation of their data.
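A minimal pure-Java sketch of these as-is semantics (this is not Pig's actual implementation; a single byte[] field stands in for the one-field Pig Tuple):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class BinaryStorageSketch {

    // Load side: consume the whole stream as one record, byte for byte.
    static byte[] loadWhole(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    // Store side: write the record's bytes back out unchanged.
    static void storeWhole(byte[] record, OutputStream out) throws IOException {
        out.write(record);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4, 5};
        byte[] tuple = loadWhole(new ByteArrayInputStream(data));
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        storeWhole(tuple, sink);
        // A load followed by a store reproduces the input exactly.
        System.out.println(Arrays.equals(data, sink.toByteArray()));
    }
}
```

Because nothing is interpreted, a round trip through load and store preserves the input bytes exactly.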


Field Summary
protected  int bufferSize
           
protected  long end
           
protected  BufferedPositionedInputStream in
           
protected  long offset
           
 
Fields inherited from class org.apache.pig.builtin.Utf8StorageConverter
mBagFactory, mLog, mTupleFactory
 
Constructor Summary
BinaryStorage()
          Create a BinaryStorage with default buffer size for reading inputs.
BinaryStorage(int bufferSize)
          Create a BinaryStorage with the given buffer size for reading inputs.
 
Method Summary
 void bindTo(OutputStream out)
          Specifies the OutputStream to write to.
 void bindTo(String fileName, BufferedPositionedInputStream in, long offset, long end)
          Specifies a portion of an InputStream from which to read tuples.
 Schema determineSchema(String fileName, ExecType execType, DataStorage storage)
          Find the schema from the loader.
 boolean equals(Object obj)
           
 void fieldsToRead(Schema schema)
          Indicate to the loader which fields will be needed.
 void finish()
          Do any post-processing needed once the last tuple has been stored.
 Tuple getNext()
          Retrieves the next tuple to be processed.
 void putNext(Tuple f)
          Write a tuple to the output stream to which this instance was previously bound.
 String toString()
           
 
Methods inherited from class org.apache.pig.builtin.Utf8StorageConverter
bytesToBag, bytesToCharArray, bytesToDouble, bytesToFloat, bytesToInteger, bytesToLong, bytesToMap, bytesToTuple, toBytes, toBytes, toBytes, toBytes, toBytes, toBytes, toBytes, toBytes
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
 
Methods inherited from interface org.apache.pig.LoadFunc
bytesToBag, bytesToCharArray, bytesToDouble, bytesToFloat, bytesToInteger, bytesToLong, bytesToMap, bytesToTuple
 

Field Detail

bufferSize

protected int bufferSize

in

protected BufferedPositionedInputStream in

offset

protected long offset

end

protected long end
Constructor Detail

BinaryStorage

public BinaryStorage()
Create a BinaryStorage with default buffer size for reading inputs.


BinaryStorage

public BinaryStorage(int bufferSize)
Create a BinaryStorage with the given buffer size for reading inputs.

Parameters:
bufferSize - buffer size to be used
Method Detail

bindTo

public void bindTo(String fileName,
                   BufferedPositionedInputStream in,
                   long offset,
                   long end)
            throws IOException
Description copied from interface: LoadFunc
Specifies a portion of an InputStream from which to read tuples. Because the starting and ending offsets may not fall on record boundaries, it is up to the implementation to determine the actual starting and ending offsets so that an arbitrarily sliced-up file is processed in its entirety.

A common way of handling slices in the middle of records is to start at the given offset and, if the offset is not zero, skip to the end of the first record (which may be a partial record) before reading tuples. Reading continues until a tuple has been read that ends at an offset past the ending offset.

The load function should not do any buffering on the input stream. Buffering will cause the offsets returned by is.getPos() to be unreliable.

Specified by:
bindTo in interface LoadFunc
Parameters:
fileName - the name of the file to be read
in - the stream representing the file to be processed, and which can also provide its position.
offset - the offset to start reading tuples.
end - the ending offset for reading.
Throws:
IOException
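The slice-handling convention above can be sketched for a hypothetical newline-delimited format (not BinaryStorage itself, which never splits its input). Backing up one byte before skipping the first partial record is an assumption borrowed from common Hadoop record readers, so a record that starts exactly at the slice offset is not lost:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class SliceReader {

    // Read the records belonging to the slice [offset, end) of `file`.
    static List<String> readSlice(byte[] file, long offset, long end) throws IOException {
        long pos = offset;
        if (offset != 0) {
            // Back up one byte and discard through the next newline, so a
            // record starting exactly at `offset` is not skipped.
            pos = offset - 1;
        }
        InputStream in = new ByteArrayInputStream(file, (int) pos, file.length - (int) pos);
        if (offset != 0) {
            int b;
            while ((b = in.read()) != -1) {
                pos++;
                if (b == '\n') break;
            }
        }
        List<String> records = new ArrayList<>();
        while (pos < end) {                  // a record starting before `end` is ours
            StringBuilder rec = new StringBuilder();
            int b = in.read();
            if (b == -1) break;              // end of file
            while (b != -1) {                // read the whole record, even past `end`
                pos++;
                if (b == '\n') break;
                rec.append((char) b);
                b = in.read();
            }
            records.add(rec.toString());
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        byte[] file = "aa\nbbb\ncccc\n".getBytes();
        // Two arbitrary slices together yield every record exactly once,
        // even though the boundary at offset 4 falls mid-record.
        System.out.println(readSlice(file, 0, 4));
        System.out.println(readSlice(file, 4, 12));
    }
}
```

The record straddling the ending offset is finished by the first slice, and the second slice discards it, so no record is read twice or dropped.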

getNext

public Tuple getNext()
              throws IOException
Description copied from interface: LoadFunc
Retrieves the next tuple to be processed.

Specified by:
getNext in interface LoadFunc
Returns:
the next tuple to be processed or null if there are no more tuples to be processed.
Throws:
IOException

bindTo

public void bindTo(OutputStream out)
            throws IOException
Description copied from interface: StoreFunc
Specifies the OutputStream to write to. This will be called before store(Tuple) is invoked.

Specified by:
bindTo in interface StoreFunc
Parameters:
out - The stream to write tuples to.
Throws:
IOException

finish

public void finish()
            throws IOException
Description copied from interface: StoreFunc
Do any post-processing needed once the last tuple has been stored. DO NOT CLOSE THE STREAM in this method; the stream will be closed later, outside of this function.

Specified by:
finish in interface StoreFunc
Throws:
IOException
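A sketch of a store function honoring this contract (a hypothetical class, not BinaryStorage; the byte[] stands in for a Tuple): finish() flushes any buffered output but leaves closing to the caller.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FinishSketch {
    private OutputStream out;

    void bindTo(OutputStream out) {
        this.out = new BufferedOutputStream(out);  // buffering on the store side is fine
    }

    void putNext(byte[] record) throws IOException {
        out.write(record);
    }

    void finish() throws IOException {
        out.flush();  // push buffered bytes downstream
        // Deliberately NOT out.close(): the framework closes the stream later.
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        FinishSketch store = new FinishSketch();
        store.bindTo(sink);
        store.putNext("abc".getBytes());
        store.finish();
        System.out.println(sink.size());  // buffered bytes are visible after finish()
    }
}
```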

putNext

public void putNext(Tuple f)
             throws IOException
Description copied from interface: StoreFunc
Write a tuple to the output stream to which this instance was previously bound.

Specified by:
putNext in interface StoreFunc
Parameters:
f - the tuple to store.
Throws:
IOException

toString

public String toString()
Overrides:
toString in class Object

equals

public boolean equals(Object obj)
Overrides:
equals in class Object

determineSchema

public Schema determineSchema(String fileName,
                              ExecType execType,
                              DataStorage storage)
                       throws IOException
Description copied from interface: LoadFunc
Find the schema from the loader. This function will be called at parse time (not run time) to see if the loader can provide a schema for the data. The loader may be able to do this if the data is self-describing (e.g. JSON). If the loader cannot determine the schema, it can return null. LoadFunc implementations that need to open the input fileName can use FileLocalizer.open(String fileName, ExecType execType, DataStorage storage) to get an InputStream, which they can then use to read the input data and discover the schema. Note: this will work only when fileName refers to a file on the local file system or a Hadoop file system.

Specified by:
determineSchema in interface LoadFunc
Parameters:
fileName - name of the file to be read (this will be the same as the filename in the load statement of the script)
execType - execution mode of the pig script: one of ExecType.LOCAL or ExecType.MAPREDUCE
storage - the DataStorage object corresponding to the execType
Returns:
a Schema describing the data if possible, or null otherwise.
Throws:
IOException

fieldsToRead

public void fieldsToRead(Schema schema)
Description copied from interface: LoadFunc
Indicate to the loader which fields will be needed. This can be useful for loaders that access data stored in a columnar format, where indicating the columns to be accessed ahead of time will save scans. If the load function cannot make use of this information, it is free to ignore it.

Specified by:
fieldsToRead in interface LoadFunc
Parameters:
schema - Schema indicating which columns will be needed.
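A sketch of why this hint saves work for a columnar loader. This is a hypothetical class, not Pig's API: a Set of column names stands in for the Schema, and the in-memory Map stands in for columnar storage.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ColumnarSketch {
    private Set<String> needed;       // filled in by fieldsToRead, or null if never called
    int columnsScanned = 0;

    // Record which columns the script actually uses.
    void fieldsToRead(Set<String> neededColumns) {
        this.needed = neededColumns;
    }

    // Scan only the needed columns; untouched columns cost nothing.
    List<int[]> load(Map<String, int[]> columns, List<String> schema) {
        List<int[]> result = new ArrayList<>();
        for (String col : schema) {
            if (needed != null && !needed.contains(col)) continue;  // skip: saves a scan
            columnsScanned++;
            result.add(columns.get(col));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, int[]> data = Map.of(
                "a", new int[]{1, 2}, "b", new int[]{3, 4}, "c", new int[]{5, 6});
        ColumnarSketch loader = new ColumnarSketch();
        loader.fieldsToRead(Set.of("a", "c"));
        List<int[]> cols = loader.load(data, List.of("a", "b", "c"));
        // Only 2 of 3 columns were scanned.
        System.out.println(cols.size() + " " + loader.columnsScanned);
    }
}
```

A loader that ignores the hint simply scans every column, which is the permitted fallback described above.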


Copyright © ${year} The Apache Software Foundation