|
Uses of SemanticException in org.apache.hadoop.hive.ql |
---|
Methods in org.apache.hadoop.hive.ql that throw SemanticException | |
---|---|
void |
QTestUtil.resetParser()
|
Uses of SemanticException in org.apache.hadoop.hive.ql.exec |
---|
Subclasses of SemanticException in org.apache.hadoop.hive.ql.exec | |
---|---|
class |
AmbiguousMethodException
Exception thrown by the UDF and UDAF method resolvers in case a unique method is not found. |
class |
NoMatchingMethodException
Exception thrown by the UDF and UDAF method resolvers in case no matching method is found. |
class |
UDFArgumentException
Exception class thrown when a UDF argument is invalid. |
class |
UDFArgumentLengthException
Exception class thrown when a UDF is invoked with the wrong number of arguments. |
class |
UDFArgumentTypeException
Exception class thrown when UDF arguments have the wrong types. |
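These subclasses are what a UDF's initialize method typically throws when argument validation fails. A minimal sketch of a GenericUDF using them (the class name, messages, and one-string-argument rule are illustrative assumptions):

```java
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Hypothetical UDF that accepts exactly one primitive argument.
public class MyStrictUdf extends GenericUDF {
  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentLengthException, UDFArgumentTypeException {
    if (arguments.length != 1) {
      // Wrong arity: report it through the dedicated subclass.
      throw new UDFArgumentLengthException("my_strict_udf takes exactly one argument");
    }
    if (arguments[0].getCategory() != ObjectInspector.Category.PRIMITIVE) {
      // Wrong type: the int is the 0-based position of the offending argument.
      throw new UDFArgumentTypeException(0, "my_strict_udf expects a primitive argument");
    }
    return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    // Sketch only: pass the value through unchanged.
    return arguments[0].get();
  }

  @Override
  public String getDisplayString(String[] children) {
    return "my_strict_udf(" + children[0] + ")";
  }
}
```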
Methods in org.apache.hadoop.hive.ql.exec that throw SemanticException | |
---|---|
static GenericUDAFEvaluator |
FunctionRegistry.getGenericUDAFEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns)
Get the GenericUDAF evaluator for the name and argumentClasses. |
static GenericUDAFEvaluator |
FunctionRegistry.getGenericWindowingEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns)
|
void |
Operator.removeChildAndAdoptItsChildren(Operator<? extends OperatorDesc> child)
Remove a child and add all of the child's children at the child's former location. |
static void |
Utilities.reworkMapRedWork(Task<? extends Serializable> task,
boolean reworkMapredWork,
HiveConf conf)
The check performed here is somewhat ad hoc. |
static void |
Utilities.validateColumnNames(List<String> colNames,
List<String> checkCols)
|
static void |
Utilities.validatePartSpec(Table tbl,
Map<String,String> partSpec)
|
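As an illustration of the evaluator lookup above, the following hedged sketch resolves a UDAF evaluator by name; getGenericUDAFEvaluator throws SemanticException when argument resolution fails (the function name and inspector choice are assumptions for the example):

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class UdafLookupExample {
  public static void main(String[] args) {
    // Argument inspectors for a hypothetical count(bigint) call.
    List<ObjectInspector> argOIs = Arrays.<ObjectInspector>asList(
        PrimitiveObjectInspectorFactory.javaLongObjectInspector);
    try {
      // Neither DISTINCT nor (*): both flags false.
      GenericUDAFEvaluator eval =
          FunctionRegistry.getGenericUDAFEvaluator("count", argOIs, false, false);
      if (eval == null) {
        System.err.println("no UDAF registered under that name");
      } else {
        System.out.println("Resolved evaluator: " + eval.getClass().getName());
      }
    } catch (SemanticException e) {
      // Thrown when no evaluator accepts these argument types.
      System.err.println("UDAF resolution failed: " + e.getMessage());
    }
  }
}
```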
Uses of SemanticException in org.apache.hadoop.hive.ql.hooks |
---|
Methods in org.apache.hadoop.hive.ql.hooks that throw SemanticException | |
---|---|
void |
VerifyHooksRunInOrder.RunFirstSemanticAnalysisHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
|
void |
VerifyHooksRunInOrder.RunSecondSemanticAnalysisHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
|
ASTNode |
VerifyHooksRunInOrder.RunFirstSemanticAnalysisHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
|
ASTNode |
VerifyHooksRunInOrder.RunSecondSemanticAnalysisHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
|
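The preAnalyze/postAnalyze pairs above all follow the HiveSemanticAnalyzerHook contract. A minimal sketch of a hook that rejects a statement by throwing SemanticException (the class name and the vetoed token are illustrative; hooks are registered via hive.semantic.analyzer.hook):

```java
import java.io.Serializable;
import java.util.List;
import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Hypothetical hook that vetoes DROP TABLE statements.
public class NoDropTableHook extends AbstractSemanticAnalyzerHook {
  @Override
  public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
      throws SemanticException {
    if ("TOK_DROPTABLE".equals(ast.getText())) {
      throw new SemanticException("DROP TABLE is disabled by NoDropTableHook");
    }
    return ast; // Returning the (possibly rewritten) AST continues analysis.
  }

  @Override
  public void postAnalyze(HiveSemanticAnalyzerHookContext context,
      List<Task<? extends Serializable>> rootTasks) throws SemanticException {
    // Inspect or validate the generated root tasks here if needed.
  }
}
```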
Uses of SemanticException in org.apache.hadoop.hive.ql.lib |
---|
Methods in org.apache.hadoop.hive.ql.lib that throw SemanticException | ||
---|---|---|
int |
RuleRegExp.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack. |
|
int |
RuleExactMatch.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack. |
|
int |
Rule.cost(Stack<Node> stack)
|
|
void |
TaskGraphWalker.dispatch(Node nd,
Stack<Node> ndStack)
|
|
void |
DefaultGraphWalker.dispatch(Node nd,
Stack<Node> ndStack)
Dispatch the current operator. |
|
Object |
Dispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs)
Dispatcher function. |
|
Object |
DefaultRuleDispatcher.dispatch(Node nd,
Stack<Node> ndStack,
Object... nodeOutputs)
Dispatcher function. |
|
void |
TaskGraphWalker.dispatch(Node nd,
Stack<Node> ndStack,
TaskGraphWalker.TaskGraphWalkerContext walkerCtx)
Dispatch the current operator. |
|
<T> T |
DefaultGraphWalker.dispatchAndReturn(Node nd,
Stack<Node> ndStack)
Returns the dispatch result. |
|
Object |
NodeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Generic process for all ops that don't have specific implementations. |
|
void |
GraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking. |
|
void |
TaskGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking. |
|
void |
DefaultGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking. |
|
void |
PreOrderWalker.walk(Node nd)
Walk the current operator and its descendants. |
|
void |
TaskGraphWalker.walk(Node nd)
Walk the current operator and its descendants. |
|
void |
DefaultGraphWalker.walk(Node nd)
Walk the current operator and its descendants.
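The dispatch/startWalking/walk methods above propagate SemanticException raised by node processors. A hedged sketch of the usual wiring of rules, dispatcher, and walker (the rule pattern "TS%" and the failing processor are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Stack;
import org.apache.hadoop.hive.ql.lib.DefaultGraphWalker;
import org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher;
import org.apache.hadoop.hive.ql.lib.Dispatcher;
import org.apache.hadoop.hive.ql.lib.GraphWalker;
import org.apache.hadoop.hive.ql.lib.Node;
import org.apache.hadoop.hive.ql.lib.NodeProcessor;
import org.apache.hadoop.hive.ql.lib.NodeProcessorCtx;
import org.apache.hadoop.hive.ql.lib.Rule;
import org.apache.hadoop.hive.ql.lib.RuleRegExp;
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class WalkerWiringExample {
  // A processor may reject a plan shape by throwing SemanticException.
  static class FailingProcessor implements NodeProcessor {
    public Object process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx,
        Object... nodeOutputs) throws SemanticException {
      throw new SemanticException("unsupported node: " + nd.getName());
    }
  }

  public static void walk(ArrayList<Node> topNodes, NodeProcessorCtx ctx)
      throws SemanticException {
    Map<Rule, NodeProcessor> rules = new LinkedHashMap<Rule, NodeProcessor>();
    // Hypothetical rule: fire when a node whose name starts with "TS" is on top.
    rules.put(new RuleRegExp("R1", "TS%"), new FailingProcessor());
    // Null default processor: nodes matching no rule are skipped.
    Dispatcher disp = new DefaultRuleDispatcher(null, rules, ctx);
    GraphWalker walker = new DefaultGraphWalker(disp);
    walker.startWalking(topNodes, null); // SemanticException propagates out.
  }
}
```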
Uses of SemanticException in org.apache.hadoop.hive.ql.metadata |
---|
Methods in org.apache.hadoop.hive.ql.metadata that throw SemanticException | |
---|---|
void |
DummySemanticAnalyzerHook1.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
|
void |
DummySemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
|
ASTNode |
DummySemanticAnalyzerHook1.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
|
ASTNode |
DummySemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer |
---|
Methods in org.apache.hadoop.hive.ql.optimizer that throw SemanticException | |
---|---|
protected boolean |
AbstractSMBJoinProc.canConvertBucketMapJoinToSMBJoin(MapJoinOperator mapJoinOp,
Stack<Node> stack,
SortBucketJoinProcCtx smbJoinContext,
Object... nodeOutputs)
|
protected boolean |
AbstractSMBJoinProc.canConvertJoinToBucketMapJoin(JoinOperator joinOp,
ParseContext pGraphContext,
SortBucketJoinProcCtx context)
|
protected boolean |
AbstractSMBJoinProc.canConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext,
ParseContext pGraphContext)
|
protected boolean |
AbstractBucketJoinProc.canConvertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
ParseContext pGraphContext,
BucketJoinProcCtx context)
|
protected boolean |
AbstractBucketJoinProc.checkConvertBucketMapJoin(ParseContext pGraphContext,
BucketJoinProcCtx context,
QBJoinTree joinCtx,
Map<Byte,List<ExprNodeDesc>> keysMap,
String baseBigAlias,
List<String> joinAliases)
|
protected boolean |
AbstractSMBJoinProc.checkConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext,
ParseContext pGraphContext)
|
protected GroupByOptimizer.GroupByOptimizerSortMatch |
GroupByOptimizer.SortGroupByProcessor.checkSortGroupBy(Stack<Node> stack,
GroupByOperator groupByOp)
|
protected MapJoinOperator |
AbstractSMBJoinProc.convertJoinToBucketMapJoin(JoinOperator joinOp,
SortBucketJoinProcCtx joinContext,
ParseContext parseContext)
|
protected void |
AbstractSMBJoinProc.convertJoinToSMBJoin(JoinOperator joinOp,
SortBucketJoinProcCtx smbJoinContext,
ParseContext parseContext)
|
static MapJoinOperator |
MapJoinProcessor.convertMapJoin(LinkedHashMap<Operator<? extends OperatorDesc>,OpParseContext> opParseCtxMap,
JoinOperator op,
QBJoinTree joinTree,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
Convert a regular join to a map-side join. |
protected void |
AbstractBucketJoinProc.convertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
BucketJoinProcCtx context)
|
static MapJoinOperator |
MapJoinProcessor.convertSMBJoinToMapJoin(Map<Operator<? extends OperatorDesc>,OpParseContext> opParseCtxMap,
SMBMapJoinOperator smbJoinOp,
QBJoinTree joinTree,
int bigTablePos,
boolean noCheckOuterJoin)
Convert a sort-merge join to a map-side join. |
List<String> |
ColumnPrunerProcCtx.genColLists(Operator<? extends OperatorDesc> curOp)
Creates the list of internal column names (these names are used in the RowResolver and are different from the external column names) that are needed in the subtree. |
MapJoinOperator |
MapJoinProcessor.generateMapJoinOperator(ParseContext pctx,
JoinOperator op,
QBJoinTree joinTree,
int mapJoinPos)
|
protected abstract void |
PrunerOperatorFactory.FilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
Generate predicate. |
static String |
MapJoinProcessor.genLocalWorkForMapJoin(MapredWork newWork,
MapJoinOperator newMapJoinOp,
int mapJoinPos)
|
static String |
MapJoinProcessor.genMapJoinOpAndLocalWork(MapredWork newWork,
JoinOperator op,
int mapJoinPos)
Convert the join to a map-join and also generate any local work needed. |
int |
TableSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates)
|
int |
BigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseContext,
JoinOperator joinOp,
Set<Integer> joinCandidates)
|
int |
AvgPartitionSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates)
|
static List<Index> |
IndexUtils.getIndexes(Table baseTableMetaData,
List<String> matchIndexTypes)
Get a list of indexes on a table that match given types. |
List<String> |
ColumnPrunerProcCtx.getSelectColsFromLVJoin(RowResolver rr,
List<String> colList)
Create the list of internal columns for the select tag of a lateral view. |
static void |
GenMapRedUtils.initPlan(ReduceSinkOperator op,
GenMRProcContext opProcCtx)
Initialize the current plan by adding it to root tasks. |
static void |
GenMapRedUtils.initUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currTask,
boolean local)
|
static void |
GenMapRedUtils.initUnionPlan(ReduceSinkOperator op,
UnionOperator currUnionOp,
GenMRProcContext opProcCtx,
Task<? extends Serializable> unionTask)
Initialize the current union plan. |
static void |
GenMapRedUtils.joinPlan(Task<? extends Serializable> currTask,
Task<? extends Serializable> oldTask,
GenMRProcContext opProcCtx)
Merge the current task into the old task for the reducer. |
static void |
GenMapRedUtils.joinUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currentUnionTask,
Task<? extends Serializable> existingTask,
boolean local)
|
static SamplePruner.LimitPruneRetStatus |
SamplePruner.limitPrune(Partition part,
long sizeLimit,
int fileLimit,
Collection<Path> retPathList)
Try to generate a subset of the partition's files that reaches the size limit while keeping the number of files under fileLimit. |
ParseContext |
Optimizer.optimize()
Invoke all the transformations one by one, and alter the query plan. |
Object |
GenMRTableScan1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Table scan encountered. |
Object |
SkewJoinOptimizer.SkewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GenMRRedSink3.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce sink encountered. |
Object |
PrunerExpressionOperatorFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PrunerExpressionOperatorFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PrunerExpressionOperatorFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PrunerExpressionOperatorFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GenMRUnion1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Union operator encountered. |
Object |
BucketingSortingReduceSinkOptimizer.BucketSortReduceSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GenMRRedSink2.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce sink encountered. |
Object |
SamplePruner.FilterPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
SamplePruner.DefaultPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PrunerOperatorFactory.FilterPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PrunerOperatorFactory.DefaultPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerFilterProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerGroupByProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerPTFProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerDefaultProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerTableScanProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerReduceSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewForwardProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerSelectProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
ColumnPrunerProcFactory.ColumnPrunerMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
abstract Object |
AbstractBucketJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GroupByOptimizer.SortGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GroupByOptimizer.SortGroupBySkewProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
MapJoinProcessor.CurrentMapJoin.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in the context. |
Object |
MapJoinProcessor.MapJoinFS.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in a list of mapjoins followed by a filesink. |
Object |
MapJoinProcessor.MapJoinDefault.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the mapjoin in a rejected list. |
Object |
MapJoinProcessor.Default.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Nothing to do. |
Object |
SortedMergeBucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
SortedMergeJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
GenMRRedSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Sink encountered. |
Object |
GenMRFileSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
File Sink Operator encountered. |
Object |
GenMROperator.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Generic operator encountered. |
abstract Object |
AbstractSMBJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
protected void |
GroupByOptimizer.SortGroupByProcessor.processGroupBy(GroupByOptimizer.GroupByOptimizerContext ctx,
Stack<Node> stack,
GroupByOperator groupByOp,
int depth)
|
static Path[] |
SamplePruner.prune(Partition part,
FilterDesc.sampleDesc sampleDescr)
Prunes to get all the files in the partition that satisfy the TABLESAMPLE clause. |
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
Operator<? extends OperatorDesc> topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx)
Set the current task in the mapredWork. |
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
Operator<? extends OperatorDesc> topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx,
PrunedPartitionList pList)
Set the current task in the mapredWork. |
static void |
GenMapRedUtils.setTaskPlan(String path,
String alias,
Operator<? extends OperatorDesc> topOp,
MapWork plan,
boolean local,
TableDesc tt_desc)
Set the current task in the mapredWork. |
ParseContext |
Transform.transform(ParseContext pctx)
All transformation steps implement this interface. |
ParseContext |
LimitPushdownOptimizer.transform(ParseContext pctx)
|
ParseContext |
SkewJoinOptimizer.transform(ParseContext pctx)
|
ParseContext |
SimpleFetchAggregation.transform(ParseContext pctx)
|
ParseContext |
BucketingSortingReduceSinkOptimizer.transform(ParseContext pctx)
|
ParseContext |
NonBlockingOpDeDupProc.transform(ParseContext pctx)
|
ParseContext |
SamplePruner.transform(ParseContext pctx)
|
ParseContext |
GlobalLimitOptimizer.transform(ParseContext pctx)
|
ParseContext |
GroupByOptimizer.transform(ParseContext pctx)
|
ParseContext |
MapJoinProcessor.transform(ParseContext pactx)
Transform the query tree. |
ParseContext |
JoinReorder.transform(ParseContext pactx)
Transform the query tree. |
ParseContext |
SortedMergeBucketMapJoinOptimizer.transform(ParseContext pctx)
|
ParseContext |
ColumnPruner.transform(ParseContext pactx)
Transform the query tree. |
ParseContext |
SimpleFetchOptimizer.transform(ParseContext pctx)
|
ParseContext |
BucketMapJoinOptimizer.transform(ParseContext pctx)
|
void |
ColumnPruner.ColumnPrunerWalker.walk(Node nd)
Walk the given operator. |
static Map<Node,Object> |
PrunerUtils.walkExprTree(ExprNodeDesc pred,
NodeProcessorCtx ctx,
NodeProcessor colProc,
NodeProcessor fieldProc,
NodeProcessor genFuncProc,
NodeProcessor defProc)
Walk expression tree for pruner generation. |
static void |
PrunerUtils.walkOperatorTree(ParseContext pctx,
NodeProcessorCtx opWalkerCtx,
NodeProcessor filterProc,
NodeProcessor defaultProc)
Walk operator tree for pruner generation. |
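Every transform(ParseContext) entry above implements the Transform interface ("All transformation steps implement this interface", per ql.plan below). A skeletal, illustrative implementation of the contract:

```java
import org.apache.hadoop.hive.ql.optimizer.Transform;
import org.apache.hadoop.hive.ql.parse.ParseContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Hypothetical no-op transformation showing the contract: take a
// ParseContext, rewrite the operator tree, and return the (same or new)
// context, signalling unrecoverable problems via SemanticException.
public class IdentityTransform implements Transform {
  @Override
  public ParseContext transform(ParseContext pctx) throws SemanticException {
    if (pctx == null) {
      throw new SemanticException("ParseContext must not be null");
    }
    // A real optimizer pass would walk pctx's operator tree here,
    // typically with the GraphWalker/Dispatcher machinery from ql.lib.
    return pctx;
  }
}
```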
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.correlation |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.correlation that throw SemanticException | ||
---|---|---|
protected static void |
QueryPlanTreeTransformation.applyCorrelation(ParseContext pCtx,
CorrelationOptimizer.CorrelationNodeProcCtx corrCtx,
IntraQueryCorrelation correlation)
Based on the correlation, we transform the query plan tree (operator tree). |
|
protected static <T extends Operator<?>> T[] |
CorrelationUtilities.findParents(JoinOperator join,
Class<T> target)
|
|
protected static <T extends Operator<?>> T |
CorrelationUtilities.findPossibleParent(Operator<?> start,
Class<T> target,
boolean trustScript)
|
|
protected static <T extends Operator<?>> T[] |
CorrelationUtilities.findPossibleParents(Operator<?> start,
Class<T> target,
boolean trustScript)
|
|
static List<Operator<? extends OperatorDesc>> |
CorrelationUtilities.findSiblingOperators(Operator<? extends OperatorDesc> op)
Find all sibling operators of op, i.e. operators that share a child operator with op (op included). |
|
static List<ReduceSinkOperator> |
CorrelationUtilities.findSiblingReduceSinkOperators(ReduceSinkOperator op)
Find all sibling ReduceSinkOperators of op, i.e. those that share a child operator with op (op included). |
|
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator)
|
|
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator,
boolean throwException)
|
|
protected static <T extends Operator<?>> T |
CorrelationUtilities.getSingleChild(Operator<?> operator,
Class<T> type)
|
|
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator)
|
|
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator,
boolean throwException)
|
|
protected static <T extends Operator<?>> T |
CorrelationUtilities.getSingleParent(Operator<?> operator,
Class<T> type)
|
|
protected static Operator<?> |
CorrelationUtilities.getStartForGroupBy(ReduceSinkOperator cRS)
|
|
protected static boolean |
CorrelationUtilities.hasGroupingSet(ReduceSinkOperator cRS)
|
|
protected static int |
CorrelationUtilities.indexOf(ExprNodeDesc cexpr,
ExprNodeDesc[] pexprs,
Operator child,
Operator[] parents,
boolean[] sorted)
|
|
protected static void |
CorrelationUtilities.insertOperatorBetween(Operator<?> newOperator,
Operator<?> parent,
Operator<?> child)
|
|
protected static void |
CorrelationUtilities.isNullOperator(Operator<?> operator)
Throws an exception if the input operator is null. |
|
protected boolean |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.merge(ReduceSinkOperator cRS,
JoinOperator pJoin,
int minReducer)
|
|
protected boolean |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.merge(ReduceSinkOperator cRS,
ReduceSinkOperator pRS,
int minReducer)
The current RS-dedup pass removes or replaces the child ReduceSinkOperator. |
|
Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
|
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
GroupByOperator cGBY,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx)
|
|
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx)
|
|
protected static void |
CorrelationUtilities.removeReduceSinkForGroupBy(ReduceSinkOperator cRS,
GroupByOperator cGBYr,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx)
|
|
protected static SelectOperator |
CorrelationUtilities.replaceOperatorWithSelect(Operator<?> operator,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx)
|
|
protected static SelectOperator |
CorrelationUtilities.replaceReduceSinkWithSelectOperator(ReduceSinkOperator childRS,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx)
|
|
protected Integer |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.sameKeys(List<ExprNodeDesc> cexprs,
List<ExprNodeDesc> pexprs,
Operator<?> child,
Operator<?> parent)
|
|
ParseContext |
ReduceSinkDeDuplication.transform(ParseContext pctx)
|
|
ParseContext |
CorrelationOptimizer.transform(ParseContext pctx)
Detect correlations and transform the query tree. |
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.index |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.index that throw SemanticException | |
---|---|
static ParseContext |
RewriteParseContextGenerator.generateOperatorTree(HiveConf conf,
String command)
Parse the input command string and generate an ASTNode tree. |
void |
RewriteQueryUsingAggregateIndexCtx.invokeRewriteQueryProc(Operator<? extends OperatorDesc> topOp)
Walk the original operator tree with the DefaultGraphWalker, applying the rewrite rules. |
ParseContext |
RewriteGBUsingIndex.transform(ParseContext pctx)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.lineage |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.lineage that throw SemanticException | |
---|---|
static LineageInfo.Dependency |
ExprProcFactory.getExprDependency(LineageCtx lctx,
Operator<? extends OperatorDesc> inpOp,
ExprNodeDesc expr)
Gets the expression dependencies for the expression. |
Object |
OpProcFactory.TransformLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.TableScanLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.JoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.LateralViewJoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.SelectLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.GroupByLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.UnionLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.ReduceSinkLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.DefaultLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprProcFactory.GenericExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
ParseContext |
Generator.transform(ParseContext pctx)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.listbucketingpruner |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.listbucketingpruner that throw SemanticException | |
---|---|
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.flat(List<List<String>> uniqSkewedElements)
Flatten a dynamic multi-dimensional collection. |
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.generateCollection(List<List<String>> values)
Find out the complete skewed-element collection. |
protected void |
LBProcFactory.LBPRFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
|
protected void |
LBPartitionProcFactory.LBPRPartitionFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
|
static ExprNodeDesc |
LBExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred,
Partition part)
Generates the list bucketing pruner for the expression tree. |
void |
DynamicMultiDimeCollectionTest.testFlat1()
|
void |
DynamicMultiDimeCollectionTest.testFlat2()
|
void |
DynamicMultiDimeCollectionTest.testFlat3()
|
ParseContext |
ListBucketingPruner.transform(ParseContext pctx)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.pcr |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.pcr that throw SemanticException | |
---|---|
Object |
PcrExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PcrExprProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PcrExprProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PcrExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PcrOpProcFactory.FilterPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
PcrOpProcFactory.DefaultPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
ParseContext |
PartitionConditionRemover.transform(ParseContext pctx)
|
static PcrExprProcFactory.NodeInfoWrapper |
PcrExprProcFactory.walkExprTree(String tabAlias,
ArrayList<Partition> parts,
List<VirtualColumn> vcs,
ExprNodeDesc pred)
Remove partition conditions from the expression tree when necessary. |
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.physical |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.physical that throw SemanticException | |
---|---|
Object |
AbstractJoinTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs)
|
long |
AbstractJoinTaskDispatcher.getTotalKnownInputSize(Context context,
MapWork currWork,
Map<String,ArrayList<String>> pathToAliases,
HashMap<String,Long> aliasToSize)
|
PhysicalContext |
PhysicalOptimizer.optimize()
Invoke all the resolvers one by one, and alter the physical plan. |
Object |
LocalMapJoinProcFactory.MapJoinFollowedByGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
LocalMapJoinProcFactory.LocalMapJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.DefaultInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.JoinInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.SelectInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.FileSinkInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.ExtractInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.MultiGroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.GroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
BucketingSortingOpProcFactory.ForwardingInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
SkewJoinProcFactory.SkewJoinJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
SkewJoinProcFactory.SkewJoinDefaultProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Task<? extends Serializable> |
SortMergeJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context)
|
Task<? extends Serializable> |
CommonJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context)
|
abstract Task<? extends Serializable> |
AbstractJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context)
|
static void |
GenMRSkewJoinProcessor.processSkewJoin(JoinOperator joinOp,
Task<? extends Serializable> currTask,
ParseContext parseCtx)
Create tasks for processing skew joins. |
PhysicalContext |
CommonJoinResolver.resolve(PhysicalContext pctx)
|
PhysicalContext |
SortMergeJoinResolver.resolve(PhysicalContext pctx)
|
PhysicalContext |
IndexWhereResolver.resolve(PhysicalContext physicalContext)
|
PhysicalContext |
SkewJoinResolver.resolve(PhysicalContext pctx)
|
PhysicalContext |
BucketingSortingInferenceOptimizer.resolve(PhysicalContext pctx)
|
PhysicalContext |
SamplingOptimizer.resolve(PhysicalContext pctx)
|
PhysicalContext |
PhysicalPlanResolver.resolve(PhysicalContext pctx)
All physical plan resolvers have to implement this entry method. |
PhysicalContext |
MetadataOnlyOptimizer.resolve(PhysicalContext pctx)
|
PhysicalContext |
MapJoinResolver.resolve(PhysicalContext pctx)
|
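Each resolve(PhysicalContext) entry above implements PhysicalPlanResolver. A minimal, illustrative sketch (the null check on root tasks is an assumption made for the example):

```java
import org.apache.hadoop.hive.ql.optimizer.physical.PhysicalContext;
import org.apache.hadoop.hive.ql.optimizer.physical.PhysicalPlanResolver;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Hypothetical resolver: inspects the physical plan and returns the
// context unchanged. Real resolvers (e.g. MapJoinResolver) rewrite tasks.
public class NoOpResolver implements PhysicalPlanResolver {
  @Override
  public PhysicalContext resolve(PhysicalContext pctx) throws SemanticException {
    if (pctx.getRootTasks() == null) {
      throw new SemanticException("physical plan has no root tasks");
    }
    return pctx;
  }
}
```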
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.physical.index |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.physical.index that throw SemanticException | |
---|---|
Object |
IndexWhereTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs)
|
Object |
IndexWhereProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.ppr |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.ppr that throw SemanticException | |
---|---|
protected void |
OpProcFactory.FilterPPR.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
|
static ExprNodeDesc |
ExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred)
Generates the partition pruner for the expression tree. |
ParseContext |
PartitionPruner.transform(ParseContext pctx)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.unionproc |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.unionproc that throw SemanticException | |
---|---|
Object |
UnionProcFactory.MapRedUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
UnionProcFactory.MapUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
UnionProcFactory.UnknownUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
UnionProcFactory.UnionNoProcessFile.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
UnionProcFactory.NoUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
ParseContext |
UnionProcessor.transform(ParseContext pCtx)
Transform the query tree. |
Uses of SemanticException in org.apache.hadoop.hive.ql.parse |
---|
Methods in org.apache.hadoop.hive.ql.parse that throw SemanticException | |
---|---|
protected static ArrayList<PTFInvocationSpec.OrderExpression> |
PTFTranslator.addPartitionExpressionsToOrderList(ArrayList<PTFInvocationSpec.PartitionExpression> partCols,
ArrayList<PTFInvocationSpec.OrderExpression> orderCols)
|
void |
ColumnStatsSemanticAnalyzer.analyze(ASTNode ast,
Context origCtx)
|
void |
BaseSemanticAnalyzer.analyze(ASTNode ast,
Context ctx)
|
ColumnAccessInfo |
ColumnAccessAnalyzer.analyzeColumnAccess()
|
protected void |
BaseSemanticAnalyzer.analyzeDDLSkewedValues(List<List<String>> skewedValues,
ASTNode child)
Handle skewed values in DDL. |
void |
SemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
ExportSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
abstract void |
BaseSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
LoadSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
MacroSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
ExplainSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
DDLSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
FunctionSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
void |
ImportSemanticAnalyzer.analyzeInternal(ASTNode ast)
|
protected List<String> |
BaseSemanticAnalyzer.analyzeSkewedTablDDLColNames(List<String> skewedColNames,
ASTNode child)
Analyze list bucketing column names. |
TableAccessInfo |
TableAccessAnalyzer.analyzeTableAccess()
|
protected static RowResolver |
PTFTranslator.buildRowResolverForNoop(String tabAlias,
StructObjectInspector rowObjectInspector,
RowResolver inputRowResolver)
|
protected static RowResolver |
PTFTranslator.buildRowResolverForPTF(String tbFnName,
String tabAlias,
StructObjectInspector rowObjectInspector,
ArrayList<String> outputColNames,
RowResolver inputRR)
|
protected RowResolver |
PTFTranslator.buildRowResolverForWindowing(PTFDesc.WindowTableFunctionDef def)
|
static String |
BaseSemanticAnalyzer.charSetString(String charSetName,
String charSetString)
|
void |
RowResolver.checkColumn(String tableAlias,
String columnAlias)
Check whether the column name already exists in the RowResolver. |
void |
MapReduceCompiler.compile(ParseContext pCtx,
List<Task<? extends Serializable>> rootTasks,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs)
|
static ArrayList<PTFInvocationSpec> |
PTFTranslator.componentize(PTFInvocationSpec ptfInvocation)
|
static void |
EximUtil.createExportDump(FileSystem fs,
Path metadataPath,
Table tableHandle,
List<Partition> partitions)
|
static void |
EximUtil.doCheckCompatibility(String currVersion,
String version,
String fcVersion)
|
boolean |
SemanticAnalyzer.doPhase1(ASTNode ast,
QB qb,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.Phase1Ctx ctx_1)
Phase 1: (including, but not limited to): 1. |
void |
SemanticAnalyzer.doPhase1QBExpr(ASTNode ast,
QBExpr qbexpr,
String id,
String alias)
|
protected HashMap<String,String> |
BaseSemanticAnalyzer.extractPartitionSpecs(org.antlr.runtime.tree.Tree partspec)
|
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input)
Generates expression node descriptors for the expression and its children using the default TypeCheckCtx. |
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Generates all of the expression node descriptors for the expression and its children using the supplied TypeCheckCtx. |
static Map<ASTNode,ExprNodeDesc> |
TypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx)
|
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input)
Generates an expression node descriptor for the expression using the default TypeCheckCtx. |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Returns expression node descriptor for the expression. |
Operator |
SemanticAnalyzer.genPlan(QB qb)
|
static BaseSemanticAnalyzer |
SemanticAnalyzerFactory.get(HiveConf conf,
ASTNode tree)
|
ColumnInfo |
RowResolver.get(String tab_alias,
String col_alias)
Gets the ColumnInfo for a tab_alias.col_alias column reference. |
static ASTNode |
PTFTranslator.getASTNode(ColumnInfo cInfo,
RowResolver rr)
|
protected List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast)
|
static List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast,
boolean lowerCase)
Get the list of FieldSchema out of the ASTNode. |
ColumnInfo |
RowResolver.getExpression(ASTNode node)
Retrieves the ColumnInfo corresponding to a source expression which exactly matches the string rendering of the given ASTNode. |
void |
SemanticAnalyzer.getMetaData(QB qb)
|
void |
SemanticAnalyzer.getMetaData(QB qb,
ReadEntity parentInput)
|
protected List<String> |
BaseSemanticAnalyzer.getSkewedValuesFromASTNode(Node node)
Retrieve skewed values from ASTNode. |
static String |
DDLSemanticAnalyzer.getTypeName(ASTNode node)
|
protected static String |
BaseSemanticAnalyzer.getTypeStringFromAST(ASTNode typeNode)
|
static VarcharTypeParams |
ParseUtils.getVarcharParams(String typeName,
ASTNode node)
|
protected void |
BaseSemanticAnalyzer.handleGenericFileFormat(ASTNode node)
|
WindowingSpec |
WindowingComponentizer.next(HiveConf hCfg,
SemanticAnalyzer semAly,
org.apache.hadoop.hive.ql.parse.UnparseTranslator unparseT,
RowResolver inputRR)
|
static ArrayList<WindowingSpec.WindowExpressionSpec> |
SemanticAnalyzer.parseSelect(String selectExprStr)
|
void |
HiveSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
Invoked after Hive performs its own semantic analysis on a statement (including optimization). |
void |
AbstractSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
|
ASTNode |
HiveSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
Invoked before Hive performs its own semantic analysis on a statement. |
ASTNode |
AbstractSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
|
Object |
PrintOpTreeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.NullExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.NumExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.StrExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.BoolExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.DateExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
TypeCheckProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
static ExprNodeDesc |
TypeCheckProcFactory.processGByExpr(Node nd,
Object procCtx)
Function to do groupby subexpression elimination. |
protected void |
SemanticAnalyzer.processNoScanCommand(ASTNode tree)
Process the ANALYZE ... NOSCAN command. |
protected void |
SemanticAnalyzer.processPartialScanCommand(ASTNode tree)
Process the ANALYZE ... PARTIALSCAN command. |
static Map.Entry<Table,List<Partition>> |
EximUtil.readMetaData(FileSystem fs,
Path metadataPath)
|
static String |
EximUtil.relativeToAbsolutePath(HiveConf conf,
String location)
|
void |
TestEximUtil.testCheckCompatibility()
|
PTFDesc |
PTFTranslator.translate(PTFInvocationSpec qSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
org.apache.hadoop.hive.ql.parse.UnparseTranslator unparseT)
|
PTFDesc |
PTFTranslator.translate(WindowingSpec wdwSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
org.apache.hadoop.hive.ql.parse.UnparseTranslator unparseT)
|
void |
SemanticAnalyzer.validate()
|
void |
BaseSemanticAnalyzer.validate()
|
void |
WindowingSpec.validateAndMakeEffective()
|
static List<String> |
ParseUtils.validateColumnNameUniqueness(List<FieldSchema> fieldSchemas)
|
protected static void |
PTFTranslator.validateComparable(ObjectInspector OI,
String errMsg)
|
static void |
PTFTranslator.validateNoLeadLagInValueBoundarySpec(ASTNode node)
|
static void |
BaseSemanticAnalyzer.validatePartSpec(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf)
|
void |
GenMapRedWalker.walk(Node nd)
Walk the given operator. |
Constructors in org.apache.hadoop.hive.ql.parse that throw SemanticException | |
---|---|
BaseSemanticAnalyzer.tableSpec(Hive db,
HiveConf conf,
ASTNode ast)
|
|
BaseSemanticAnalyzer.tableSpec(Hive db,
HiveConf conf,
ASTNode ast,
boolean allowDynamicPartitionsSpec,
boolean allowPartialPartitionsSpec)
|
|
BaseSemanticAnalyzer(HiveConf conf)
|
|
ColumnStatsSemanticAnalyzer(HiveConf conf)
|
|
ColumnStatsSemanticAnalyzer(HiveConf conf,
ASTNode tree)
|
|
DDLSemanticAnalyzer(HiveConf conf)
|
|
ExplainSemanticAnalyzer(HiveConf conf)
|
|
ExportSemanticAnalyzer(HiveConf conf)
|
|
FunctionSemanticAnalyzer(HiveConf conf)
|
|
ImportSemanticAnalyzer(HiveConf conf)
|
|
LoadSemanticAnalyzer(HiveConf conf)
|
|
MacroSemanticAnalyzer(HiveConf conf)
|
|
SemanticAnalyzer(HiveConf conf)
|
|
WindowingComponentizer(WindowingSpec originalSpec)
|
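The analyze/validate entries above are the heart of the compile flow. A hedged sketch of driving semantic analysis directly, with SemanticException handling (the wrapper class and error handling are illustrative):

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Context;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
import org.apache.hadoop.hive.ql.parse.ParseDriver;
import org.apache.hadoop.hive.ql.parse.ParseUtils;
import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory;
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class AnalyzeExample {
  public static void analyze(HiveConf conf, String command) throws Exception {
    Context ctx = new Context(conf);
    try {
      // Parse the statement and strip the synthetic root token.
      ASTNode tree = new ParseDriver().parse(command, ctx);
      tree = ParseUtils.findRootNonNullToken(tree);
      // Pick the analyzer matching the statement kind (DDL, query, load, ...).
      BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, tree);
      sem.analyze(tree, ctx);   // may throw SemanticException
      sem.validate();           // post-analysis plan validation
    } catch (SemanticException e) {
      System.err.println("semantic error: " + e.getMessage());
      throw e;
    }
  }
}
```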
Uses of SemanticException in org.apache.hadoop.hive.ql.plan |
---|
Methods in org.apache.hadoop.hive.ql.plan that throw SemanticException | |
---|---|
static ExprNodeDesc |
ExprNodeDescUtils.backtrack(ExprNodeDesc source,
Operator<?> current,
Operator<?> terminal)
|
static ArrayList<ExprNodeDesc> |
ExprNodeDescUtils.backtrack(List<ExprNodeDesc> sources,
Operator<?> current,
Operator<?> terminal)
Convert expressions in the current operator to those in the terminal operator, where terminal is an ancestor of current, or null to backtrack to the top operator. |
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
ArrayList<ExprNodeDesc> valueCols,
List<String> outputColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers)
Create the reduce sink descriptor. |
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
int numKeys,
ArrayList<ExprNodeDesc> valueCols,
List<List<Integer>> distinctColIndices,
List<String> outputKeyColumnNames,
List<String> outputValueColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers)
Create the reduce sink descriptor. |
static Operator<?> |
ExprNodeDescUtils.getSingleParent(Operator<?> current,
Operator<?> terminal)
|
void |
AlterTableDesc.validate()
Validate alter table description. |
void |
CreateTableDesc.validate()
|
static void |
ValidationUtility.validateSkewedColNames(List<String> colNames,
List<String> skewedColNames)
Each skewed column name must be a column defined on the table. |
static void |
ValidationUtility.validateSkewedColNameValueNumberMatch(List<String> skewedColNames,
List<List<String>> skewedColValues)
Skewed column names and value lists must match in number. |
static void |
ValidationUtility.validateSkewedColumnNameUniqueness(List<String> names)
Detect duplicate column names. |
static void |
ValidationUtility.validateSkewedInformation(List<String> colNames,
List<String> skewedColNames,
List<List<String>> skewedColValues)
Validate skewed table information. |
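A small, illustrative usage sketch of the skewed-table validation helpers above (the column data is made up for the example):

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.plan.ValidationUtility;

public class SkewValidationExample {
  public static void main(String[] args) {
    List<String> cols = Arrays.asList("id", "region");
    List<String> skewedCols = Arrays.asList("region");
    List<List<String>> skewedValues =
        Arrays.asList(Arrays.asList("us"), Arrays.asList("eu"));
    try {
      // Verifies the skewed columns exist, are unique, and that each
      // skewed value tuple matches the skewed column count.
      ValidationUtility.validateSkewedInformation(cols, skewedCols, skewedValues);
    } catch (SemanticException e) {
      System.err.println("invalid skew spec: " + e.getMessage());
    }
  }
}
```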
Uses of SemanticException in org.apache.hadoop.hive.ql.ppd |
---|
Methods in org.apache.hadoop.hive.ql.ppd that throw SemanticException | |
---|---|
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
ExprNodeDesc pred)
|
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
List<ExprNodeDesc> preds)
Extracts pushdown predicates from the given list of predicate expressions. |
protected ExprWalkerInfo |
OpProcFactory.DefaultPPD.mergeChildrenPred(Node nd,
OpWalkerInfo owi,
Set<String> excludedAliases,
boolean ignoreAliases)
|
protected boolean |
OpProcFactory.DefaultPPD.mergeWithChildrenPred(Node nd,
OpWalkerInfo owi,
ExprWalkerInfo ewi,
Set<String> aliases,
boolean ignoreAliases)
Takes the current operator's pushdown predicates and merges them with its children's pushdown predicates. |
Object |
OpProcFactory.ScriptPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.UDTFPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.LateralViewForwardPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.TableScanPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.FilterPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.JoinPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.ReduceSinkPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
OpProcFactory.DefaultPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprWalkerProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Converts the reference from child row resolver to current row resolver. |
Object |
ExprWalkerProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprWalkerProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
Object |
ExprWalkerProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
|
ParseContext |
PredicateTransitivePropagate.transform(ParseContext pctx)
|
ParseContext |
PredicatePushDown.transform(ParseContext pctx)
|
Uses of SemanticException in org.apache.hadoop.hive.ql.tools |
---|
Methods in org.apache.hadoop.hive.ql.tools that throw SemanticException | |
---|---|
void |
LineageInfo.getLineageInfo(String query)
Parses the given query and gathers the lineage info. |
static void |
LineageInfo.main(String[] args)
|
Object |
LineageInfo.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Implements the process method for the NodeProcessor interface. |
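A hedged usage sketch of the LineageInfo tool above (the query string is illustrative; getInputTableList/getOutputTableList are this class's accessors):

```java
import org.apache.hadoop.hive.ql.tools.LineageInfo;

public class LineageExample {
  public static void main(String[] args) throws Exception {
    String query = "INSERT OVERWRITE TABLE dst SELECT key, value FROM src";
    LineageInfo lep = new LineageInfo();
    lep.getLineageInfo(query); // parses the query; may throw SemanticException
    for (String tab : lep.getInputTableList()) {
      System.out.println("input table: " + tab);
    }
    for (String tab : lep.getOutputTableList()) {
      System.out.println("output table: " + tab);
    }
  }
}
```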
Uses of SemanticException in org.apache.hadoop.hive.ql.udf.generic |
---|
Uses of SemanticException in org.apache.hadoop.hive.ql.udf.ptf |
---|
Methods in org.apache.hadoop.hive.ql.udf.ptf that throw SemanticException | |
---|---|
static ExprNodeDesc |
MatchPath.ResultExpressionParser.buildExprNode(ASTNode expr,
TypeCheckCtx typeCheckCtx)
|
protected static RowResolver |
MatchPath.createSelectListRR(MatchPath evaluator,
PTFDesc.PTFInputDef inpDef)
|
abstract ArrayList<String> |
TableFunctionResolver.getOutputColumnNames()
|
ArrayList<String> |
NoopWithMap.NoopWithMapResolver.getRawInputColumnNames()
|
ArrayList<String> |
TableFunctionResolver.getRawInputColumnNames()
|
void |
TableFunctionResolver.initialize(HiveConf cfg,
PTFDesc ptfDesc,
PTFDesc.PartitionedTableFunctionDef tDef)
|
void |
MatchPath.SymbolParser.parse()
|
void |
Noop.NoopResolver.setupOutputOI()
|
void |
NoopWithMap.NoopWithMapResolver.setupOutputOI()
|
void |
WindowingTableFunction.WindowingTableFunctionResolver.setupOutputOI()
|
void |
MatchPath.MatchPathResolver.setupOutputOI()
Check the structure of the arguments: the first argument must be a String, followed by an even number of arguments in (String, expression) pairs; each expression must be convertible to Boolean. |
abstract void |
TableFunctionResolver.setupOutputOI()
|
void |
NoopWithMap.NoopWithMapResolver.setupRawInputOI()
|
void |
TableFunctionResolver.setupRawInputOI()
|
void |
MatchPath.ResultExpressionParser.translate()
|
|