
Oracle SQL Connector Frequently Asked Questions (FAQ) (Doc ID 1524207.1)

Last updated on NOVEMBER 03, 2019

Applies to:

Oracle SQL Connector for Hadoop Distributed File System - Version 2.0 and later
Linux x86-64

Purpose

This document provides answers to frequently asked questions about Oracle SQL Connector for Hadoop Distributed File System (HDFS).

Questions and Answers



In this Document
Purpose
Questions and Answers
 What is Oracle SQL Connector?
 What kind of data sources can be accessed by Oracle SQL Connector for HDFS?
 If using Oracle SQL Connector for HDFS, does the Cloudera CDH client have to be installed on all Exadata DB nodes?
 Where to find the documentation for Oracle SQL Connector?
 Where to get additional information?
 Is there any sample code available to use Oracle SQL Connector for HDFS?
 If an OSCH client querying a BDA Hive table using SQL*Plus gets a query result of 0 rows, but Hive/Impala returns the correct rows, what can you do?
 The Big Data Connectors User's Guide section on Installing Oracle SQL Connector for HDFS states that for RAC, the software should be placed locally on all nodes, using the same path. However, at step 8 (creating the directory) it states that the directory should be on a shared disk. What does this mean?
 When the above documentation states "Add the Hive JAR files and the Hive conf directory to the HADOOP_CLASSPATH environment variable," what does that mean?
 The above documentation refers to osch_bin_path.  What is that?
 How can $OSCH_HOME/jlib/orahdfs.jar be run without a password argument?
References
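One of the questions above concerns adding the Hive JAR files and the Hive conf directory to the HADOOP_CLASSPATH environment variable. A minimal sketch of what that typically looks like is shown below; the Hive install locations are hypothetical placeholders, so substitute the paths of your own Hive installation:

```shell
# Hypothetical install locations -- adjust to your environment.
HIVE_HOME=/usr/lib/hive
HIVE_CONF_DIR=/etc/hive/conf

# Prepend every JAR in Hive's lib directory, plus the Hive conf
# directory itself (so hive-site.xml is found), to HADOOP_CLASSPATH.
export HADOOP_CLASSPATH="$HIVE_HOME/lib/*:$HIVE_CONF_DIR:$HADOOP_CLASSPATH"
echo "$HADOOP_CLASSPATH"
```

Setting this in the shell that launches the OSCH command-line tool lets the JVM started by `hadoop` resolve the Hive classes and client configuration.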
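The last question above asks how $OSCH_HOME/jlib/orahdfs.jar can be run without a password argument. The Big Data Connectors User's Guide describes supplying credentials through an Oracle Wallet via connection properties rather than on the command line. The sketch below only assembles and prints an illustrative invocation; the OSCH_HOME path, TNS entry name, and wallet location are hypothetical placeholders, and the exact property names should be verified against the documentation for your OSCH version:

```shell
# Illustrative only: build the command as a string and print it,
# rather than invoking hadoop. All paths/names below are placeholders.
OSCH_HOME=/opt/oracle/orahdfs   # hypothetical install path
CMD="hadoop jar $OSCH_HOME/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable \
  -D oracle.hadoop.connection.tnsEntryName=orcl \
  -D oracle.hadoop.connection.wallet_location=/home/oracle/wallet \
  -D oracle.hadoop.connection.tns_admin=/home/oracle/wallet \
  -createTable"
echo "$CMD"
```

With the wallet holding the database credentials for the named TNS entry, no password needs to appear in the command line or in scripts.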
