Oracle SQL Connector Frequently Asked Questions (FAQ)
(Doc ID 1524207.1)
Last updated on NOVEMBER 03, 2019
Applies to: Oracle SQL Connector for Hadoop Distributed File System - Version 2.0 and later
This document provides answers to frequently asked questions about Oracle SQL Connector for Hadoop Distributed File System (HDFS).
In this Document
Questions and Answers
What is Oracle SQL Connector?
What kinds of data sources can be accessed by Oracle SQL Connector for HDFS?
If using Oracle SQL Connector for HDFS, does the Cloudera CDH client have to be installed on all Exadata DB nodes?
Where can the documentation for Oracle SQL Connector be found?
Where can additional information be found?
Is there any sample code available for using Oracle SQL Connector for HDFS?
If an OSCH client querying a BDA Hive table through SQL*Plus gets a query result of 0 rows, but Hive/Impala returns the correct rows, what can you do?
When the documentation states "Add the Hive JAR files and the Hive conf directory to the HADOOP_CLASSPATH environment variable," what does that mean?
The documentation refers to osch_bin_path. What is that?
How can $OSCH_HOME/jlib/orahdfs.jar be run without a password argument?
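The questions above about HADOOP_CLASSPATH and running orahdfs.jar concern standard OSCH setup. A minimal sketch of both steps, assuming a typical install with OSCH at $OSCH_HOME and Hive at $HIVE_HOME (the paths, table name, connection URL, and wallet location below are illustrative assumptions, not values from this document):

```shell
# Make the Hive JARs and the Hive conf directory visible to Hadoop,
# as the documentation instructs, by appending them to HADOOP_CLASSPATH.
export HIVE_HOME=/usr/lib/hive                 # assumed Hive install path
export OSCH_HOME=/opt/oracle/orahdfs           # assumed OSCH install path
export HADOOP_CLASSPATH="$OSCH_HOME/jlib/*:$HIVE_HOME/lib/*:$HIVE_HOME/conf:$HADOOP_CLASSPATH"

# Run the OSCH ExternalTable tool via orahdfs.jar. When database
# credentials are stored in an Oracle Wallet and its location is
# supplied as a property, no password argument is needed on the
# command line; the tool reads the credentials from the wallet.
hadoop jar "$OSCH_HOME/jlib/orahdfs.jar" \
    oracle.hadoop.exttab.ExternalTable \
    -D oracle.hadoop.exttab.tableName=MY_HIVE_EXT \
    -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//dbhost:1521/orcl \
    -D oracle.hadoop.connection.wallet_location=/home/oracle/wallet \
    -createTable
```

The wallet-based invocation is one common way to avoid a password prompt; consult the OSCH documentation for the wallet and TNS properties that apply to your version.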