
Big Data Cloud Service Frequently Asked Questions (FAQ) (Doc ID 2130171.1)

Last updated on JUNE 13, 2023

Applies to:

Big Data Cloud Service - Version 1.0 and later
Linux x86-64

Purpose

 To provide an external FAQ for Big Data Cloud Service (BDCS) issues.

Questions and Answers

 How does dcli work on DomU vs Dom0?
 Are hardware-related commands run from Dom0?
 What about disk configuration? Where is that done?
 Does Oracle BDCS use rotational HDD or SSD?
 If HDD is used, why does the command "lsblk -d -o name,rota" show ROTA as "0" instead of "1"? (See the illustrative output following this group of questions.)
 What about OS versions?
 Can there be client/edge nodes to the cluster?
 What image is on the VMs?
 What does the output from the 'mount' command look like in DomU?
 What does bdacheckhw look like in DomU? 
 What does bdachecksw look like in DomU?
 Can you run bdacli getinfo commands on DomU?
 What does "./mammoth -c" output look like?
 Can port 80 be opened for installed apps on V1 servers?
 If it is required that a new port be opened, what is the procedure to do that?
 Do firewall ports need to be open between a BDCS environment and an Active Directory server?
 If CM cannot be accessed in BDCS, what can be done?
 What if Cloudera Manager displays no status for any service?
 If it is not possible to connect using SSH or PuTTY, what can be done?
 What else is known about the BDCS network allowlist?
 Does ODBC connectivity to HiveServer2 require opening port 10000?
 In the BDCS environment, is the Hue login expected to be the same as the login to CM?
 How can you access Spark from the Cloudera Manager console in a BDCS environment?
 How is an upgrade from BDA V4.2.1 to BDA V4.4.0 done?
 Should the Hive Gateway be in an "Active" state in CM?
 How can a trusted certificate be obtained for the Oracle Cloud Cluster?
 What is the best way to move data from HDFS on a BDCS cluster to cloud storage? (See the distcp sketch following this group of questions.)
 What is the size limit for distcp copies?
 How is it possible to transfer 5 GB? Transfers seem to fail at around 4.8 GB.
 What is the best method to ensure that the data is copied without any corruption?
 When moving data from HDFS on a BDCS cluster to cloud storage, is there an expectation of a bottleneck at the destination site?
 Where can I find information on troubleshooting Swift?
 When copying to Cloud Storage, can we take advantage of the Upload CLI tool?
 Where would I get the Upload CLI Tool to evaluate it?
 Since the Upload CLI Tool is not HDFS-aware, is it possible to work around this by using FUSE/HDFS NFS Gateway?
 Can an HDFS NFS Gateway run on a node where a Linux NFS server is already running?
 In a BDCS Mammoth 4.5 environment can bdacli enable/disable be used as in the on-premise environment?
 Does this mean that in a BDCS Mammoth 4.5 environment, 'bdacli enable Kerberos' cannot be used to enable Kerberos/Sentry after provisioning without Kerberos support?
 Does this mean that in a BDCS Mammoth 4.5 environment, bdacli cannot be used to enable HDFS transparent encryption as well?
 Is the policy for installing software on BDCS the same as on BDA?
 Where can I find information about using the hadoop client to parallelize copying?
 Is Solr included in BDCS?
 Is it possible to install third party software on BDCS?
 Can Apache Zeppelin Helium be installed on a BDCS cluster?
 Is it possible to use the Rest API with Zeppelin Notebooks?
 What information is available on BDCS nodes - Permanent vs Compute?
 What information is available on adding permanent or bursting nodes to a BDCS cluster?
 Is there any problem rebooting a node in the BDCS environment?
 On BDCS where are the instructions for provisioning VPN?
 Is it possible to deploy a custom web service application on BDCS which needs to be accessed from outside the BDCS environment without enabling VPN?
 If cluster provisioning fails during setup but then completes, is there a concern that the cluster is not created correctly?
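
A note on the ROTA question above: in a Xen DomU, disks are presented as paravirtualized virtual block devices, so lsblk reports the rotational attribute of the virtual device rather than of the physical media behind it. The output below is an illustrative sketch of that general behavior on a virtualized guest, not captured from an actual BDCS node (device names will vary):

    $ lsblk -d -o name,rota
    NAME ROTA
    xvda    0
    xvdb    0

Here ROTA is "0" because the hypervisor abstracts the physical disks; the flag describes the virtual device the guest sees, not the underlying HDDs.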
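
For the HDFS-to-cloud-storage questions above, the sketch below shows the general shape of a hadoop distcp invocation against a Swift target, assuming the standard Hadoop OpenStack connector is configured; the container name ("mycontainer"), provider name ("default"), and paths are hypothetical placeholders, not values from BDCS documentation. On the ~5 GB question: OpenStack Swift by default rejects single objects larger than 5 GB, which is consistent with failures near that size; larger objects must be uploaded segmented (large-object support).

    # Copy from HDFS to a Swift container with 20 parallel map tasks.
    # Container, provider, and paths are illustrative placeholders.
    $ hadoop distcp -m 20 \
        hdfs:///user/oracle/data \
        swift://mycontainer.default/data

    # distcp's built-in CRC comparison only applies between HDFS filesystems;
    # against object storage, compare file and byte counts as a basic sanity check.
    $ hadoop fs -count /user/oracle/data
    $ hadoop fs -count swift://mycontainer.default/data
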
 Questions about Kerberos Principals
 Should the hdfs/clustername and oracle/clustername principals be created by default in a BDCS cluster?
 Is it safe to create either oracle/clustername or hdfs/clustername if they are required? 
 What about the principal oracle@<REALM_NAME> which is shown with listprincs on a BDCS 4.7.3 cluster?  
 If an hdfs principal is created manually does it lack any functionality that the Cloudera created hdfs principals have?
 If the CM password for this principal, hdfs/<CLUSTERNAME>@BDACLOUDSERVICE.ORACLE.COM, is not working can a new one be issued?
 Is it possible to kinit Cloudera Manager-created principals with the CM admin password in a BDCS environment? (See the keytab sketch following this group of questions.)
 In this environment is there a default password for the oracle user?
 Is installing Docker on BDCS nodes supported?
 Is it possible then to install OL7 on BDCS clusters?
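
On the kinit question above: Cloudera Manager generates its principals with randomized keys, so authenticating to them with the CM admin password is not expected to work; the usual pattern is to extract a keytab and kinit with that. A minimal sketch, run on the KDC node, using a hypothetical cluster name (mycluster) and keytab path; note that kadmin's xst re-randomizes a principal's keys by default, so this is safest for manually created principals rather than CM-managed ones:

    # Create a principal with a random key and export it to a keytab
    # (principal name and keytab path are illustrative placeholders).
    $ sudo kadmin.local -q "addprinc -randkey hdfs/mycluster@BDACLOUDSERVICE.ORACLE.COM"
    $ sudo kadmin.local -q "xst -k /tmp/hdfs.keytab hdfs/mycluster@BDACLOUDSERVICE.ORACLE.COM"

    # Authenticate with the keytab instead of a password, then verify the ticket.
    $ kinit -kt /tmp/hdfs.keytab hdfs/mycluster@BDACLOUDSERVICE.ORACLE.COM
    $ klist
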
 Questions about ODI on BDCS
 Is the ODI Agent already deployed on the BDCS cluster? If not, on which node should WebLogic + ODI Agent be installed?
 Does the Agent have to be deployed on all nodes if Hive and HDFS are going to be used? Or is deploying it on one node sufficient?
 Does BDCS include an Oracle DB in which the metadata repository can be created?
 Can the ODI that comes with BDCS be upgraded to a newer version, e.g. 12.2.1.2.6?
 Is there any note, technical brief, or how-to on tuning the MySQL database that comes with BDCS? (See the illustrative settings following this list.)
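
On the MySQL tuning question above: no BDCS-specific tuning document is cited here, but generic MySQL/InnoDB tuning for a Cloudera Manager and Hive metastore database usually starts with the InnoDB buffer pool and connection limits. The values below are illustrative placeholders to be sized against the node's memory, not Oracle-recommended settings for BDCS:

    # /etc/my.cnf excerpt (illustrative values only)
    [mysqld]
    innodb_buffer_pool_size = 4G
    max_connections         = 550
    innodb_log_buffer_size  = 64M

    # Verify the running values after restarting MySQL:
    $ mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"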