
Upgrading Oracle Big Data Appliance Cluster to V4.10 with Mammoth (Software Deployment Bundle) Release V4.10 Frequently Asked Questions (FAQ) (Doc ID 2321625.1)

Last updated on JANUARY 09, 2022

Applies to:

Big Data Appliance Integrated Software - Version 4.6.0 to 4.10.0 [Release 4.6 to 4.10]
Linux x86-64


This document provides answers to frequently asked questions on how to upgrade an Oracle Big Data Appliance cluster to Oracle Big Data Appliance 4.10 using Mammoth (Software Deployment Bundle). 


Questions and Answers


 Is Oracle Big Data Appliance 4.10 the Last Release to Support Upgrades for Oracle Linux 5 Clusters and to Support Oracle Linux 5?
 When performing an upgrade to Oracle Big Data Appliance (BDA) V4.10, is it OK to be in Maintenance Mode to avoid Cloudera Manager upgrade-related alerts?
 If the BDA has been changed from its original layout will that be a problem during upgrade?
 What exactly is the rolling part of the rolling upgrade?
 Does the cluster need to be in "Good Health" before upgrading to Oracle Big Data Appliance (BDA) V4.10?
 What happens if ssh login of the 'oracle' user to BDA nodes has been disabled for security purposes?
 Will CM Host Inspector "warnings" impact upgrade?
 What Hadoop (CDH) version is on Oracle Big Data Appliance (BDA) V4.10?
 What does Cloudera 5.12.1 Enterprise include?
 Does BDA V4.10 continue to support security options for CDH Hadoop clusters?
 Are any new parcels included with BDA V4.10?
 Are there any new recommendations regarding using Microsoft Active Directory to configure a cluster?
 Does the BDA upgrade to 4.10 impact existing Sentry and Kerberos configurations?
 On BDA V4.10 can I independently upgrade the R release and the Oracle R Advanced Analytics for Hadoop release?
 On BDA V4.10 can I independently upgrade the Oracle Distribution for R?
 On BDA V4.10 can we upgrade Solr to version 6.6?
 On BDA 4.10 with CDH 5.12.1 can the Solr version be upgraded to Apache Solr 5.5.2?
 On any CDH version is it possible to upgrade to a higher version of Solr (for example Apache Solr 5.5.2)?
 Regarding Big Data SQL, will the Exadata prerequisites change in BDA V4.20 from previous releases?
 What version of NoSQL is on Oracle Big Data Appliance (BDA) V4.10?
 Are there still two separate bundles for BDA 4.10, one for OL 6 and one for OL 5?
 Are there any special steps to be aware of before upgrading an OL5 cluster that has had HTTPS enabled for Cloudera Manager?
 How many zip files does the Mammoth deployment have in BDA V4.10?
 Is a BDA Base Image being released for V4.10.0?
 Each Mammoth bundle consists of four zip files.  Can the second zipfile be used for reimaging as in the past?
 What new features does Oracle Big Data Appliance 4.10.0 include? 
 What about Oracle Big Data SQL 3.2 New Features?
 What about Scripts for Download & Configuration of Apache Zeppelin, Jupyter Notebook, and RStudio?
 What about improved Configuration of Oracle's R Distribution and ORAAH?
 What is new with Node Migration?
 What about support for extending secure NoSQL DB clusters?
 What else is new in BDA V4.10?
 Is the UEK kernel upgraded?
 After an upgrade are the new kernel files copied into the Internal USB drive(/dev/sdm) automatically?
 Is Java upgraded in BDA V4.10?
 Regarding HTTPS/Network Encryption, how is that enabled after upgrade?
 What about upgrading with HDFS transparent encryption, KMS Proxy Servers, and Key Trustee Servers?
 Do CM, Hue and Oozie use HTTPS security by default?
 Is there an interactive bdscli utility on the BDA?
 Is Oracle Big Data Discovery supported on BDA V4.10 at this time?
 Are there any changes for installing/uninstalling the Enterprise Manager plugin on BDA 4.10?
 Can an upgrade to BDA V4.10 be performed from any previous BDA version?
 Does this mean that an upgrade from a pre-V4.1 release cannot go directly to BDA V4.10.0?
 Does this mean that an upgrade from V2.4 will require a multi-step upgrade path, for example?
 If multiple upgrades are required to get to BDA 4.10 does the critical metadata need to be backed up each time?
 Will there be any problem upgrading BDA V4.6 with CDH 5.8.2/5.8.3/5.8.4 to BDA V4.10.0?
 Will there be any problem upgrading any BDA version with a supported CDH version higher than the default to BDA V4.10?
 In BDA V4.10 is TLS 1.0 (TLS v1) disabled by default?
 During or after upgrade to BDA V4.10.0 can you rollback to a previous BDA version?
 Is it possible to dual boot a starter rack to have BDA V4.9 and V4.10 by using one system disk for BDA V4.9 and one for BDA V4.10?
 What about upgrading a CDH client, will that also require a two step approach?
 The BDA servers are at CDH 5.12.1. Is it a problem if client/edge servers are running CDH 5.12.<x>?
 Are all nodes of the cluster, however, required to have the exact same parcel version?
 Does BDA V4.10 support the BdaDeploy.json file?
 What version of the Oracle BDA Configuration Generation Utility is required for this release?
 Is Spark-on-YARN configured on the BDA V4.10?
 If a cluster is running Standalone Spark, what will the Mammoth upgrade do since Spark-on-YARN is now configured in BDA V4.10? 
 What about Impala and Search, are they configured on the BDA V4.10?
 If Impala Llama was manually configured will that impact upgrade to BDA V4.10?
 Where is the list of known CDH 5 issues?
 If the Mammoth upgrade fails, do I have to create an SR and upload the diagnostic bundle generated?
 Where is the BDA V4.10 documentation?
 What is the direct link to the documentation "New features" section?
 Are there any deprecated features?
 Where is the Big Data Connector's documentation?
 What are the Oracle Big Data Connectors (BDC) version changes?
 Is the Oracle Data Integrator Application Adapter for Hadoop upgraded during a BDA upgrade to 12.2.1?
 What happens if you need to upgrade the Oracle Data Integrator Application Adapter for Hadoop to a version higher than 12.2.1 or 12.1.3?
 What is the impact on upgrade of installing standalone ODI and then enabling Big Data Connectors (BDC)?
 What is the impact on upgrade if there are two separate ODI agents running on the same BDA one maintained by Mammoth and one standalone?
 If a patch is installed, will that be impacted by upgrade?
 Is the CDH Deployment still parcel based?
 When upgrading to BDA V4.10 do any one-off patches need to be applied?
 BDA V4.10 includes CDH 5.12.1.  Is it possible to upgrade to a higher CDH version?
 What about patching after the upgrade now that BDA V4.10 is parcel based?
 Is the expectation that the upgrade removes R packages or Connectors?
 When upgrading to BDA V4.10 if the cluster verification step fails can Mammoth be rerun?
 Is it possible to run ./mammoth -c in the middle of an upgrade to see how things look?
 How can the last step run by Mammoth be found?
 If Mammoth upgrade or install fails, how is it possible to find the step to resume from?
 If there is a problem with a Mammoth step on upgrade or install is it possible to skip it?
 What can be done to resolve "Acquiring installation lock"  errors when adding a client/edge node back into a CDH cluster after upgrade?
 Can the client hostnames and/or private InfiniBand IP addresses be changed during the upgrade?
 After upgrade the user 'bdatestuser' was not available. Why is that?
 After upgrade there is a problem using the 'hdfs' principal when AD Kerberos is enabled. Why is that?
 What is the recommended way to integrate Kerberos with Active Directory (AD) on the BDA?
 In the case of multiple clusters, should the VLANs be the same in all clusters, or can they be different in every cluster?
 Now that 3-node clusters are supported, is it possible to have a 3-node production cluster?
 Where is the Cloudera documentation on CDH 5.12?
 Where is the Cloudera documentation on CDH 5.12.1 Known Issues?
 If MS Active Directory is in trust with MIT Kerberos, is there a way to block users from connecting and running jobs or batch updates during an upgrade?
 What about MegaCli64, any changes?
 Is a BDA upgrade possible if one of the servers is down?
 If one of the ILOMs is down is it possible that a BDA upgrade can proceed?
 Is it possible to upgrade the ILOM to the latest version outside of Mammoth?
 Questions/Answers on handling predictive disk failures before/during upgrade.
 If any disks exhibit "other" errors, fail, or get into a failing (predictive error) state right before or during upgrade, what problems will be faced?
 What about a disk that fails in the middle of upgrade?
 Should we remove any failing disks from CM from the DataNode Data Directory, i.e. remove the mount /u0x/hadoop/dfs from there?
 What is a summary of the failed/failing disk issue and upgrade?
 Can you expand a cluster if one of the new nodes has a bad disk?
 If MySQL replication is not working on the MySQL slave (Node 2 by default), is it OK to proceed with the upgrade?
 If a MySQL slave is in the process of a mysql import can upgrade proceed?
 If a cluster expansion is underway and MySQL replication is observed to be off, can it be enabled during the expansion?
 In BDA V4.10 what are the restrictions on extending a cluster?
 Will changing the root password after install is complete affect the system?
 Is it possible to have different root passwords on different nodes prior to upgrade?
 If upgrading with AD Kerberos, why would there appear to be no user: hdfs@REALM.NAME?
 If Sentry is configured and Kerberos is not configured, is it ok to upgrade?
 Does the Mammoth upgrade manage Kafka?
 Does the Mammoth upgrade manage Flume?
 The ASR password is requested on upgrade. What happens if it is unavailable?
 If HUE HA was enabled following Instructions to Setup HUE High Availability on Oracle Big Data Appliance Version 4.4 (Doc ID 2116806.1), is there any impact on a future Mammoth upgrade?
 What is the correct firmware for the PDU, Cisco switches, and InfiniBand switches (spine/leaf) for Mammoth 4.10?
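
Several of the questions above concern parcel versions on cluster versus client/edge nodes. As an illustrative sketch only — the rule that cluster nodes need an exact parcel match while a client/edge node only needs the same CDH maintenance line (e.g. CDH 5.12.x) is an assumption inferred from the question wording, not taken from the official answers — a version comparison might look like:

```python
def parcel_compatible(cluster_parcel: str, edge_parcel: str) -> bool:
    """Illustrative check: a client/edge node is assumed compatible if
    it is on the same CDH maintenance line (major.minor) as the
    cluster's parcel; cluster nodes themselves require an exact match."""
    # Full parcel versions look like "5.12.1-1.cdh5.12.1.p0.3";
    # compare only the leading release numbers.
    def release(version: str) -> list:
        return version.split("-")[0].split(".")

    cluster, edge = release(cluster_parcel), release(edge_parcel)
    return cluster[:2] == edge[:2]  # major.minor must match

print(parcel_compatible("5.12.1-1.cdh5.12.1.p0.3", "5.12.2"))  # True
print(parcel_compatible("5.12.1-1.cdh5.12.1.p0.3", "5.8.4"))   # False
```

Treat this as a sketch of the comparison logic, not as Oracle's supported compatibility rule.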
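
The predictive disk failure questions above reference checks that are commonly done with MegaCli64. A hedged sketch of scanning `MegaCli64 -PDList -aALL` output for disks reporting a non-zero Predictive Failure Count — the field names used here are as they appear in typical MegaCli output, so verify them against your controller's actual output:

```python
def failing_slots(pdlist_text: str) -> list:
    """Scan `MegaCli64 -PDList -aALL` output and return the slot
    numbers of disks reporting a non-zero Predictive Failure Count."""
    failing = []
    slot = None
    for line in pdlist_text.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Slot Number":
            slot = value
        elif key == "Predictive Failure Count" and value != "0":
            failing.append(slot)
    return failing

sample = """\
Slot Number: 3
Media Error Count: 0
Predictive Failure Count: 2
Slot Number: 4
Media Error Count: 0
Predictive Failure Count: 0
"""
print(failing_slots(sample))  # ['3']
```

Any disk this flags before an upgrade is a candidate for the failed/failing-disk questions above; replacing such disks before starting Mammoth avoids mid-upgrade surprises.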
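
For the MySQL replication questions above, a minimal sketch of inspecting `SHOW SLAVE STATUS\G` output from the slave (Node 2 by default) before proceeding. The health criteria chosen here — both replication threads running and a reported lag value — are an assumption for illustration, not Oracle's documented pre-upgrade check:

```python
def replication_healthy(show_slave_status_text: str) -> bool:
    """Parse the output of `SHOW SLAVE STATUS\\G` (run on the MySQL
    slave) and report whether replication looks safe to proceed with."""
    fields = {}
    for line in show_slave_status_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (
        fields.get("Slave_IO_Running") == "Yes"
        and fields.get("Slave_SQL_Running") == "Yes"
        and fields.get("Seconds_Behind_Master", "NULL") != "NULL"
    )

sample = """\
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
"""
print(replication_healthy(sample))  # True when both threads are running
```

If this returns False, the replication questions above suggest investigating the slave before starting an upgrade or expansion rather than proceeding with replication off.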
