After Mammoth Upgrade DataNodes are in Bad Health with Cloudera Manager "Block Count" Errors (Doc ID 2523491.1)

Last updated on NOVEMBER 08, 2019

Applies to:

Big Data Appliance Integrated Software - Version 4.10.0 and later
Linux x86-64

Symptoms

Immediately after a Mammoth upgrade, all DataNodes in the cluster are in bad health with Cloudera Manager "Block Count" errors like:

Block Count
The DataNode has <x million> blocks. Critical threshold: 850,000 block(s).
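
One quick way to see the per-DataNode block counts behind this alert is to query the NameNode JMX endpoint. The sketch below is a minimal, illustrative example and makes assumptions not stated in this note: that the NameNode web UI listens on port 50070 and that the NameNodeInfo bean exposes a "LiveNodes" JSON string containing a "numBlocks" field per DataNode. Adjust the host, port, and field names for the local cluster.

#!/usr/bin/env python
# Minimal sketch (assumptions noted above): list per-DataNode block counts as
# reported by the NameNode JMX endpoint and flag nodes over the CM threshold.
import json
import urllib2  # Python 2; on Python 3 use urllib.request instead

NAMENODE_JMX = "http://<NAMENODE_HOST>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
THRESHOLD = 850000  # Cloudera Manager "Block Count" critical threshold from the alert

beans = json.load(urllib2.urlopen(NAMENODE_JMX))["beans"]
live_nodes = json.loads(beans[0]["LiveNodes"])  # JSON string keyed by DataNode
for node, info in sorted(live_nodes.items()):
    blocks = info.get("numBlocks", "n/a")
    flag = "OVER THRESHOLD" if isinstance(blocks, int) and blocks > THRESHOLD else ""
    print("%-45s %12s blocks %s" % (node, blocks, flag))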

1. The DataNode logs on all nodes show errors like:

<TIMESTAMP> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
<HOSTNAME>.<DOMAIN>:1004:DataXceiver error processing WRITE_BLOCK operation src: /<PRIVATE_IP_HOST>:<PORT> dst: /<PRIVATE_IP_HOSTx>:1004
java.io.IOException: Not ready to serve the block pool, BP-<##>-<PRIVATE_IP_HOSTy>-<##>.

2. The upgrade appears to have added more than a million blocks to each DataNode.
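
The "Not ready to serve the block pool" message is generally logged while a DataNode is still initializing or re-registering its block pool after a restart, so counting the occurrences and noting the first and last timestamps in each DataNode log gives a rough picture of how long each node took to settle after the upgrade. The sketch below is illustrative only; the default log path is an assumption and depends on the local log4j/Cloudera Manager configuration.

#!/usr/bin/env python
# Minimal sketch: count "Not ready to serve the block pool" errors in a DataNode
# log and report the first/last timestamps seen. The default path is an assumption;
# pass the real DataNode log file as the first argument.
import re
import sys

log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/hadoop-hdfs/DATANODE.log.out"
ts_pattern = re.compile(r"^(\d{4}-\d{2}-\d{2} \S+)")
err_phrase = "Not ready to serve the block pool"

count, first, last, current_ts = 0, None, None, None
with open(log_path) as fh:
    for line in fh:
        m = ts_pattern.match(line)
        if m:
            current_ts = m.group(1)  # remember the most recent timestamped line
        if err_phrase in line:
            # The exception text often appears on a continuation line, so use
            # the timestamp carried over from the preceding ERROR line.
            count += 1
            last = current_ts
            first = first or current_ts

print("%s: %d '%s' errors" % (log_path, count, err_phrase))
if count:
    print("first: %s   last: %s" % (first, last))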

Cause
