When HDFS Component Startup Fails, FATAL Log Entries Are Not Extracted to CM Events
(Doc ID 2373433.1)
Last updated on DECEMBER 05, 2021
Applies to:
Big Data Appliance Integrated Software - Version 4.6.0 and later
Linux x86-64
Symptoms
To extract FATAL log entries from log files into Cloudera Manager (CM) Events, "Rules to Extract Events from Log Files" was configured for the HDFS components (NameNode, DataNode, etc.).
However, if a component fails to start with "Address already in use", the FATAL error is not extracted into the CM Events.
Below are the steps to reproduce the issue. The test uses a DataNode which fails to start with "Address already in use".
1. Check the "Rules to Extract Events from Log Files" setting for DataNode.
In CM, navigate to: HDFS -> Configuration -> Search for "Rules to Extract Events from Log Files" and confirm the rule setting for "DataNode Default Group". Make sure that the Alert checkbox is checked and the Threshold is set to "Fatal".
2. Save the setting (Save Changes) and restart Stale Services from CM.
3. Stop a DataNode from one cluster node.
4. Execute the following command on the node where the DataNode was stopped so that the DataNode port is already in use when the role is started again (one way to do this is sketched after this list).
7. Check the CM Events by navigating to Diagnostics -> Events: no FATAL alert is extracted from the above log file (an API-based way to list the events is also sketched after this list).
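The following is a minimal sketch of one way to occupy the DataNode port and drive the "Address already in use" failure, assuming nc (netcat) is available on the node and that the DataNode data transfer port is 9866 (50010 on older releases, 1004 on a Kerberized cluster). The port number and the log path are assumptions; adjust them to the actual cluster configuration.

# Hold the DataNode data transfer port open so that a subsequent start fails
# with "Address already in use" (port 9866 is an assumption).
nc -l 9866 &

# Start the stopped DataNode from CM (HDFS -> Instances -> select the DataNode -> Start).
# The start fails and the role log on this host records the conflict as a FATAL entry.
# The path below is an assumption; CM-managed role logs are typically under /var/log/hadoop-hdfs/.
grep -i "FATAL" /var/log/hadoop-hdfs/*DATANODE*.log.out

# Release the port when the test is finished.
kill %1

Even though the FATAL entry is present in the DataNode log, the corresponding event does not appear under Diagnostics -> Events, which is the behavior described in this note.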
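Besides the Diagnostics -> Events page, events can also be listed through the Cloudera Manager REST API. The sketch below is only an illustration: the host name, port 7180, API version (v19), and admin credentials are placeholders and depend on the CM release and deployment.

# List recent CM events over the REST API and look for FATAL text in the output.
# Host, port, API version, and credentials are placeholders.
curl -s -u admin:admin "http://cm-host.example.com:7180/api/v19/events" | grep -i "FATAL"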
Changes
N/A
Cause