After Upgrade to BDA V4.5 DataNodes Fail to Start with: "org.apache.hadoop.hdfs.server.common.Storage: Storage directory [DISK]file:/u01/hadoop/dfs/ has already been used" (Doc ID 2170965.1)

Last updated on AUGUST 11, 2016

Applies to:

Big Data Appliance Integrated Software - Version 4.1.0 and later
Linux x86-64

Symptoms

After upgrading to BDA V4.5/CDH 5.7.0 from BDA V4.1.0/CDH 5.3.2, a DataNode fails to start. Instead, the error below is raised in the DataNode log:

2016-08-06 08:27:16,683 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-1575072498-1**.***.**.**-1447645056196 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /u02/hadoop/dfs/current/BP-1575072498-1**.***.**.**-1447645056196
2016-08-06 08:27:16,683 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory [DISK]file:/u01/hadoop/dfs/ has already been used.
2016-08-06 08:27:16,699 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1575072498-1**.***.**.**-1447645056196
2016-08-06 08:27:16,699 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to analyze storage directories for block pool BP-1575072498-1**.***.**.**-1447645056196
java.io.IOException: BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /u01/hadoop/dfs/current/BP-1575072498-1**.***.**.**-1447645056196
  at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:212)
  at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:244)
  at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
  at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1394)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1355)
  at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
  at java.lang.Thread.run(Thread.java:745)
2016-08-06 08:27:16,699 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-1575072498-1**.***.**.**-1447645056196 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /u01/hadoop/dfs/current/BP-1575072498-1**.***.**.**-1447645056196
2016-08-06 08:27:16,700 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to bda1node01.example.com/1**.***.**.**:8022. Exiting.
java.io.IOException: All specified directories are failed to load.
  at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1394)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1355)
  at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
  at java.lang.Thread.run(Thread.java:745)


Cause
