After upgrade to BDA 4.5.0 Oozie Cluster Verification Check Pig Job Fails with "at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded" (Doc ID 2153876.1)

Last updated on JUNE 25, 2016

Applies to:

Big Data Appliance Integrated Software - Version 4.5.0 and later
Linux x86-64

Symptoms

After upgrading to BDA 4.5.0, the Oozie cluster verification test fails on the pig job. This can be reproduced by running the Oozie cluster verification test standalone as described in How to Run the Oozie Cluster Verification Tests Standalone on the BDA (Doc ID 2018885.1).

The failed Oozie workflow test output looks like:

------------------------------------------------------------------------------
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID                                                  Status    Ext ID                      Ext Status       Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000000-160527165300951-oozie-oozi-W@:start:        OK        -                           OK               -
------------------------------------------------------------------------------------------------------------------------------------
0000000-160527165300951-oozie-oozi-W@pig-node       ERROR     job_1464360578138_0003      FAILED/KILLED    -
------------------------------------------------------------------------------------------------------------------------------------
0000000-160527165300951-oozie-oozi-W@cleanup-node   OK        -                           OK               -
------------------------------------------------------------------------------------------------------------------------------------
0000000-160527165300951-oozie-oozi-W@fail           OK        -                           OK               E0729
------------------------------------------------------------------------------------------------------------------------------------
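
If it is more convenient, the same per-action summary shown above can also be pulled with the Oozie Java client. The sketch below is only an illustration and is not part of the verification test; the class name, Oozie server URL and workflow ID are placeholders to be replaced with the values from your cluster:

import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowAction;
import org.apache.oozie.client.WorkflowJob;

public class PrintOozieActions {
    public static void main(String[] args) throws Exception {
        // Placeholder Oozie URL and workflow ID -- substitute your own values.
        OozieClient oozie = new OozieClient("http://<oozie-node>:11000/oozie");
        WorkflowJob job = oozie.getJobInfo("0000000-160527165300951-oozie-oozi-W");

        // One line per workflow action: ID, status, external (YARN) ID and status, error code.
        for (WorkflowAction action : job.getActions()) {
            System.out.printf("%s %s %s %s %s%n",
                    action.getId(), action.getStatus(), action.getExternalId(),
                    action.getExternalStatus(), action.getErrorCode());
        }
    }
}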

 

From the HUE Job Browser, the failed task attempt shows the following stack trace:

at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:99)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:136)
at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.init(MapTask.java:836)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:447)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:388)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:302)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:187)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:230)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
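
The frames at the top of the trace (SnappyCodec.checkNativeCodeLoaded and getCompressorType) fail when the native Hadoop Snappy libraries are not visible to the task JVM. As a rough diagnostic only, the same check can be exercised directly against the Hadoop API on an affected node; the class below is a minimal sketch with an assumed name, to be compiled and run with the cluster's Hadoop classpath:

import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.NativeCodeLoader;

public class CheckSnappyNative {
    public static void main(String[] args) {
        // True only if libhadoop.so was found on java.library.path.
        boolean hadoopNative = NativeCodeLoader.isNativeCodeLoaded();
        System.out.println("hadoop native loaded: " + hadoopNative);

        // buildSupportsSnappy() is a native method, so only call it when
        // libhadoop.so itself was loaded.
        if (hadoopNative) {
            System.out.println("built with snappy:    " + NativeCodeLoader.buildSupportsSnappy());
        }

        // Throws the same RuntimeException seen in the failed task attempt
        // when native Snappy compression is not usable by this JVM.
        SnappyCodec.checkNativeCodeLoaded();
        System.out.println("native Snappy compression is usable");
    }
}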

 

Cause
