Oracle Loader For Hadoop Job Fails With Java.lang.OutOfMemoryError: GC Overhead Limit Exceeded
(Doc ID 1966732.1)
Last updated on NOVEMBER 08, 2022
Applies to:
Oracle Loader for Hadoop - Version 3.2.0 and later
Information in this document applies to any platform.
Symptoms
While using the Oracle Loader for Hadoop (OLH) connector to create Oracle Data Pump files in HDFS for Hive tables, the OLH job fails with the following error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDDLFromFieldSchema(MetaStoreUtils.java:494)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getSchema(MetaStoreUtils.java:711)
at oracle.hadoop.loader.lib.input.HiveInputSplit.&lt;init&gt;(HiveInputSplit.java:166)
at oracle.hadoop.loader.lib.input.HiveInputSplit.&lt;init&gt;(HiveInputSplit.java:232)
at oracle.hadoop.loader.lib.input.HiveInputSplit.&lt;init&gt;(HiveInputSplit.java:253)
at oracle.hadoop.loader.lib.input.HiveToAvroInputFormat.addSplits(HiveToAvroInputFormat.java:630)
at oracle.hadoop.loader.lib.input.HiveToAvroInputFormat.getSplits(HiveToAvroInputFormat.java:427)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1079)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1096)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:177)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:995)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
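Note that every frame of the trace sits inside `JobClient.submitJobInternal`: the `OutOfMemoryError` is raised on the client side while `HiveToAvroInputFormat.getSplits` builds the input splits, before any map task starts. A common general mitigation for this class of client-side GC overhead error, shown only as a hedged sketch and not as the documented solution of this note (the Cause and Solution sections require a My Oracle Support sign-in), is to enlarge the heap of the client JVM that submits the job. The `$OLH_HOME` variable and `myconf.xml` file below are placeholders for illustration:

```shell
# Sketch of a common mitigation, NOT the documented MOS solution:
# the OutOfMemoryError occurs in the submitting client JVM while it
# computes input splits, so raise that JVM's heap via HADOOP_CLIENT_OPTS.
export HADOOP_CLIENT_OPTS="-Xmx4g"

# Resubmit the OLH job; $OLH_HOME and myconf.xml are illustrative names.
hadoop jar "$OLH_HOME/jlib/oraloader.jar" oracle.hadoop.loader.OraLoader \
  -conf myconf.xml
```

`HADOOP_CLIENT_OPTS` affects only the local `hadoop` command's JVM, which is the process that fails here; mapper and reducer heap sizes are configured separately and are not implicated by this trace.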
Cause
To view full details, sign in with your My Oracle Support account.
In this Document
  Symptoms
  Cause
  Solution
  References