"ODIKM-SPARK-SYNC-10000: EKM Command Failed with Exception: java.lang.Exception: see Details and URL for more information. Rerunning as standalone or yarn-client may also provide more information" Error Thrown when Calling Spark Python Script from ODI 12c (Doc ID 2532964.1)

Last updated on MARCH 12, 2021

Applies to:

Oracle Data Integrator - Version 12.2.1.3.0 and later
Information in this document applies to any platform.

Symptoms

NOTE: In the example below, all details represent a fictitious sample (based upon made-up data). Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

When attempting to run a command using the Oracle Data Integrator (ODI) 12c OdiOSCommand in a Package with a command similar to the following:

/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/spark/bin/spark-submit --master yarn-client --deploy-mode cluster --py-files /tmp/pyspark_ext.py --executor-memory 1G --verbose --driver-memory 512M --executor-cores 1 --driver-cores 1 --num-executors 2 --queue default /tmp/Load_Hivetohive_Spark_Physical.py

... the following errors occur:

ODI-1590: The execution of the script failed.
ODIKM-SPARK-SYNC-10000: EKM Command Failed with Exception: java.lang.Exception: see Details and URL for more information. Rerunning as standalone or yarn-client may also provide more information
ODI-30038: OS command returned 1. Error details are [WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Using properties file: /opt/cloudera/parcels/CDH/lib/spark/conf/spark-defaults.conf
Adding default property: spark.lineage.log.dir=/var/log/spark/lineage
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer

  == and ==

ODI-1590: The execution of the script failed.
ODIKM-SPARK-SYNC-10000: EKM Command Failed with Exception: java.lang.Exception: Traceback (most recent call last):
  File "/tmp/Testing_t1_t2_spark_Physical.py", line 25, in
    sparkVersion = sparkVersionNum(sc.version)
  File "/tmp/pyspark_ext.py", line 28, in sparkVersionNum
    return reduce(lambda sum, elem: sum*10 + elem, map(lambda x: int(x) if x.isdigit() else 0, v.split('.')), 0)
NameError: name 'reduce' is not defined
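The NameError in the traceback is consistent with the script being run under Python 3, where the built-in reduce() of Python 2 was moved into the functools module. The sketch below reproduces the sparkVersionNum logic from pyspark_ext.py with the missing import added; the standalone function name spark_version_num and the sample version string are illustrative only, not confirmed by this document's (gated) Solution section:

```python
from functools import reduce  # in Python 3, reduce is no longer a builtin


def spark_version_num(v):
    """Collapse a dotted version string such as '2.3.0' into an integer (230)
    by folding each numeric component into an accumulator."""
    return reduce(lambda acc, elem: acc * 10 + elem,
                  map(lambda x: int(x) if x.isdigit() else 0, v.split('.')),
                  0)


print(spark_version_num("2.3.0"))
```

Under Python 2 the original one-liner works unchanged; under Python 3 it raises the NameError shown above until the functools import is in place.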

Changes

 

Cause


