Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.1.0, 1.2.0
- Fix Version/s: None
Description
If JAVA_HOME points to a JRE instead of a JDK, e.g.
JAVA_HOME=/usr/lib/jvm/java-7-oracle/jre/
instead of
JAVA_HOME=/usr/lib/jvm/java-7-oracle/
then start-thriftserver.sh will fail with Datanucleus JAR errors:
```
Caused by: java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at javax.jdo.JDOHelper$18.run(JDOHelper.java:2018)
    at javax.jdo.JDOHelper$18.run(JDOHelper.java:2016)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.forName(JDOHelper.java:2015)
    at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1162)
```
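A quick way to confirm the misconfiguration is to look for the jar tool itself, since it ships with a JDK but not a JRE (the paths below match the example above):

```sh
# Present under a JDK-style JAVA_HOME:
ls /usr/lib/jvm/java-7-oracle/bin/jar
# Missing under the JRE subdirectory:
ls /usr/lib/jvm/java-7-oracle/jre/bin/jar   # No such file or directory
```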
The root problem seems to be that compute-classpath.sh uses JAVA_HOME to find the path to the `jar` command, which is present in a JDK but not in a JRE directory. This leads to silent failures when adding the Datanucleus JARs to the classpath (see the sketch below).
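For illustration, here is a minimal sketch of that failure mode. This paraphrases the relevant compute-classpath.sh logic rather than quoting it verbatim, and `ASSEMBLY_JAR` and `datanucleus_jars` stand in for values the real script computes:

```sh
# Derive the jar tool's path from JAVA_HOME; under a JRE-style JAVA_HOME
# this path does not exist.
if [ -n "$JAVA_HOME" ]; then
  JAR_CMD="$JAVA_HOME/bin/jar"
else
  JAR_CMD="jar"
fi

# stderr is discarded, so a missing $JAR_CMD yields empty output instead of
# a visible error, and the Datanucleus JARs are quietly left off the classpath.
hive_files=$("$JAR_CMD" -tf "$ASSEMBLY_JAR" org/apache/hadoop/hive/ql/exec 2>/dev/null)
if [ -n "$hive_files" ]; then
  CLASSPATH="$CLASSPATH:$datanucleus_jars"
fi
```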
This same issue presumably affects the check for whether Spark was built with Java 7 but is being run on Java 6, since that check relies on the same `jar` command (a sketch of it follows).
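As I recall it, that check also runs `$JAVA_HOME/bin/jar` against the assembly, so it fails in the same silent way under a JRE. Treat the following as an approximate sketch under that assumption, not a verbatim quote of the script:

```sh
JAR_CMD="$JAVA_HOME/bin/jar"   # as above; missing under a JRE

# A Java 6 jar tool cannot read the zip64 archives a Java 7 build can
# produce and reports "invalid CEN header", which the script greps for.
# If $JAR_CMD itself is missing, this check silently does nothing.
jar_error_check=$("$JAR_CMD" -tf "$ASSEMBLY_JAR" nonexistent/class/path 2>&1)
if [[ "$jar_error_check" =~ "invalid CEN header" ]]; then
  echo "The Spark assembly was likely built with Java 7 but run with Java 6." 1>&2
  exit 1
fi
```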
We should probably add error handling that checks whether the `jar` command is actually present and fails with a clear message otherwise (a sketch below), and also update the documentation to state that `JAVA_HOME` must point to a JDK, not a JRE.
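A minimal sketch of such a guard; this is a hypothetical illustration, not a committed patch:

```sh
# Fail fast when the jar tool is missing: it ships with the JDK, not the JRE.
if [ -n "$JAVA_HOME" ]; then
  JAR_CMD="$JAVA_HOME/bin/jar"
else
  JAR_CMD="jar"
fi

if ! command -v "$JAR_CMD" >/dev/null 2>&1; then
  echo "'jar' was not found at '$JAR_CMD'." 1>&2
  echo "JAVA_HOME must point to a JDK, not a JRE." 1>&2
  exit 1
fi
```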