I'm building an Apache Spark application with Spark Hive. So far everything was OK - I've been running the tests and the whole application in IntelliJ IDEA, and all tests together…
I have found a solution to my problem. It is described in the answer to the original question about DataNucleus in an executable jar: https://stackoverflow.com/a/27030103/6390361
1. Edit MANIFEST.MF so that it pretends to be the DataNucleus OSGi bundle. This can be done by adding the Bundle-SymbolicName and Premain-Class entries from the datanucleus-core manifest.
2. Create a plugin.xml file on your classpath (resources folder) and use the root plugin tag from the datanucleus-core project.
3. Put all extension-point tags from datanucleus-core and datanucleus-rdbms at the beginning of the plugin tag. All extension points from the RDBMS project have to be prefixed with store.rdbms. This is very important because DataNucleus uses fully qualified IDs that include the id from the root plugin tag.
4. Merge all extension tags from the datanucleus-core, datanucleus-rdbms and datanucleus-api-jdo projects and put them after the extension points. Be careful: some extensions are present in more than one project, so you need to merge the content of extensions that have the same ID (see the abbreviated sketch in the plugin.xml section below).
Manifest entries
```
Bundle-SymbolicName: org.datanucleus;singleton:=true
Premain-Class: org.datanucleus.enhancer.DataNucleusClassFileTransformer
```
plugin.xml
The plugin.xml file is too big to paste here, but you should be able to merge it by hand. The following code contains all the RDBMS extension points with fixed IDs.
...
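To make the structure easier to picture, here is a heavily abbreviated sketch of how the merged file is laid out, following steps 2-4 above. The concrete IDs, names and schema paths are illustrative placeholders, not the actual merged content:

```xml
<?xml version="1.0"?>
<!-- Root plugin tag taken from datanucleus-core -->
<plugin id="org.datanucleus" name="DataNucleus Core" provider-name="DataNucleus">

    <!-- First: every extension-point tag. Core extension points keep their IDs... -->
    <extension-point id="store_manager" name="StoreManager" schema="schema/store_manager.exsd"/>

    <!-- ...while extension points coming from datanucleus-rdbms get the store.rdbms prefix,
         so the fully qualified ID still resolves to org.datanucleus.store.rdbms.* -->
    <extension-point id="store.rdbms.connectionpool" name="ConnectionPool" schema="schema/connectionpool.exsd"/>

    <!-- Then: all extension tags merged from datanucleus-core, datanucleus-rdbms
         and datanucleus-api-jdo. Extensions with the same point ID are merged into
         a single tag containing the child elements from every contributing project. -->
    <extension point="org.datanucleus.store_manager">
        <!-- merged child elements go here -->
    </extension>

</plugin>
```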
maven-shade-plugin
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>${main.class}</mainClass>
                        <manifestEntries>
                            <Premain-Class>org.datanucleus.enhancer.DataNucleusClassFileTransformer</Premain-Class>
                            <Bundle-SymbolicName>org.datanucleus;singleton:=true</Bundle-SymbolicName>
                        </manifestEntries>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>reference.conf</resource>
                    </transformer>
                </transformers>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>
```
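For completeness: the ManifestResourceTransformer is what writes the Premain-Class and Bundle-SymbolicName entries from step 1 into the shaded jar's MANIFEST.MF, the AppendingTransformer concatenates reference.conf files from all dependencies instead of letting one overwrite the others, and the filter strips the signature files (*.SF, *.DSA, *.RSA) that would otherwise make the merged jar fail signature verification at runtime.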