Even once you have sorted out the confusion between Sqoop1 and Sqoop2, you still have to clear the Oozie hurdle.
My advice: stop toying with command-line arguments and use standard Hadoop config files.
1. On your Gateway node (for unit tests), edit /etc/sqoop/conf/sqoop-site.xml to set the Client properties:

    ...
    <property>
      <name>sqoop.metastore.client.enable.autoconnect</name>
      <value>true</value>
    </property>
    <property>
      <name>sqoop.metastore.client.autoconnect.url</name>
      <value>jdbc:hsqldb:hsql://FQDN:16000/sqoop</value>
    </property>
    <property>
      <name>sqoop.metastore.client.autoconnect.username</name>
      <value>sa</value>
    </property>
    <property>
      <name>sqoop.metastore.client.autoconnect.password</name>
      <value></value>
    </property>
    <property>
      <name>sqoop.metastore.client.record.password</name>
      <value>false</value>
    </property>
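A quick sanity check of those client settings, once the shared Metastore from step 2 is running: list the saved jobs from the Gateway node. With autoconnect enabled it should reach the shared DB instead of a private local one, with no --meta-connect argument needed:

    sqoop job --list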
1b. Upload that file to HDFS somewhere (for use with Oozie jobs)
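For example (the HDFS paths here are just placeholders, pick any location your Oozie workflows can read):

    hdfs dfs -mkdir -p /user/oozie/share/conf
    hdfs dfs -put /etc/sqoop/conf/sqoop-site.xml /user/oozie/share/conf/sqoop-site.xml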
2. On the node that will actually run the global Metastore DB, edit the same file and also add two extra Server properties (in this example the DB files are stored under /var/lib/...):

    <property>
      <name>sqoop.metastore.server.port</name>
      <value>16000</value>
    </property>
    <property>
      <name>sqoop.metastore.server.location</name>
      <value>/var/lib/sqoop/data/shared.db</value>
    </property>
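Then start the Metastore service on that node. The plain sqoop metastore tool is enough for a quick test (wire it into your init system or nohup it for real use), and --shutdown stops it cleanly:

    # start the shared Metastore (runs in the foreground)
    sqoop metastore
    # stop it when needed
    sqoop metastore --shutdown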
2b. Make sure you CHECKPOINT that database from time to time: it rewrites the "script" file with the current state and clears the "redo log" file. Then back up the "script" file somewhere safe as a snapshot of the current DB state, in case you lose the node and its disk -- yes, these things happen.
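A minimal sketch of that maintenance step over JDBC, assuming the HSQLDB driver bundled with Sqoop is on the classpath and the URL/credentials from step 1 (adjust the driver class if your HSQLDB version differs):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MetastoreCheckpoint {
        public static void main(String[] args) throws Exception {
            // HSQLDB driver class as shipped with Sqoop 1.x (assumption: check your bundled version)
            Class.forName("org.hsqldb.jdbcDriver");
            // same URL and user as the client autoconnect properties; the default "sa" password is empty
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hsqldb:hsql://FQDN:16000/sqoop", "sa", "");
                 Statement stmt = conn.createStatement()) {
                // rewrites shared.db.script with the current state and clears shared.db.log
                stmt.execute("CHECKPOINT");
            }
        }
    }

Once the CHECKPOINT returns, copy /var/lib/sqoop/data/shared.db.script (and the .properties file) to your backup location.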
3. In your Oozie Sqoop actions, set the Client properties with a <job-xml> entry targeting the config file in HDFS.
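For example, a Sqoop action along these lines -- the action name, schema version, HDFS path and saved-job name are placeholders to adapt to your setup:

    <action name="sqoop-import">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- pulls the Metastore client properties uploaded in step 1b -->
            <job-xml>/user/oozie/share/conf/sqoop-site.xml</job-xml>
            <command>job --exec my_saved_job</command>
        </sqoop>
        <ok to="end"/>
        <error to="kill"/>
    </action>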
If you are interested in the actual Sqoop source code that handles these properties and the Metastore client, have a look at the Sqoop project sources.