The `--archives`, `--files`, and `--py-files` options, and the corresponding `sc.addFile` and `sc.addPyFile` methods, are quite confusing. Can someone explain them clearly?
These options are truly scattered all over the place.
In general, add your data files via `--files` or `--archives`, and your code files via `--py-files`. The latter are added to the Python search path (c.f., here), so you can import and use them.
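For instance, a submission that ships both kinds of dependencies might look like the following. This is an illustrative sketch, not a prescribed layout: the file names `app.py`, `deps.zip`, and `lookup.csv` are made up for the example.

```shell
# Hypothetical invocation: ship code via --py-files, data via --files.
# deps.zip is importable on the executors; lookup.csv is downloaded to
# each node's working directory.
spark-submit \
  --py-files deps.zip \
  --files lookup.csv \
  app.py
```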
As you can imagine, these CLI arguments are actually handled by the `addFile` and `addPyFile` functions (c.f., here):
Behind the scenes, `pyspark` invokes the more general `spark-submit` script.

You can add Python .zip, .egg or .py files to the runtime path by passing a comma-separated list to `--py-files`.
The `--files` and `--archives` options support specifying file names with `#`, similar to Hadoop. For example, you can specify `--files localtest.txt#appSees.txt`; this will upload the file you have locally named `localtest.txt` into HDFS, but it will be linked to by the name `appSees.txt`, and your application should use the name `appSees.txt` to reference it when running on YARN.
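The `local#alias` syntax can be illustrated with a small parser. This is not Spark's actual implementation, just a sketch of how the two halves of the argument relate; the function name `split_alias` is made up.

```python
def split_alias(spec):
    """Split 'localtest.txt#appSees.txt' into (local_path, name_on_cluster).

    When no '#' is present, the file keeps its local name on the cluster.
    """
    local, sep, alias = spec.partition("#")
    return (local, alias if sep else local)

print(split_alias("localtest.txt#appSees.txt"))  # -> ('localtest.txt', 'appSees.txt')
print(split_alias("plain.txt"))                  # -> ('plain.txt', 'plain.txt')
```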
`addFile(path)`
Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

`addPyFile(path)`
Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.
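The reason a .zip works with `addPyFile` is that Python can import modules directly from a zip archive once it is on `sys.path`. The sketch below shows that mechanism in plain Python, without Spark; the module name `mylib` and its contents are made up for illustration.

```python
import os
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "mylib.zip")

# Build a tiny dependency archive containing one module.
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("mylib.py", "def greet():\n    return 'hello from the zip'\n")

# Putting the archive on sys.path makes its modules importable,
# which is the effect addPyFile has on each executor's interpreter.
sys.path.insert(0, zip_path)

import mylib
print(mylib.greet())  # -> hello from the zip
```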
Source: https://stackoverflow.com/questions/38066318/whats-the-difference-between-archives-files-py-files-in-pyspark-job-argum