Spark: how to run a Spark file from spark-shell

Submitted by 别来无恙 on 2019-11-26 11:53:55

Question



I am using CDH 5.2 and am able to use spark-shell to run commands.

  1. How can I run a file (file.spark) that contains Spark commands?
  2. Is there any way to run/compile Scala programs in CDH 5.2 without sbt?

Thanks in advance


Answer 1:


To load an external file from spark-shell, simply do

:load PATH_TO_FILE

This will execute everything in your file, line by line, as if you had typed it at the prompt.
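
For instance, a minimal sketch of what file.spark might contain (hypothetical file name; sc is the SparkContext that spark-shell already provides):

// file.spark -- hypothetical example script
val numbers = sc.parallelize(1 to 100)        // build an RDD from a local range
val evens = numbers.filter(_ % 2 == 0)        // keep only the even values
println(s"even count: ${evens.count()}")      // run the job and print the result

Then, inside the shell:

scala> :load /path/to/file.spark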

I don't have a solution for your sbt question though, sorry :-)




Answer 2:


From the command line, you can use

spark-shell -i file.scala

to run the code written in file.scala.
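
Note that with -i the shell executes the file on startup and then stays at the interactive prompt; end the file with sys.exit if you want it to quit when the script is done. A minimal sketch (hypothetical file name):

// file.scala -- hypothetical script for spark-shell -i
val words = sc.parallelize(Seq("spark", "shell", "spark"))
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)   // classic word count
counts.collect().foreach(println)                        // print each (word, count) pair
sys.exit(0)                                              // optional: quit instead of staying interactive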




Answer 3:


You can use either sbt or Maven to compile Spark programs. Apache Spark is published to Maven Central, so you only need to declare the dependency (note the Scala version suffix in the artifact id):

<dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.2.0</version>
</dependency>
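
Once compiled, a standalone program is run with spark-submit rather than typed into the shell. A minimal sketch (hypothetical class and app name):

// SimpleApp.scala -- hypothetical standalone Spark program
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleApp")  // app name shown in the UI
    val sc = new SparkContext(conf)                     // standalone apps build their own context
    val evens = sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
    println(s"even count: $evens")
    sc.stop()                                           // release cluster resources
  }
}

Build the jar with mvn package and run it with spark-submit --class SimpleApp followed by the path to the jar Maven produced.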

As for running a file of Spark commands: you can simply do this:

echo '
   import org.apache.spark.sql._
   val ssc = new SQLContext(sc)
   ssc.sql("select * from mytable").collect()
' > spark.input

Now pipe the script into the shell:

cat spark.input | spark-shell



Answer 4:


Just to give more perspective to the answers:

spark-shell is a Scala REPL.

You can type :help to see the list of operations that are possible inside the shell:

scala> :help
All commands can be abbreviated, e.g., :he instead of :help.
:edit <id>|<line>        edit history
:help [command]          print this summary or command-specific help
:history [num]           show the history (optional num is commands to show)
:h? <string>             search the history
:imports [name name ...] show import history, identifying sources of names
:implicits [-v]          show the implicits in scope
:javap <path|class>      disassemble a file or class name
:line <id>|<line>        place line(s) at the end of history
:load <path>             interpret lines in a file
:paste [-raw] [path]     enter paste mode or paste a file
:power                   enable power user mode
:quit                    exit the interpreter
:replay [options]        reset the repl and replay all previous commands
:require <path>          add a jar to the classpath
:reset [options]         reset the repl to its initial state, forgetting all session entries
:save <path>             save replayable session to a file
:sh <command line>       run a shell command (result is implicitly => List[String])
:settings <options>      update compiler options, if possible; see reset
:silent                  disable/enable automatic printing of results
:type [-v] <expr>        display the type of an expression without evaluating it
:kind [-v] <expr>        display the kind of expression's type
:warnings                show the suppressed warnings from the most recent line which had any

Note the :load command above, which interprets the lines in a file.
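
The :require command from the same list is handy alongside :load when your script depends on code in a pre-built jar. A hypothetical session (both paths are placeholders):

scala> :require /path/to/mylib.jar
scala> :load /path/to/file.spark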




Answer 5:


Tested on both spark-shell version 1.6.3 and spark2-shell version 2.3.0.2.6.5.179-4: you can pipe directly to the shell's stdin, for example

spark-shell <<< "1+1"

or in your use case,

spark-shell < file.spark


Source: https://stackoverflow.com/questions/27717379/spark-how-to-run-spark-file-from-spark-shell
