Spark - Error “A master URL must be set in your configuration” when submitting an app

爱一瞬间的悲伤 2020-12-02 07:31

I have a Spark app which runs with no problem in local mode, but fails when submitted to the Spark cluster.

The error message is as follows:

    org.apache.spark.SparkException: A master URL must be set in your configuration
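For context, Spark resolves the master URL from setMaster() in the code, from the --master flag of spark-submit, or from the spark.master configuration property; the exception above is thrown when none of these is set. A minimal Java sketch of the submit-time approach (the class name, host placeholder, and jar name are illustrative):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class MasterUrlExample {
        public static void main(String[] args) {
            // No setMaster() call here: the master URL is expected from spark-submit, e.g.
            //   spark-submit --master spark://<host>:7077 --class MasterUrlExample app.jar
            // Running this class directly, without any master, reproduces the exception above.
            SparkConf conf = new SparkConf().setAppName("MasterUrlExample");
            JavaSparkContext sc = new JavaSparkContext(conf);
            sc.stop();
        }
    }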
16 answers
  • 2020-12-02 08:07

    Replacing:

        SparkConf sparkConf = new SparkConf().setAppName("SOME APP NAME");

    with

        SparkConf sparkConf = new SparkConf().setAppName("SOME APP NAME").setMaster("local[2]").set("spark.executor.memory","1g");

    did the trick.

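    Note that local[2] runs Spark inside the driver JVM with two worker threads, so hardcoding it keeps the app local even when it is submitted to a cluster. A sketch of a fallback that works both ways, using SparkConf.setIfMissing (the class name is illustrative): it applies local[2] only when spark-submit has not already supplied a master.

        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaSparkContext;

        public class FallbackMasterExample {
            public static void main(String[] args) {
                // Keep the master from spark-submit when present; otherwise default to local mode.
                SparkConf conf = new SparkConf()
                        .setAppName("SOME APP NAME")
                        .setIfMissing("spark.master", "local[2]");
                JavaSparkContext sc = new JavaSparkContext(conf);
                sc.stop();
            }
        }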
  • 2020-12-02 08:10

    If you don't pass a Spark configuration to JavaSparkContext, you get this error. That is:

        JavaSparkContext sc = new JavaSparkContext();

    Solution: provide the configuration:

        JavaSparkContext sc = new JavaSparkContext(conf);

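    For what it's worth, the no-argument JavaSparkContext() constructor builds an empty SparkConf that only picks up spark.* system properties, so unless spark-submit (or a system property) supplies spark.master, the context has no master URL and fails with exactly this error.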
  • 2020-12-02 08:14

    I tried this option while learning Spark processing, setting up the Spark context on my local machine. Prerequisites: 1) keep the Spark session running locally; 2) add the Spark Maven dependency; 3) keep the input file in the root\input folder; 4) output will be placed in the \output folder. The job gets the maximum share value per year. Download any CSV from Yahoo Finance (https://in.finance.yahoo.com/quote/CAPPL.BO/history/). Maven dependency and Scala code below -

    <dependencies>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.11</artifactId>
                <version>2.4.3</version>
                <scope>provided</scope>
            </dependency>
        </dependencies>   
    
    import org.apache.spark.{SparkConf, SparkContext}

    object MaxEquityPriceForYear {
      def main(args: Array[String]): Unit = {
        val sparkConf = new SparkConf().setAppName("ShareMaxPrice").setMaster("local[2]").set("spark.executor.memory", "1g")
        val sc = new SparkContext(sparkConf)
        val input = "./input/CAPPL.BO.csv"
        val output = "./output"
        sc.textFile(input)
          .filter(!_.startsWith("Date"))                              // skip the CSV header row
          .map(_.split(","))
          .map(rec => (rec(0).split("-")(0).toInt, rec(1).toFloat))   // (year, opening price)
          .reduceByKey((a, b) => Math.max(a, b))                      // max price per year
          .saveAsTextFile(output)
        sc.stop()
      }
    }
    
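    Worth noting: the spark-core dependency above is declared with scope provided, so the jar built from this project does not bundle Spark. Outside the IDE it is meant to be launched with spark-submit, which supplies Spark on the classpath and also lets you pass the master, e.g. spark-submit --class MaxEquityPriceForYear --master local[2] <your-app>.jar (the jar name is whatever your build produces).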
  • 2020-12-02 08:18

    Worked for me after replacing

    SparkConf sparkConf = new SparkConf().setAppName("SOME APP NAME");
    

    with

    SparkConf sparkConf = new SparkConf().setAppName("SOME APP NAME").setMaster("local[2]").set("spark.executor.memory","1g");
    

    Found this solution in another thread on Stack Overflow.
