sbt-assembly

Trying to use sbt-assembly

China☆狼群 submitted on 2019-12-12 02:57:42

Question: The sbt version is 0.13.9 and Scala is 2.11.7. I know previous versions of sbt relied on Scala 2.10 - is that still the case? I have a Java project to which I added an assembly.sbt file in the project directory (as per the sbt-assembly instructions for this version of sbt-assembly): addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2") I ran sbt reload and clean as well as compile. However, when I try to run assembly, I get the following error: > assembly [error] Not a valid command: assembly
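A likely cause of "Not a valid command: assembly" is that sbt never loaded the plugin, for example because the addSbtPlugin line ended up in a file at the build root rather than under project/. A minimal sketch of the expected layout, assuming the sbt 0.13.x setup from the question (the project name is made up):

    // project/assembly.sbt -- plugin declarations must live under project/
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2")

    // build.sbt -- nothing assembly-specific is strictly required for a plain Java project
    name := "my-java-project"      // hypothetical name
    autoScalaLibrary := false      // optional: keep the Scala library out of a Java-only artifact

After sbt reload, assembly should show up as a task; if it still does not, running plugins from the sbt prompt shows whether sbt-assembly was actually loaded.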

Where do you put assemblyMergeStrategy in build.sbt?

给你一囗甜甜゛ submitted on 2019-12-11 20:39:06

Question: I have a MergeStrategy problem. How do I resolve it? Why are all those squiggly lines there? The error message is Type mismatch, expected: String => MergeStrategy, actual: String => Any. I am new to Scala, so I have no idea what that syntax means. I have tried copying different merge strategies from all over Stack Overflow and none of them work. I have Scala version 2.12.7 and sbt version 1.2.6. My build.sbt looks like this: lazy val root = (project in file(".")). settings( name := "bigdata
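The "expected: String => MergeStrategy, actual: String => Any" message usually means one of the cases in the partial function returns something other than a MergeStrategy (it can also be an IDE-only warning rather than a real compile error). A sketch that type-checks, assuming sbt 1.x with a recent sbt-assembly; the setting goes inside .settings(...) of the project definition, and the project name below is made up:

    // build.sbt
    lazy val root = (project in file("."))
      .settings(
        name := "bigdata-app",    // illustrative
        assemblyMergeStrategy in assembly := {
          case PathList("META-INF", xs @ _*) => MergeStrategy.discard   // drop manifests and signatures
          case x =>
            // every branch must yield a MergeStrategy; delegate the rest to the default
            val oldStrategy = (assemblyMergeStrategy in assembly).value
            oldStrategy(x)
        }
      )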

Error about sbt and YARN when using Spark

六眼飞鱼酱① submitted on 2019-12-11 17:37:41

Question: Hi, when I run this command: >sbt I see this output: beyhan@beyhan:~/sparksample$ sbt Starting sbt: invoke with -help for other options [info] Set current project to Spark Sample (in build file:/home/beyhan/sparksample/) Then when I run: >compile I get this error: [error] {file:/home/beyhan/sparksample/}default-f390c8/*:update: sbt.ResolveException: unresolved dependency: org.apache.hadoop#hadoop-yarn-common;1.0.4: not found [error] unresolved dependency:
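hadoop-yarn-common;1.0.4 can never resolve because the YARN modules were only ever published for Hadoop 2.x; the error usually means a YARN-related dependency picked up an old default Hadoop version. A hedged sketch of pinning the versions explicitly (the exact numbers are illustrative and should match the cluster):

    // build.sbt
    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-core"         % "1.5.1",   // illustrative Spark version
      "org.apache.hadoop" %  "hadoop-client"      % "2.6.0",   // Hadoop 2.x, which actually ships YARN
      "org.apache.hadoop" %  "hadoop-yarn-common" % "2.6.0"
    )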

What does double colon (or colon-colon) :: mean in Scala?

你。 submitted on 2019-12-11 15:56:14

Question: I was having an issue with the sbt build of my Scala project (duplicate entry: META-INF/MANIFEST.MF) and the following lines solved the problem: assemblyMergeStrategy in assembly := { case PathList("META-INF", xs @ _*) => (xs map {_.toLowerCase}) match { case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) => MergeStrategy.discard case _ => MergeStrategy.last } } I am now trying to understand what the double colon means in the above context. I found an answer in
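In short, :: is List's cons operator: as an expression it prepends an element to a list, and in a pattern it splits a list into head and tail, so "manifest.mf" :: Nil matches a one-element list containing exactly "manifest.mf". A small self-contained illustration (all names made up):

    // ConsDemo.scala
    object ConsDemo extends App {
      val parts: List[String] = "manifest.mf" :: Nil        // same as List("manifest.mf")

      val verdict = parts match {
        case "manifest.mf" :: Nil => "a one-element list holding the manifest"   // matches head :: tail
        case head :: tail         => s"starts with $head, followed by ${tail.size} more"
        case Nil                  => "empty list"
      }
      println(verdict)   // prints: a one-element list holding the manifest
    }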

Spark org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD

冷暖自知 submitted on 2019-12-11 06:46:40

Question: I am using Jena in Spark. I am facing a weird issue when deploying on the cluster (it does not happen in local dev, for which I do not need to build an uber jar). When I deploy on the cluster I get the following exceptions: Caused by: org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD at org.apache.jena.rdf.model.impl.RDFReaderFImpl.getReader(RDFReaderFImpl.java:61) at org.apache.jena.rdf.model.impl.ModelCom.read(ModelCom.java:305) 1 - I wonder, generally speaking
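When code that works locally loses readers inside an uber jar, a frequent culprit is the assembly step overwriting the ServiceLoader registration files under META-INF/services that Jena relies on to find its parsers. A hedged sketch of an sbt-assembly merge strategy that concatenates those files instead of discarding them (the fallback cases are simplified for illustration):

    // build.sbt
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat    // keep all registrations
      case PathList("META-INF", xs @ _*)             => MergeStrategy.discard   // manifests, signatures
      case _                                         => MergeStrategy.first     // blunt fallback, sketch only
    }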

spark-submit fails when case class fields are reserved Java keywords with backticks

偶尔善良 submitted on 2019-12-11 04:38:04

Question: I have backticks used for a reserved keyword. One example for the case class is as follows: case class IPC( `type`: String, main: Boolean, normalized: String, section:String, `class`: String, subClass: String, group:String, subGroup: String ) I have declared the SparkSession as follows: def run(params: SparkApp.Params): Unit ={ val sparkSession = SparkSession.builder.master("local[*]").appName("SparkUsptoParser").getOrCreate() // val conf = new SparkConf().setAppName("SparkUsptoParser").set(
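For reference, backticks only escape the identifier in Scala source; the compiled field is still called type or class, so the case class itself is legal and usable as a Dataset element type. A small illustrative sketch (not the poster's code) showing the backtick syntax end to end:

    // IpcDemo.scala -- illustrative only
    import org.apache.spark.sql.SparkSession

    case class IPC(`type`: String, main: Boolean)   // top-level, so Spark's encoders can see it

    object IpcDemo extends App {
      val spark = SparkSession.builder.master("local[*]").appName("IpcDemo").getOrCreate()
      import spark.implicits._

      val ds = Seq(IPC(`type` = "utility", main = true)).toDS()
      ds.select("type").show()   // the underlying column name is plain "type"
      spark.stop()
    }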

parboiled2 and Spray cause conflicting cross-version suffixes

限于喜欢 submitted on 2019-12-10 21:18:06

Question: I'm trying to add parboiled2 as a dependency to my project and follow the Calculator example, but it conflicts with spray. My current build.sbt file includes: "io.spray" %% "spray-json" % "1.3.1" withSources() withJavadoc(), "io.spray" %% "spray-can" % sprayV withSources() withJavadoc(), "io.spray" %% "spray-routing" % sprayV withSources() withJavadoc(), "io.spray" %% "spray-testkit" % sprayV % "test" withSources() withJavadoc(), When I add "org.parboiled" %% "parboiled" % "2.0.1" withSources
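The "conflicting cross-version suffixes" check fires when the dependency graph contains the same module built for two different Scala binary versions (a _2.10 and a _2.11 flavour, for instance). Two hedged workarounds are sketched below, excluding the transitive copy or forcing a single version; the coordinates are illustrative and should be taken from the actual error message:

    // build.sbt -- option 1: exclude the transitive artifact carrying the wrong suffix
    libraryDependencies += "io.spray" %% "spray-routing" % sprayV exclude("org.parboiled", "parboiled-scala_2.10")   // illustrative module name

    // option 2: force one version of the conflicting module for the whole build
    dependencyOverrides += "org.parboiled" %% "parboiled" % "2.0.1"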

Spark fat jar to run multiple versions on YARN

浪子不回头ぞ submitted on 2019-12-10 20:57:28

Question: I have an older Spark setup with YARN that I don't want to wipe out, but I still want to use a newer version. I found a couple of posts referring to how a fat jar can be used for this. Many SO posts point to either Maven (officially supported) or sbt to build a fat jar, because it's not directly available for download. There seem to be multiple plugins to do it using Maven: maven-assembly-plugin, maven-shade-plugin, onejar-maven-plugin etc. However, I can't figure out if I really need a
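If the intent is simply to ship the newer Spark together with the application (one reading of the question), the sbt route is the sbt-assembly plugin with Spark bundled rather than marked provided. A minimal, hedged sketch under that assumption; names and versions are illustrative:

    // project/assembly.sbt
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2")

    // build.sbt
    name := "spark-fat-jar"        // hypothetical
    scalaVersion := "2.11.7"
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.6.0",   // illustrative newer Spark
      "org.apache.spark" %% "spark-yarn" % "1.6.0"    // YARN client module for the bundled Spark
    )
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case _                             => MergeStrategy.first   // simplified fallback for the sketch
    }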

How should we address local dependencies in sbt files for Spark

坚强是说给别人听的谎言 submitted on 2019-12-10 19:36:06

Question: I have this sbt file: offline := true name := "hello" version := "1.0" scalaVersion := "2.11.7-local" scalaHome := Some(file("/home/ubuntu/software/scala-2.11.7")) libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.0" % "provided" How can I tell it to use this local path for Spark rather than fetching from the web? /home/ubuntu/software/spark-1.5.0-bin-hadoop2.6 It just tries to connect to the internet for the Spark dependencies, and my VM doesn't have internet access due to security restrictions. I
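One hedged way to keep the build offline is to drop the web dependency entirely and put the jars that ship with the local Spark distribution on sbt's unmanaged classpath; the path below is the one from the question, the rest is a sketch:

    // build.sbt
    offline := true
    unmanagedJars in Compile ++= {
      val sparkLib = file("/home/ubuntu/software/spark-1.5.0-bin-hadoop2.6/lib")
      (sparkLib ** "*.jar").classpath      // every jar bundled with the local Spark install
    }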

Resolving dependencies when creating a JAR through sbt-assembly

时光毁灭记忆、已成空白 submitted on 2019-12-10 17:49:46

Question: I want to create a big JAR file, for which I am trying to use sbt-assembly. I installed sbt-assembly following GitHub and this answer. When I ran sbt assembly, I got this error: java.lang.RuntimeException: deduplicate: different file contents found in the following: /home/UserName/.ivy2/cache/org.eclipse.jetty.orbit/javax.servlet/orbits/javax.servlet-2.5.0.v201103041518.jar:javax/servlet/SingleThreadModel.class /home/UserName/.ivy2/cache/org.mortbay.jetty/servlet-api/jars/servlet-api-2.5-20081211
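This deduplicate failure is the classic clash between the Jetty orbit javax.servlet artifact and servlet-api, which both ship the same javax/servlet classes; sbt-assembly refuses to pick one silently. Two hedged remedies are sketched below, assuming a reasonably recent sbt-assembly (older releases spelled the setting mergeStrategy in assembly); the coordinates come from the error above:

    // build.sbt -- option 1: tell the merge strategy which copy of the duplicated classes wins
    assemblyMergeStrategy in assembly := {
      case PathList("javax", "servlet", xs @ _*) => MergeStrategy.first
      case x =>
        val oldStrategy = (assemblyMergeStrategy in assembly).value
        oldStrategy(x)
    }

    // option 2: keep the orbit artifact off the classpath altogether
    libraryDependencies ~= { _.map(_.exclude("org.eclipse.jetty.orbit", "javax.servlet")) }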