scala

How to kill a Play Framework process?

拜拜、爱过 submitted on 2021-01-28 12:31:52
Question: I closed the terminal window by mistake and I don't know the PID of the running Play process. How do I find it? Or, where is the RUNNING_PID file? I am using Play 2.4.6 and running in non-production mode (`activator run`).

Answer 1: When using dev mode (`activator run`), no RUNNING_PID file is generated. The process won't detach and will be killed when the terminal is closed. By default the RUNNING_PID file is written to `./target/universal/stage/RUNNING_PID` (inside the project's root directory) when
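For the production (detached) case the answer describes, the PID can be read back from that file and the process killed; a minimal Scala sketch, assuming the default stage location quoted above (the snippet is a no-op if the file does not exist):

```scala
import java.nio.file.{Files, Paths}
import scala.sys.process._

// Default location of the PID file for a staged Play app (see answer above).
val pidFile = Paths.get("target/universal/stage/RUNNING_PID")

if (Files.exists(pidFile)) {
  val pid = new String(Files.readAllBytes(pidFile), "UTF-8").trim
  s"kill $pid".! // SIGTERM, so Play shuts down cleanly and removes the file
}
```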

Creating a Random Feature Array in Spark DataFrames

自闭症网瘾萝莉.ら submitted on 2021-01-28 12:15:56
Question: When creating an ALS model, we can extract a userFactors DataFrame and an itemFactors DataFrame. These DataFrames contain a column with an Array. I would like to generate some random data and union it to the userFactors DataFrame. Here is my code:

```scala
val df1: DataFrame = Seq(
  (123, 456, 4.0), (123, 789, 5.0),
  (234, 456, 4.5), (234, 789, 1.0)
).toDF("user", "item", "rating")

val model1 = new ALS()
  .setImplicitPrefs(true)
  .fit(df1)

val iF = model1.itemFactors
val uF = model1.userFactors
```

I then
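The random rows themselves can be built in plain Scala before converting to a DataFrame; a minimal sketch, where the rank, seed, and id range are hypothetical and chosen only to match the shape of userFactors (an integer id column plus an Array[Float] features column):

```scala
// Hypothetical rank: must match the rank used when fitting the ALS model
// (Spark's ALS defaults to 10), or the unioned arrays will differ in length.
val rank = 10
val rnd = new scala.util.Random(42L)

// One (id, features) pair with the same shape as a userFactors row.
def randomFactorRow(id: Int): (Int, Array[Float]) =
  (id, Array.fill(rank)(rnd.nextFloat()))

val extraRows = (1000 until 1005).map(randomFactorRow)

// In Spark this would then become:
//   extraRows.toDF("id", "features").union(uF)
```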

Could not connect to ZooKeeper using Solr in localhost

帅比萌擦擦* submitted on 2021-01-28 11:45:27
Question: I'm using Solr 6 and I'm trying to populate it. Here is the main Scala code I put in place:

```scala
object testChildDocToSolr {
  def main(args: Array[String]): Unit = {
    setProperty("hadoop.home.dir", "c:\\winutils\\")
    val sparkSession = SparkSession.builder()
      .appName("spark-solr-tester")
      .master("local")
      .config("spark.ui.enabled", "false")
      .config("spark.default.parallelism", "1")
      .getOrCreate()
    val sc = sparkSession.sparkContext
    val collectionName = "testChildDocument"
    val testDf = sparkSession.read
```
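The snippet cuts off before the write step, but with the Lucidworks spark-solr connector (which the `spark-solr-tester` app name suggests) the DataFrame is typically saved by pointing the `solr` data source at ZooKeeper. A sketch under that assumption; `localhost:9983` is the embedded ZooKeeper started by `solr start -c` and would need adjusting for an external ensemble:

```scala
// Sketch: write testDf to SolrCloud via the spark-solr data source.
// "zkhost" must point at ZooKeeper, not at Solr's HTTP port (8983) --
// pointing it at 8983 is a common cause of "could not connect" errors.
testDf.write
  .format("solr")
  .option("zkhost", "localhost:9983")
  .option("collection", collectionName)
  .save()
```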

How does map work on Options in Scala?

烂漫一生 submitted on 2021-01-28 11:44:59
Question: I have these two functions:

```scala
def pattern(s: String): Option[Pattern] =
  try {
    Some(Pattern.compile(s))
  } catch {
    case e: PatternSyntaxException => None
  }
```

and

```scala
def mkMatcher(pat: String): Option[String => Boolean] =
  pattern(pat) map (p => (s: String) => p.matcher(s).matches)
```

Map is the higher-order function that applies a given function to each element of a list. Given that definition, I don't see how map is working here.

Answer 1: Map is the higher-order function that applies a given
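The quoted definition of map is stated for lists; on Option the same idea specializes to "zero or one element": map applies the function to the value inside a Some and passes a None through untouched. A self-contained sketch using the question's two functions:

```scala
import java.util.regex.{Pattern, PatternSyntaxException}

def pattern(s: String): Option[Pattern] =
  try Some(Pattern.compile(s))
  catch { case _: PatternSyntaxException => None }

def mkMatcher(pat: String): Option[String => Boolean] =
  pattern(pat).map(p => (s: String) => p.matcher(s).matches)

// pattern("a+") is Some(p), so map wraps p in a matcher function.
val ok: Option[String => Boolean] = mkMatcher("a+")

// pattern("[") throws PatternSyntaxException, so the result is None
// and the function passed to map is never called.
val bad: Option[String => Boolean] = mkMatcher("[")
```

So map here does not iterate; it transforms the Some case (a compiled Pattern) into a Some of a `String => Boolean`, while an invalid regex short-circuits to None.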

Why is this implicit resolution failing?

无人久伴 submitted on 2021-01-28 11:26:38
Question: I have an implicit conversion, below, which feels like it should definitely be working but is definitely not. Can anyone shed any light? I know `implicitly` can sometimes fail when type refinements are used; is that the issue here?

```scala
trait GetItem[A[_], T, R] {
  type Out
  def ret(a: A[T], ref: R): Out
}

object GetItem {
  implicit def ifRefIsInt[A[_], T]: GetItem[A, T, Int] { type Out = A[T] } =
    new GetItem[A, T, Int] {
      type Out = A[T]
      def ret(a: A[T], ref: Int): Out = a
    }
}

import GetItem._
```
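The question cuts off before the failing call site, but the refinement type in the implicit's return position is a frequent culprit: Scala's type inference can lose track of the `Out` member during resolution. A common workaround (a sketch, not necessarily the asker's eventual fix) is the Aux alias, which exposes `Out` as an ordinary type parameter:

```scala
trait GetItem[A[_], T, R] {
  type Out
  def ret(a: A[T], ref: R): Out
}

object GetItem {
  // Aux exposes the Out member as a plain type parameter, which tends to
  // survive implicit resolution better than a structural refinement.
  type Aux[A[_], T, R, O] = GetItem[A, T, R] { type Out = O }

  implicit def ifRefIsInt[A[_], T]: Aux[A, T, Int, A[T]] =
    new GetItem[A, T, Int] {
      type Out = A[T]
      def ret(a: A[T], ref: Int): Out = a
    }
}

// Resolution now pins Out to List[String] at the call site:
val inst = implicitly[GetItem.Aux[List, String, Int, List[String]]]
val out: List[String] = inst.ret(List("a", "b"), 0)
```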

How to import StructuredArgument for structured logging in scala using slf4j and logback

旧时模样 submitted on 2021-01-28 09:56:44
Question: This is probably a stupid question, but my Scala knowledge is a bit lacking. I'm trying to implement structured logging in Scala, and we're using slf4j/logback/logstash. I came across the following post: How does SLF4J support structured logging. It describes how to do it:

```java
import static net.logstash.logback.argument.StructuredArguments.*;

/*
 * Add "name":"value" ONLY to the JSON output.
 *
 * Since there is no parameter for the argument,
 * the formatted message will NOT contain the key/value.
```
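In Scala there is no `import static`; a plain `import` of the class brings its static members into scope, which is the piece the question's Java snippet is missing. A sketch demonstrating the mechanism with a JDK class (the StructuredArguments lines are commented out because they additionally need logstash-logback-encoder on the classpath):

```scala
// Java static members are imported with ordinary Scala import syntax:
import java.lang.Math.{abs, max}

val biggest   = max(3, 7)
val magnitude = abs(-5)

// The same pattern applied to the logging helpers in the question:
//   import net.logstash.logback.argument.StructuredArguments._
//   logger.info("log message {}", keyValue("name", "value"))
```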

Unresolved dependencies in sbt with Play Framework

南楼画角 submitted on 2021-01-28 09:14:32
Question: As I am new to Stack Overflow, please be patient. I am working on a project with Play 2.5, specifically the starter example from the website. As I have to work with Ebean, I followed the steps for setting up Ebean in plugins.sbt:

```scala
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "3.0.0")
```

and also in my build.sbt file:

```scala
name := """play-java"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayJava, PlayEbean)

scalaVersion := "2.11.11"

libraryDependencies +=
```

Facing ReferenceError while running tests with ScalaJS Bundler

青春壹個敷衍的年華 submitted on 2021-01-28 08:47:36
Question: I am facing this issue when upgrading from sbt-scalajs 0.6.x to 1.2.0. With sbt-scalajs v0.6.26 (and sbt-scalajs-bundler v0.14.0), I enabled jsdom support for tests:

```scala
requireJsDomEnv in Test := true
```

and the test suites run fine. But with sbt-scalajs v1.2.0 (and sbt-scalajs-bundler v0.18.0), with jsdom support enabled for tests the same way, I get the following error:

[info] Writing and bundling the test loader
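The question cuts off inside the error log, but one thing worth checking when moving to the Scala.js 1.x toolchain is the setting syntax: `requireJsDomEnv in Test` is the old sbt 0.13-style spelling, and under sbt 1.x the equivalent slash form is preferred. A minimal build.sbt sketch under that assumption (plugin versions are the ones from the question; jsdom itself must also be installed as an npm dev dependency):

```scala
// build.sbt (sketch)
enablePlugins(ScalaJSPlugin, ScalaJSBundlerPlugin)

// sbt 1.x slash syntax; equivalent to `requireJsDomEnv in Test := true`.
Test / requireJsDomEnv := true

// jsdom is resolved from npm, e.g.:
// Test / npmDevDependencies += "jsdom" -> "16.4.0"
```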