scala

Scala - what is the benefit of Auxiliary constructors always having to call another constructor?

我只是一个虾纸丫 submitted on 2021-02-10 06:24:00
Question: Coming from the Java world, I don't see how the restrictions on auxiliary constructors in Scala are helpful. In Java, I know we can have multiple constructors as long as their signatures are different. In Scala, the first call in an auxiliary constructor needs to be another auxiliary constructor or the class's primary constructor. Why? Doesn't this make Scala more restrictive?

Answer 1: Scala essentially guarantees that the primary constructor will always be called, so it gives a single point of entry for initializing every instance of the class.
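A minimal sketch of what the answer describes (the Rational class is illustrative, not from the question): because every auxiliary constructor must chain into another constructor, all construction paths eventually reach the primary one, so invariants checked there hold for every instance.

// All construction funnels through the primary constructor,
// so its validation runs no matter which constructor the caller uses.
class Rational(val numerator: Int, val denominator: Int) {
  require(denominator != 0, "denominator must be non-zero")  // single checkpoint

  // Auxiliary constructor: its first statement must call another constructor.
  def this(numerator: Int) = this(numerator, 1)
}

// new Rational(3)    -> Rational(3, 1); the require above still runs
// new Rational(1, 0) -> throws IllegalArgumentException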

How to provide a stub implementation of JDK classes (like java.awt) in a Scala.js project?

本秂侑毒 submitted on 2021-02-10 06:11:58
Question: Here is my attempt to provide a dummy implementation of the part of java.awt related to Graphics2D:

package java

package object awt {
  object RenderingHints {
    type Key = Int
    val KEY_TEXT_ANTIALIASING = 0
    val VALUE_TEXT_ANTIALIAS_ON = 0
  }

  object Color {
    val GREEN = 0
  }
  type Color = Int

  object image {
    object BufferedImage {
      val TYPE_INT_RGB = 0
    }
    class BufferedImage(w: Int, h: Int, tpe: Int) {
      def createGraphics: Graphics2D = new Graphics2D
    }
  }

  class Graphics2D {
    def setColor(c: Color): Unit = ()
  }
}
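A short usage sketch against the stub as written (assuming the closing braces above, which the excerpt cut off): caller code keeps its java.awt imports, but they resolve to the stub definitions instead of the real JDK classes.

// Hypothetical caller code in a Scala.js module: it type-checks because
// Color is an alias for Int and setColor/createGraphics are no-op stubs.
import java.awt.Color
import java.awt.image.BufferedImage

val img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB)
val g2d = img.createGraphics
g2d.setColor(Color.GREEN)  // does nothing in the stub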

Cannot deserialize a tuple

泄露秘密 submitted on 2021-02-10 05:23:54
Question: If I do the following:

import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization.{read, write}

implicit val formats = DefaultFormats

val tuple = (5.0, 5.0)
val json = write(tuple)
println("Write: " + json)
println("Read: " + read[(Double, Double)](json))

I get the following output:

Write: {"_1$mcD$sp":5.0,"_2$mcD$sp":5.0}
Exception in thread "main" org.json4s.package$MappingException: No usable value for _1
Did not find value which can be converted into double
    at org.json4s
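The odd field names in the output come from Scala's specialization of Tuple2[Double, Double] (the `_1$mcD$sp` accessor), which json4s serializes but cannot map back to `_1` on read. A minimal workaround sketch, replacing the tuple with a case class (the Point name is illustrative):

import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization.{read, write}

// Case classes have stable field names, so json4s round-trips them cleanly.
case class Point(x: Double, y: Double)

implicit val formats: DefaultFormats.type = DefaultFormats

val json = write(Point(5.0, 5.0))   // {"x":5.0,"y":5.0}
val back = read[Point](json)        // Point(5.0,5.0)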

IllegalArgumentException when computing a PCA with Spark ML

試著忘記壹切 submitted on 2021-02-10 05:08:42
Question: I have a parquet file containing the id and features columns, and I want to apply the PCA algorithm:

val dataset = spark.read.parquet("/usr/local/spark/dataset/data/user")

val features = new VectorAssembler()
  .setInputCols(Array("id", "features"))
  .setOutputCol("features")

val pca = new PCA()
  .setInputCol("features")
  .setK(50)
  .fit(dataset)
  .setOutputCol("pcaFeatures")

val result = pca.transform(dataset).select("pcaFeatures")
pca.save("/usr/local/spark/dataset/out")

but I get this exception
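A sketch of one plausible fix, assuming the two columns are numeric: in the code above the VectorAssembler's output column reuses the name of an existing input column, the assembler is never actually applied to the data, and setOutputCol is called on the fitted model instead of the estimator. Rearranged:

// Hypothetical rearrangement: assemble into a fresh column, run the
// assembler, and configure PCA fully before fitting. Note setK(50)
// still requires the assembled vector to have at least 50 dimensions.
import org.apache.spark.ml.feature.{PCA, VectorAssembler}

val dataset = spark.read.parquet("/usr/local/spark/dataset/data/user")

val assembled = new VectorAssembler()
  .setInputCols(Array("id", "features"))
  .setOutputCol("assembled")      // avoid colliding with the input column
  .transform(dataset)

val pcaModel = new PCA()
  .setInputCol("assembled")
  .setOutputCol("pcaFeatures")
  .setK(50)
  .fit(assembled)

val result = pcaModel.transform(assembled).select("pcaFeatures")
pcaModel.save("/usr/local/spark/dataset/out")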

How to obtain the average of an array-type column in scala-spark over all row entries per entry?

一世执手 submitted on 2021-02-10 04:57:27
Question: I have an array column with 512 double elements and want to get the per-row average. Take an array column of length 3 as an example:

val x = Seq("2 4 6", "0 0 0").toDF("value")
  .withColumn("value", split($"value", " "))
x.printSchema()
x.show()

root
 |-- value: array (nullable = true)
 |    |-- element: string (containsNull = true)

+---------+
|    value|
+---------+
|[2, 4, 6]|
|[0, 0, 0]|
+---------+

The following result is desired:

x.select(..... as "avg_value").show()

------------
|avg_value |
------------
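A minimal sketch, assuming Spark 2.4+ where SQL higher-order functions are available: sum the elements per row with aggregate (casting the string elements to double first) and divide by the array size.

import org.apache.spark.sql.functions._

// Per-row sum of the casted elements, divided by the array length.
val avgDf = x.select(
  expr("aggregate(value, 0D, (acc, v) -> acc + cast(v AS double)) / size(value)")
    .as("avg_value"))
avgDf.show()
// +---------+
// |avg_value|
// +---------+
// |      4.0|
// |      0.0|
// +---------+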

Relation between Function1 and Reader Monad

我的梦境 submitted on 2021-02-10 04:18:02
Question: Although I understand the implementation of the Reader monad, of which I give two of the most prominent formulations below:

case class Reader[R, A](run: R => A)

def readerMonad[R] = new Monad[({type f[x] = Reader[R, x]})#f] {
  def unit[A](a: => A): Reader[R, A] = Reader(_ => a)
  override def flatMap[A, B](st: Reader[R, A])(f: A => Reader[R, B]): Reader[R, B] =
    Reader(r => f(st.run(r)).run(r))
}

or, more simply:

case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
}
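A minimal sketch of the relation the title asks about: Reader[R, A] is just a wrapper around Function1[R, A], so the same monad operations can be written directly against plain functions (the names below are illustrative):

// unit lifts a value into a function that ignores its environment;
// flatMap threads the same environment R through both computations.
def unit[R, A](a: A): R => A = _ => a
def flatMap[R, A, B](fa: R => A)(f: A => (R => B)): R => B =
  r => f(fa(r))(r)

// Usage: both steps read the same "environment" (here, a String).
val greet: String => String = name => s"Hello, $name"
val loud: String => String = flatMap(greet)(g => name => s"$g! ($name)")
// loud("world") == "Hello, world! (world)"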

How to subtract two consecutive elements in a list in Scala?

五迷三道 submitted on 2021-02-10 04:14:08
Question: I would like to subtract consecutive elements in a list of numbers in Scala. For example, given this list:

val sortedList = List(4, 5, 6)

I would like an output list diffList = (1, 1), where 5 - 4 = 1 and 6 - 5 = 1. I tried the following code:

var sortedList = List[Int]()
var diffList = List[Int]()
for (i <- 0 to (sortedList.length - 1); j <- i + 1 to sortedList.length - 1) {
  val diff = sortedList(j) - sortedList(i)
  diffList = diffList :+ diff
}

I have the following result for
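A concise sketch of the usual idiom: pair each element with its successor using sliding (or zip with the tail) instead of the nested index loop, which in the attempt above also pairs non-adjacent elements such as 6 - 4.

val sortedList = List(4, 5, 6)

// sliding(2) yields each adjacent pair: List(4, 5), then List(5, 6).
val diffList = sortedList.sliding(2).map { case Seq(a, b) => b - a }.toList
// diffList == List(1, 1)

// Equivalent formulation with zip:
val diffList2 = sortedList.zip(sortedList.tail).map { case (a, b) => b - a }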
