Scala's Nothing vs partial unification

て烟熏妆下的殇ゞ · submitted on 2021-02-08 06:38:34

Question: I would expect the following code to compile just fine:

```scala
trait Widen[M[_]] { def widen[A, B >: A](ma: M[A]): M[B] }

object Widen {
  implicit class Ops[M[_], A](ma: M[A]) {
    def widen[B >: A](implicit ev: Widen[M]): M[B] = ev.widen[A, B](ma)
  }

  // implicit class OpsNothing[M[_]](ma: M[Nothing]) {
  //   def widen[B](implicit ev: Widen[M]): M[B] = ev.widen(ma)
  // }

  implicit val WidenList = new Widen[List] {
    def widen[A, B >: A](l: List[A]): List[B] = l
  }
}

import Widen._
List.empty[Some[Int]].widen
```
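The question text is cut off before the error, but one detail worth noting: supplying the target type explicitly lets the `Ops` conversion infer `M` and `A` (with `-Ypartial-unification` on Scala 2.12, or by default on 2.13+). A sketch reusing the question's definitions; the explicit `widen[Option[Int]]` call, the non-empty list, and the type annotation on the implicit are my additions:

```scala
trait Widen[M[_]] { def widen[A, B >: A](ma: M[A]): M[B] }

object Widen {
  implicit class Ops[M[_], A](ma: M[A]) {
    def widen[B >: A](implicit ev: Widen[M]): M[B] = ev.widen[A, B](ma)
  }
  implicit val WidenList: Widen[List] = new Widen[List] {
    def widen[A, B >: A](l: List[A]): List[B] = l
  }
}

object WidenDemo {
  import Widen._
  // Pinning B explicitly lets inference succeed: Some[Int] widens to Option[Int].
  val widened: List[Option[Int]] = List(Some(1), Some(2)).widen[Option[Int]]
}
```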

System is not terminated in scala application in docker on GKE

こ雲淡風輕ζ · submitted on 2021-02-08 06:24:37

Question: I have a Scala application that uses Akka Streams and runs as a cronjob in Google Kubernetes Engine. But the pod stays in the "Running" state (not completed), and the Java process is still running inside the container. Here's what I do exactly: I build the docker image with sbt-native-packager and `sbt docker:publish`. When the job is done, I terminate it with a regular `system.terminate` call.

```scala
implicit val system: ActorSystem = ActorSystem("actor-system")
/* doing actual stuff */
stream
```
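A common reason for this symptom is that `main` returns while non-daemon threads (such as Akka's default dispatcher) are still alive, so the JVM never exits and the pod stays Running. The usual remedy is to block on `system.whenTerminated` before `main` returns, and force the JVM down with `sys.exit(0)` if some thread still lingers. Akka isn't available in this sketch, so a plain `Promise`/`Future` stands in for `whenTerminated`; the blocking pattern is the same:

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

object ShutdownDemo {
  // Stand-in for system.whenTerminated: completed once shutdown work is done.
  val terminated: Promise[Unit] = Promise()

  def main(args: Array[String]): Unit = {
    // ... run the actual stream, then trigger termination:
    terminated.success(())

    // Block until termination has really finished before main returns.
    Await.ready(terminated.future, 10.seconds)
    // In the real app, a final sys.exit(0) guarantees the container exits
    // even if a non-daemon thread is still alive.
  }
}
```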

Update multiple values in a sequence

空扰寡人 · submitted on 2021-02-08 05:47:44

Question: To get a sequence with one value updated, one can use `seq.updated(index, value)`. I want to set a new value for a range of elements. Is there a library function for that? I currently use the following function:

```scala
def updatedSlice[A](seq: List[A], ind: Iterable[Int], value: A): List[A] =
  if (ind.isEmpty) seq
  else updatedSlice(seq.updated(ind.head, value), ind.tail, value)
```

Besides the need of writing a function, this seems to be inefficient, and also works only for lists, rather than arbitrary
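There is no single standard-library call for this, but the repeated `updated` (which rebuilds the list prefix on every call) can be replaced by one `zipWithIndex`/`map` pass that works for any `Seq`. A sketch; `updatedAll` is a hypothetical helper name, not a library method:

```scala
object UpdatedAllDemo {
  // Replace the element at every index in `ind` with `value`, in a single pass.
  def updatedAll[A](seq: Seq[A], ind: Iterable[Int], value: A): Seq[A] = {
    val targets = ind.toSet
    seq.zipWithIndex.map { case (a, i) => if (targets(i)) value else a }
  }
}
```

This is O(n) regardless of how many indices are updated, whereas chaining `List.updated` costs O(index) per update.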

Converting String RDD to Int RDD

こ雲淡風輕ζ · submitted on 2021-02-08 05:38:42

Question: I am new to Scala. I want to know whether, when processing large datasets with Scala in Spark, it is possible to read an Int RDD instead of a String RDD. I tried the below:

```scala
val intArr = sc
  .textFile("Downloads/data/train.csv")
  .map(line => line.split(","))
  .map(_.toInt)
```

But I am getting the error:

```
error: value toInt is not a member of Array[String]
```

I need to convert to an Int RDD because down the line I need to do the below:

```scala
val vectors = intArr.map(p => Vectors.dense(p))
```

which requires the type to be integer
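The error comes from calling `.toInt` on the whole `Array[String]` instead of on each field; moving the conversion inside the split fixes it. Spark isn't available in this sketch, so the same transformation is applied to a plain list of CSV lines (the sample data is made up):

```scala
object CsvDemo {
  // Stand-in for the RDD of lines produced by sc.textFile(...).
  val lines: List[String] = List("1,2,3", "4,5,6")

  // .toInt must be mapped over the fields, not called on the Array[String]:
  val ints: List[Array[Int]] = lines.map(_.split(",").map(_.toInt))
}
```

On an RDD the fix is identical: `sc.textFile(path).map(_.split(",").map(_.toInt))`. Note that `Vectors.dense` takes `Array[Double]`, so `.map(_.toDouble)` is likely what is actually needed downstream.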

how to add custom ValidationError in Json Reads in PlayFramework

放肆的年华 · submitted on 2021-02-08 05:35:10

Question: I am using the Play Reads validation helpers. I want to show a custom message in case of a JSON exception, e.g. the length is less than specified, or the given email is not valid. I know Play displays an error key like `error.minLength`, but I want to display a reasonable message like "please enter more than 1 character" (or something similar). Here is my code:

```scala
case class DirectUserSignUpValidation(firstName: String,
                                      lastName: String,
                                      email: String,
                                      password: String) extends Serializable

object
```
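In Play itself the usual tool is `Reads.filter(JsonValidationError("your message"))(predicate)`, which attaches your own text in place of the default `error.minLength` key. Play isn't available in this sketch, so the idea is mimicked with a plain `Either`-based validator (all names here are made up for illustration):

```scala
object ValidationDemo {
  // A "reads" is just a function from raw input to either an error message or a value.
  type Reads[A] = String => Either[String, A]

  // Attach a custom message to a predicate, analogous to
  // Reads.filter(JsonValidationError(msg))(predicate) in Play.
  def withMessage(msg: String)(predicate: String => Boolean): Reads[String] =
    s => if (predicate(s)) Right(s) else Left(msg)

  val firstName: Reads[String] =
    withMessage("please enter more than 1 character")(_.length > 1)
}
```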

In DataFrame.withColumn, how can I check if the column's value is null as a condition for the second parameter?

六眼飞鱼酱① · submitted on 2021-02-08 04:59:26

Question: If I have a DataFrame called df that looks like:

```
+----+----+
|  a1|  a2|
+----+----+
| foo| bar|
| N/A| baz|
|null| etc|
+----+----+
```

I can selectively replace values like so:

```scala
val df2 = df.withColumn("a1", when($"a1" === "N/A", $"a2"))
```

so that df2 looks like:

```
+----+----+
|  a1|  a2|
+----+----+
| foo| bar|
| baz| baz|
|null| etc|
+----+----+
```

but why can't I check if it's null, like:

```scala
val df3 = df2.withColumn("a1", when($"a1" === null, $"a2"))
```

so that I get:

```
+----+----+
|  a1|  a2|
+----+----+
| foo|
```
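In Spark, `$"a1" === null` never matches because SQL comparisons with null evaluate to null (treated as false); the column-level test is `$"a1".isNull` (or the null-safe `<=>` operator). Spark isn't available in this sketch, so the equivalent replacement is shown row-by-row on plain Scala data mirroring the question's table:

```scala
object NullReplaceDemo {
  // Rows of (a1, a2); null stands in for a missing a1 value.
  val rows: List[(String, String)] =
    List(("foo", "bar"), (null, "baz"), (null, "etc"))

  // Plain-Scala analogue of when($"a1".isNull, $"a2").otherwise($"a1"):
  val replaced: List[(String, String)] =
    rows.map { case (a1, a2) => (if (a1 == null) a2 else a1, a2) }
}
```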

Scala script wait for mongo to complete task

ぐ巨炮叔叔 · submitted on 2021-02-08 04:58:07

Question: I'm writing a simple Scala-based script which is supposed to insert some data into a Mongo collection. The problem is that the script exits before Mongo finishes its task. What is the idiomatic/best approach to deal with the problem, considering the following script:

```scala
#!/usr/bin/env scalas
/***
scalaVersion := "2.12.2"

libraryDependencies ++= {
  Seq(
    "org.mongodb.scala" %% "mongo-scala-driver" % "2.1.0"
  )
}
*/
import org.mongodb.scala._

val mongoClient: MongoClient = MongoClient("mongodb://localhost")
val
```
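The mongo-scala-driver is fully asynchronous: an insert returns an `Observable` that does its work in the background, and a script's main thread won't wait for it. The usual fix is to convert the result with `.toFuture()` and block with `Await.result` before the script ends. The driver isn't available in this sketch, so a plain `Future` simulates the pending insert:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object MongoScriptDemo {
  // Stand-in for collection.insertOne(doc).toFuture()
  def insertOne(): Future[String] = Future { "inserted" }

  // Block until the async work completes so the script doesn't exit early.
  val result: String = Await.result(insertOne(), 10.seconds)
}
```

Blocking is acceptable here precisely because this is a one-shot script; in a long-running service one would compose the futures instead.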

Scala: SeqT monad transformer?

廉价感情. · submitted on 2021-02-08 04:48:09

Question: If we have two functions like these...

```scala
def findUserById(id: Long): Future[Option[User]] = ???
def findAddressByUser(user: User): Future[Option[Address]] = ???
```

...then we are able to use the cats OptionT monad transformer to write a for-comprehension with them easily:

```scala
for {
  user    <- OptionT(findUserById(id))
  address <- OptionT(findAddressByUser(user))
} ...
```

I'd like to compose futures of sequences the same way, like this:

```scala
def findUsersBySomeField(value: FieldValue): Future[Seq[User]] = ???
def
```
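Cats deliberately ships no `SeqT`/`ListT` (such a transformer is not a lawful monad over an arbitrary base), but this particular pipeline composes fine with plain `flatMap` and `Future.traverse`. A runnable sketch with dummy data; the simplified `User`/`Address` types and the method bodies are my assumptions:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object SeqComposeDemo {
  case class User(id: Long)
  case class Address(street: String)

  def findUsersBySomeField(value: String): Future[Seq[User]] =
    Future.successful(Seq(User(1), User(2)))

  def findAddressesByUser(user: User): Future[Seq[Address]] =
    Future.successful(Seq(Address(s"street-${user.id}")))

  // What the hoped-for SeqT for-comprehension would desugar to:
  val addresses: Future[Seq[Address]] =
    findUsersBySomeField("x").flatMap { users =>
      Future.traverse(users)(findAddressesByUser).map(_.flatten)
    }
}
```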
