scala

Create table from a Slick table definition

冷暖自知 submitted on 2021-01-28 05:24:33
Question: In the play-slick sample there is a file with a sample data access object, https://github.com/playframework/play-slick/blob/master/samples/basic/app/dao/CatDAO.scala, and a table definition:

    private class CatsTable(tag: Tag) extends Table[Cat](tag, "CAT") {
      def name = column[String]("NAME", O.PrimaryKey)
      def color = column[String]("COLOR")
      def * = (name, color) <> (Cat.tupled, Cat.unapply)
    }

Is it possible to generate a new table using this definition without using Play evolutions? If not, why?

Answer 1:
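The answer text is cut off above. For reference, a minimal sketch of how such a table can be created with Slick 3.x's schema DDL, without Play evolutions; the H2 profile, the "mydb" config key, and the standalone App wrapper are assumptions rather than part of the question:

    import slick.jdbc.H2Profile.api._
    import scala.concurrent.Await
    import scala.concurrent.duration._

    case class Cat(name: String, color: String)

    class CatsTable(tag: Tag) extends Table[Cat](tag, "CAT") {
      def name  = column[String]("NAME", O.PrimaryKey)
      def color = column[String]("COLOR")
      def *     = (name, color) <> (Cat.tupled, Cat.unapply)
    }

    object CreateCatTable extends App {
      val cats = TableQuery[CatsTable]
      val db   = Database.forConfig("mydb")                      // hypothetical config key
      try Await.result(db.run(cats.schema.create), 10.seconds)   // issues CREATE TABLE "CAT"
      finally db.close()
    }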

Spark Streaming + Kafka integration: support new topic subscriptions without requiring a restart of the streaming context

核能气质少年 submitted on 2021-01-28 05:16:07
Question: I am using a Spark Streaming application (Spark 2.1) to consume data from Kafka (0.10.1) topics. I want to subscribe to new topics without restarting the streaming context. Is there any way to achieve this? I can see a JIRA ticket in the Apache Spark project for the same (https://issues.apache.org/jira/browse/SPARK-10320). Even though it was closed in the 2.0 version, I couldn't find any documentation or example of how to do this. If any of you are familiar with this, please provide me documentation link or
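One way to avoid restarting the context (not from the question, and worth verifying against your Spark version) is to subscribe by pattern with the spark-streaming-kafka-0-10 integration, so topics created later that match the regex are picked up automatically. A sketch with made-up broker address, group id, and topic pattern:

    import java.util.regex.Pattern

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._

    object PatternSubscribe {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("pattern-subscribe").setMaster("local[*]")
        val ssc  = new StreamingContext(conf, Seconds(10))

        val kafkaParams = Map[String, Object](
          "bootstrap.servers"   -> "localhost:9092",
          "key.deserializer"    -> classOf[StringDeserializer],
          "value.deserializer"  -> classOf[StringDeserializer],
          "group.id"            -> "demo-group",
          // how often the consumer refreshes metadata, i.e. notices new matching topics
          "metadata.max.age.ms" -> "5000"
        )

        // Every topic whose name matches the regex is consumed, including
        // topics created after the streaming context has started.
        val stream = KafkaUtils.createDirectStream[String, String](
          ssc,
          LocationStrategies.PreferConsistent,
          ConsumerStrategies.SubscribePattern[String, String](Pattern.compile("events-.*"), kafkaParams)
        )

        stream.map(record => (record.topic, record.value)).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }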

Sum one column's values if the other columns match

被刻印的时光 ゝ submitted on 2021-01-28 05:13:49
Question: I have a Spark dataframe like this:

    word1 word2 co-occur
    ----- ----- --------
    w1    w2    10
    w2    w1    15
    w2    w3    11

And my expected result is:

    word1 word2 co-occur
    ----- ----- --------
    w1    w2    25
    w2    w3    11

I tried the dataframe's groupBy and aggregate functions but I couldn't come up with the solution.

Answer 1: You need a single column containing both words in sorted order; this column can then be used for the groupBy. You can create a new column with an array containing word1 and word2 as follows: df.withColumn(
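The answer is cut off at df.withColumn(. A minimal sketch of that approach (the app and column names other than those in the question are mine): sort the two words into an array column, group by it, and sum.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object CoOccur {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("co-occur").master("local[*]").getOrCreate()
        import spark.implicits._

        val df = Seq(("w1", "w2", 10), ("w2", "w1", 15), ("w2", "w3", 11))
          .toDF("word1", "word2", "co-occur")

        val result = df
          .withColumn("pair", sort_array(array($"word1", $"word2")))    // order-independent key
          .groupBy($"pair")
          .agg(sum($"co-occur").as("co-occur"))
          .select($"pair"(0).as("word1"), $"pair"(1).as("word2"), $"co-occur")

        result.show()
        spark.stop()
      }
    }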

Generic collection generation with a generic type

别来无恙 submitted on 2021-01-28 04:24:45
Question: Sometimes I find myself wishing Scala collections included some missing functionality, and it's rather easy to "extend" a collection and provide a custom method. This is a bit more difficult when it comes to building the collection from scratch. Consider useful methods such as .iterate. I'll demonstrate the use case with a similar, familiar function: unfold. unfold is a method to construct a collection from an initial state z: S, and a function to generate an optional tuple of the next
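As a point of comparison, here is a sketch of a generic unfold written against Scala 2.13's scala.collection.Factory, which lets the caller choose the resulting collection type; the names are mine and this is one possible encoding, not the only one:

    import scala.annotation.tailrec
    import scala.collection.Factory

    // Builds whichever collection the caller asks for, from a seed and a step function.
    def unfold[A, S, C](z: S)(f: S => Option[(A, S)])(factory: Factory[A, C]): C = {
      val builder = factory.newBuilder
      @tailrec
      def loop(s: S): Unit = f(s) match {
        case Some((a, next)) =>
          builder += a
          loop(next)
        case None => ()
      }
      loop(z)
      builder.result()
    }

    // The target collection is chosen at the call site, e.g. powers of two as a Vector:
    val powersOfTwo = unfold(1)(s => if (s > 64) None else Some((s, 2 * s)))(Vector)
    // Vector(1, 2, 4, 8, 16, 32, 64)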

How to make recursive calls within Behaviors.receive?

纵饮孤独 submitted on 2021-01-28 04:12:50
Question: This code is from the Akka documentation. It implements an actor using the recommended functional style:

    import akka.actor.typed.ActorRef
    import akka.actor.typed.Behavior
    import akka.actor.typed.scaladsl.ActorContext
    import akka.actor.typed.scaladsl.Behaviors

    object Counter {
      sealed trait Command
      case object Increment extends Command
      final case class GetValue(replyTo: ActorRef[Value]) extends Command
      final case class Value(n: Int)

      def apply(): Behavior[Command] = counter(0)

      private def counter(n: Int): Behavior[Command] =
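The snippet above is cut off right at counter. For context, the documented pattern completes it roughly like this (a sketch, not a verbatim quote of the docs), with the "recursion" expressed by returning the next behavior rather than calling the method for its side effect:

    private def counter(n: Int): Behavior[Command] =
      Behaviors.receive { (context, command) =>
        command match {
          case Increment =>
            val newValue = n + 1
            context.log.debug(s"Incremented counter to $newValue")
            counter(newValue)              // return the next behavior, closing over the new state
          case GetValue(replyTo) =>
            replyTo ! Value(n)
            Behaviors.same
        }
      }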

Type aliases screw up type tags?

∥☆過路亽.° submitted on 2021-01-28 03:52:38
Question: Why don't type tags work with type aliases? E.g. given

    trait Foo

    object Bar {
      def apply[A](implicit tpe: reflect.runtime.universe.TypeTag[A]): Bar[A] = ???
    }
    trait Bar[A]

I would like to use an alias within the following method, because I need to type A around two dozen times:

    def test {
      type A = Foo
      implicit val fooTpe = reflect.runtime.universe.typeOf[A] // doesn't work
      Bar[A] // doesn't work
    }

Next try:

    def test {
      type A = Foo
      implicit val fooTpe = reflect.runtime.universe.typeOf[Foo] // ok
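No answer survives the cut-off above. One workaround sketch (my own, not from the thread): keep the alias, but give the implicit an explicit TypeTag type instead of the Type produced by typeOf, and materialize the tag once for the underlying trait.

    import scala.reflect.runtime.universe._

    trait Foo

    trait Bar[A]
    object Bar {
      def apply[A](implicit tpe: TypeTag[A]): Bar[A] = new Bar[A] {}
    }

    // Materialize the tag once, outside the method, so nothing shadows the implicit search.
    val fooTag: TypeTag[Foo] = typeTag[Foo]

    def test(): Bar[Foo] = {
      type A = Foo
      implicit val fooTpe: TypeTag[A] = fooTag   // A is only an alias of Foo, so the types agree
      Bar[A]
    }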

DataFrame using UDF giving "Task not serializable" exception

大憨熊 submitted on 2021-01-28 03:40:03
Question: Trying to use the show() method on a DataFrame, I am getting a "Task not serializable" exception. I have tried to extend Serializable on the object but the error still persists.

    object App extends Serializable {
      def main(args: Array[String]): Unit = {
        Logger.getLogger("org.apache").setLevel(Level.WARN)
        val spark = SparkSession.builder()
          .appName("LearningSpark")
          .master("local[*]")
          .getOrCreate()
        val sc = spark.sparkContext
        val inputPath = "./src/resources/2015-03-01-0.json"
        val ghLog = spark.read
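The question is cut off before the UDF itself, but a frequent cause of this exception is a UDF (or lambda) that closes over a non-serializable field of the enclosing class, such as a SparkContext or a Logger. A sketch of the usual shape of a fix, with made-up column and function names: keep the function the UDF wraps free of references to driver-side objects.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.udf

    object UdfExample {
      // Defined in a top-level object: the closure captures nothing from a driver-side class.
      val normalize: String => String = s => if (s == null) "" else s.trim.toLowerCase

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("UdfExample").master("local[*]").getOrCreate()
        import spark.implicits._

        val df = Seq(("  Alice ", 1), ("BOB", 2)).toDF("name", "id")   // hypothetical data
        val normalizeUdf = udf(normalize)

        df.withColumn("name_clean", normalizeUdf($"name")).show()
        spark.stop()
      }
    }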

Does Dotty support refinements?

狂风中的少年 submitted on 2021-01-28 03:13:30
Question: I am reading, with some dread, about what will come with Scala 3, paying particular attention to the changes to compound types. They were always somewhat of a hack, so clean, true intersection types are certainly an improvement. I couldn't find anything, though, about what happens to the actual refinement part of the compound type. I rely heavily in my current project on strongly interwoven types, in an attempt to have every returned value be as narrow as possible. So, for example, having

    trait Thing { thisThing =
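For what it's worth, refinements do remain in Scala 3 and can be attached to intersection types; a small sketch (Named and label are invented names, only Thing comes from the question):

    // Scala 3
    trait Thing { thisThing =>
      type Content
      def content: Content
    }

    trait Named { def name: String }

    // An intersection type that still carries a refinement of the abstract type member:
    def label(t: (Thing & Named) { type Content = String }): String =
      t.name + ": " + t.content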

Mongo Scala Play - java.lang.NoSuchMethodError: com.mongodb.ConnectionString.getApplicationName()Ljava/lang/String;

可紊 submitted on 2021-01-28 02:12:44
Question: I'm trying to do basic CRUD on a Scala Play + Mongo prototype. The code works as a standalone main method, but when executed as a Play application invoked through a controller, I get runtime exceptions:

    [debug] Running task... Cancel: Null, check cycles: false, forcegc: true
    [info] play.api.Play - Application started (Dev)
    [error] application - ! @7b9n058gm - Internal server error, for (GET) [/mongoTestUserCollection] ->
    play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[
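No answer is preserved above. A NoSuchMethodError on a driver class like ConnectionString usually means two different MongoDB driver versions end up on the application's classpath. A build.sbt sketch of how the versions are commonly pinned and inspected; the version numbers are placeholders, not recommendations:

    // build.sbt (sketch): keep every MongoDB artifact on a single release line
    libraryDependencies ++= Seq(
      "org.mongodb.scala" %% "mongo-scala-driver" % "2.9.0"     // placeholder version
    )

    // Pin the transitive mongodb-driver-core so another dependency cannot drag in
    // an older core where ConnectionString lacks getApplicationName.
    dependencyOverrides ++= Seq(
      "org.mongodb" % "mongodb-driver-core" % "3.12.0"          // placeholder version
    )

    // From the sbt shell:
    //   evicted          -- lists dependency versions that were evicted
    //   dependencyTree   -- full tree (built into sbt 1.4+, otherwise via sbt-dependency-graph)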

Confusing Scala syntax

只愿长相守 submitted on 2021-01-28 02:10:42
Question: I'm trying to understand some Scala syntax, and where to find its spec. Below, I am confused about statefulMapConcat. The signature is this one:

    def statefulMapConcat[T](f: () => Out => immutable.Iterable[T]): Repr[T]

and we have

    "be able to restart" in {
      Source(List(2, 1, 3, 4, 1))
        .statefulMapConcat(() => {
          var prev: Option[Int] = None
          x => {
            if (x % 3 == 0) throw ex
            prev match {
              case Some(e) =>
                prev = Some(x)
                (1 to e).map(_ => x)
              case None =>
                prev = Some(x)
                List.empty[Int]
            }
          }
        })
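The part that usually confuses readers is the shape of the argument, () => Out => immutable.Iterable[T]: it is a factory of per-materialization functions, so each materialization (or restart) of the stream calls the outer () => ... once to get fresh mutable state, and the returned inner function is then applied to every element. A standalone sketch of the same shape without Akka and without the failure/restart part (names are mine):

    // A factory of stateful element-handlers, mirroring statefulMapConcat's argument type.
    val makeHandler: () => Int => List[Int] = () => {
      var prev: Option[Int] = None          // fresh state per handler
      x => {
        val out = prev match {
          case Some(e) => List.fill(e)(x)   // emit x, e times
          case None    => Nil
        }
        prev = Some(x)
        out
      }
    }

    // Each call to makeHandler() produces an independent handler with its own `prev`.
    val handler = makeHandler()
    val emitted = List(2, 1, 3, 4, 1).flatMap(handler)
    // emitted == List(1, 1, 3, 4, 4, 4, 1, 1, 1, 1)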