scala

How to parse the JSON data using Spark-Scala

Submitted by 偶尔善良 on 2021-01-29 12:50:33
问题 (Question): I have a requirement to parse JSON data into the expected results shown below; currently I can't figure out how to include the signal names (ABS, ADA, ADW) in the Signal column. Any help would be much appreciated. I tried the following, which gives the results shown below, but I also need to include all the signal names in the SIGNAL column, as shown in the expected results: jsonDF.select(explode($"ABS") as "element").withColumn("stime", col("element.E")).withColumn("can_value", col("element.V"))
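
A minimal sketch of one way to carry the signal name along, assuming the JSON has top-level arrays named ABS, ADA and ADW whose elements are structs with fields E and V (as in the attempt above): explode each array separately, tag the rows with lit(name), and union the results.

    import org.apache.spark.sql.functions.{col, explode, lit}

    val signalNames = Seq("ABS", "ADA", "ADW")

    val parsedDF = signalNames
      .map { name =>
        jsonDF
          .select(explode(col(name)).as("element"))   // one row per array element
          .withColumn("Signal", lit(name))            // keep the originating signal name
          .withColumn("stime", col("element.E"))
          .withColumn("can_value", col("element.V"))
          .drop("element")
      }
      .reduce(_ union _)                              // stack the three signals into one DataFrame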

Scala compiler expand types

Submitted by 时光毁灭记忆、已成空白 on 2021-01-29 12:23:07
问题 (Question): Consider this code: trait TypeOr[E, F] { type T } implicit def noneq2[E, F](implicit ev: E =!= F): TypeOr[E, F] = new TypeOr[E, F] { type T = (E, F) } sealed trait Error[+E, +A] case class Err[E, A](e: Error[E, A]) { def combine[B, F](f: A => Error[F, B])(implicit ev: TypeOr[E, F]): Error[ev.T, B] = ??? } val result = Err(null.asInstanceOf[Error[Int, Int]]).combine(_ => null.asInstanceOf[Error[String, String]]) So far so good. From the definitions above, I concluded that the expanded type of
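
The snippet is cut off, but since the question is about what the compiler expands ev.T to, one quick way to inspect the inferred type of result is a small reflection helper (a sketch; revealType is a hypothetical name and assumes scala-reflect is on the classpath):

    import scala.reflect.runtime.universe._

    def revealType[A](a: A)(implicit tag: WeakTypeTag[A]): String = tag.tpe.dealias.toString

    // revealType(result) prints whatever the compiler inferred for Error[ev.T, B] at this call site;
    // compiling with -Xprint:typer shows the same information without adding any code.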

Is there any direct way to copy one s3 directory to another in java or scala?

Submitted by 时间秒杀一切 on 2021-01-29 11:32:33
问题 (Question): I want to archive all the files and subdirectories in an S3 directory to some other S3 location using Java. Is there any direct way to copy one S3 directory to another in Java or Scala? 回答1 (Answer 1): There is no API call that operates on whole directories in Amazon S3. In fact, directories/folders do not exist in Amazon S3. Rather, each object stores the full path in its filename (Key). If you wish to copy multiple objects that have the same prefix in their Key, your code will need to loop through the
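
A sketch of that loop from Scala with the AWS SDK for Java v1 (bucket and prefix names here are placeholders; the v2 SDK has equivalent list/copy calls):

    import com.amazonaws.services.s3.AmazonS3ClientBuilder
    import com.amazonaws.services.s3.model.ListObjectsV2Request
    import scala.collection.JavaConverters._

    val s3 = AmazonS3ClientBuilder.defaultClient()

    def copyPrefix(srcBucket: String, srcPrefix: String, dstBucket: String, dstPrefix: String): Unit = {
      var request = new ListObjectsV2Request().withBucketName(srcBucket).withPrefix(srcPrefix)
      var done = false
      while (!done) {
        val result = s3.listObjectsV2(request)
        result.getObjectSummaries.asScala.foreach { summary =>
          val dstKey = dstPrefix + summary.getKey.stripPrefix(srcPrefix)
          s3.copyObject(srcBucket, summary.getKey, dstBucket, dstKey)   // server-side copy, no download
        }
        if (result.isTruncated) request = request.withContinuationToken(result.getNextContinuationToken)
        else done = true
      }
    }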

How to have multiple traits of same base trait implement the same method

Submitted by 自闭症网瘾萝莉.ら on 2021-01-29 11:21:19
问题 (Question): I have a scenario with multiple traits inheriting from a fixed base trait. The base trait has an abstract method that each trait needs to implement. The class using these traits also needs to implement this method: trait A { def f: Any } trait B1 extends A { val i: Int override def f: Any = println("B1: " + i) } trait B2 extends A { val j: Int override def f: Any = println("B2: " + j) } class C extends A with B1 with B2 { val i = 1 val j = 2 override def f: Any = { super.f println("C::f") } } Then
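
If the goal is for every mixed-in trait's f to run, one common pattern is stackable traits: give the base trait a concrete (possibly empty) f, mark the trait overrides as abstract override, and have each call super.f. A sketch of that variation (note it changes the question's A.f from abstract to a no-op):

    trait A { def f: Any = () }                 // concrete no-op so the super chain terminates

    trait B1 extends A {
      val i: Int
      abstract override def f: Any = { super.f; println("B1: " + i) }
    }

    trait B2 extends A {
      val j: Int
      abstract override def f: Any = { super.f; println("B2: " + j) }
    }

    class C extends A with B1 with B2 {
      val i = 1
      val j = 2
      override def f: Any = { super.f; println("C::f") }
    }

    // new C().f follows the linearization C -> B2 -> B1 -> A and prints "B1: 1", "B2: 2", "C::f"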

Implementing a typeclass using type parameters versus abstract types

Submitted by 假如想象 on 2021-01-29 10:51:22
问题 (Question): Following on from "Witness that an abstract type implements a typeclass", I've tried to compare these two approaches side by side in the code snippet below: // We want both ParamaterizedTC and WithAbstractTC (below) to check that // their B parameter implements AddQuotes abstract class AddQuotes[A] { def inQuotes(self: A): String = s"${self.toString}" } implicit val intAddQuotes = new AddQuotes[Int] {} abstract class ParamaterizedTC[A, _B](implicit ev: AddQuotes[_B]) { type B = _B def getB(self:
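
The snippet is cut off before the abstract-type version; a plausible counterpart (a sketch, not the original poster's WithAbstractTC) keeps B as a type member and demands the AddQuotes evidence as an abstract val:

    abstract class WithAbstractTC[A] {
      type B
      implicit val ev: AddQuotes[B]                  // evidence that the member type B has an AddQuotes instance
      def getB(self: A): B
      def quoted(self: A): String = ev.inQuotes(getB(self))
    }

    val listIntInstance: WithAbstractTC[List[Int]] = new WithAbstractTC[List[Int]] {
      type B = Int
      implicit val ev: AddQuotes[Int] = intAddQuotes
      def getB(self: List[Int]): Int = self.head
    }

    // listIntInstance.quoted(List(1, 2, 3)) == "1"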

How to select the N highest values for each category in spark scala

Submitted by 感情迁移 on 2021-01-29 10:18:56
问题 (Question): Say I have this dataset: val main_df = Seq(("yankees-mets",8,20),("yankees-redsox",4,14),("yankees-mets",6,17), ("yankees-redsox",2,10),("yankees-mets",5,17),("yankees-redsox",5,10)).toDF("teams","homeruns","hits") which looks like this: I want to pivot on the teams column, and for all the other columns return the 2 (or N) highest values for that column. So for yankees-mets and homeruns it would return this, since the 2 highest homerun totals for them were 8 and 6. How would I do this in
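
A sketch of one way to do it for a single stat column (homeruns) with N = 2: rank rows within each team using a window, keep the top N, then pivot the teams into columns.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, first, row_number}

    val n = 2
    val byTeamDesc = Window.partitionBy("teams").orderBy(col("homeruns").desc)

    val topHomeruns = main_df
      .withColumn("rank", row_number().over(byTeamDesc))   // 1 = highest homeruns for that team
      .filter(col("rank") <= n)
      .groupBy("rank")
      .pivot("teams")
      .agg(first("homeruns"))
      .orderBy("rank")

    // topHomeruns ends up with one column per team ("yankees-mets", "yankees-redsox")
    // holding that team's two highest homerun totals; repeat per stat column as needed.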

How to take advantage of mkNumericOps in Scala?

Submitted by 不打扰是莪最后的温柔 on 2021-01-29 09:51:50
问题 (Question): I'm trying to define a new type that can behave essentially like a number (for concreteness, let's say a Double). I'd like to overload operators on this type, and I could do this explicitly, but to avoid repetition I would like to take advantage of the methods in NumericOps, which are defined in terms of the abstract methods in Numeric. My understanding is that I should be able to just override the methods in Numeric and get the others for free. Here's the simplest attempt I can think of: class
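
A minimal sketch of that idea, using a hypothetical wrapper Money(value: Double): implement the abstract methods of Numeric once, then import Numeric.Implicits._ so mkNumericOps supplies +, -, * and friends.

    case class Money(value: Double)

    implicit val moneyNumeric: Numeric[Money] = new Numeric[Money] {
      def plus(x: Money, y: Money): Money  = Money(x.value + y.value)
      def minus(x: Money, y: Money): Money = Money(x.value - y.value)
      def times(x: Money, y: Money): Money = Money(x.value * y.value)
      def negate(x: Money): Money          = Money(-x.value)
      def fromInt(x: Int): Money           = Money(x.toDouble)
      def toInt(x: Money): Int             = x.value.toInt
      def toLong(x: Money): Long           = x.value.toLong
      def toFloat(x: Money): Float         = x.value.toFloat
      def toDouble(x: Money): Double       = x.value
      def compare(x: Money, y: Money): Int = x.value.compare(y.value)
      def parseString(str: String): Option[Money] =            // required on Scala 2.13+
        scala.util.Try(Money(str.toDouble)).toOption
    }

    import Numeric.Implicits._                                  // brings in mkNumericOps
    val total = Money(1.5) + Money(2.5)                         // Money(4.0)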

Type erasure problem in method overloading

Submitted by 走远了吗. on 2021-01-29 09:04:27
问题 (Question): I have two overloaded methods with the following signatures: def fun(x: Seq[String]): Future[Seq[Int]] = ??? def fun(x: Seq[(String, String)]): Future[Seq[Int]] = ??? Due to type erasure, these methods can't be overloaded, hence the compilation error. I tried using TypeTags as a workaround: def fun[T: TypeTag](values: Seq[T]): Future[Seq[Int]] = { typeOf[T] match { case t if t =:= typeOf[String] => ??? case t if t =:= typeOf[(String, String)] => ??? case _ => ??? // Should not come here
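
Besides TypeTags, one lightweight workaround (a sketch; the bodies are placeholders) is to give one overload an extra implicit DummyImplicit parameter, so the two signatures no longer erase to the same thing:

    import scala.concurrent.Future

    def fun(x: Seq[String]): Future[Seq[Int]] =
      Future.successful(x.map(_.length))                                  // placeholder body

    def fun(x: Seq[(String, String)])(implicit d: DummyImplicit): Future[Seq[Int]] =
      Future.successful(x.map { case (a, b) => a.length + b.length })     // placeholder body

    // Both calls now resolve statically:
    // fun(Seq("a", "bc"));  fun(Seq(("a", "bc")))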

Scala — Conditional replace column value of a data frame

Submitted by 蹲街弑〆低调 on 2021-01-29 08:43:08
问题 (Question): DataFrame 1 is what I have now, and I want to write a Scala function to make DataFrame 1 look like DataFrame 2. Transfer is the big category; e-Transfer and IMT are subcategories. The logic is: for the same ID (31898), if both Transfer and e-Transfer are tagged to it, it should be only e-Transfer; if Transfer, IMT, and e-Transfer are all tagged to the same ID (32614), it should be e-Transfer + IMT; if only Transfer is tagged to an ID (33987), it should be Other; if only e-Transfer or IMT is tagged to a
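
A sketch of that logic (the DataFrame df1 and the column names ID / category are assumptions based on the description, and the last rule is cut off above, so the final branches are a guess): collect the set of tags per ID, then derive a single label with when/otherwise.

    import org.apache.spark.sql.functions.{array_contains, col, collect_set, when}

    val relabeled = df1
      .groupBy("ID")
      .agg(collect_set("category").as("cats"))
      .withColumn("category",
        when(array_contains(col("cats"), "e-Transfer") && array_contains(col("cats"), "IMT"), "e-Transfer + IMT")
          .when(array_contains(col("cats"), "e-Transfer"), "e-Transfer")
          .when(array_contains(col("cats"), "IMT"), "IMT")
          .otherwise("Other"))                       // only plain Transfer left -> Other
      .drop("cats")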

Cannot break out of foreach loop in Scala

Submitted by 依然范特西╮ on 2021-01-29 08:22:42
问题 (Question): I have the code below:

    .foreach("${plist}", "newshole") {
      exec(
        http("get the user id")
          .get("${newshole}/jcr:content.1.json")
          .headers(headers_2)
          .check(bodyString.saveAs("Res1"))
      )
      exec(session => {
        var mynewshole = session("Res1").as[String]
        if (!mynewshole.contains("testingInProgress")) {
          println("Doesn't contain: " + mynewshole)
          (http("post the user id")
            .post("${newshole}/jcr:content")
            .headers(headers_2)
            .formParam("testingInProgress", session.userId))
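
Gatling's foreach has no break, and an http(...) request built inside exec(session => ...) is never actually executed. A common rework (a sketch assuming the Gatling Scala DSL; "${userId}" is a placeholder for however the user id is stored in the session) guards the POST with doIf on a session condition, so iterations where the check fails become no-ops; stopping the whole loop early would instead need a loop such as asLongAs keyed on a session flag.

    foreach("${plist}", "newshole") {
      exec(
        http("get the user id")
          .get("${newshole}/jcr:content.1.json")
          .headers(headers_2)
          .check(bodyString.saveAs("Res1"))
      ).doIf(session => !session("Res1").as[String].contains("testingInProgress")) {
        exec(
          http("post the user id")
            .post("${newshole}/jcr:content")
            .headers(headers_2)
            .formParam("testingInProgress", "${userId}")   // placeholder: the original used session.userId
        )
      }
    }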