scala

Wrap function implementations returning a specific type into another function programmatically

Submitted by ℡╲_俬逩灬. on 2020-12-29 08:11:29
Question: I would like to wrap all the user-defined functions in a Scala project that return a certain type T into a function that accepts a T and the function name as parameters. E.g., given this function is in scope:

    import scala.util.{Try, Success, Failure}

    def withMetrics[T](functionName: String)(f: => Try[T]): Try[T] = {
      f match {
        case _: Success[T] => println(s"send metric: success for $functionName")
        case _: Failure[T] => println(s"send metric: failure for $functionName")
      }
      f
    }

the user can send metrics for their functions which return
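For context, a minimal sketch (not from the question) of applying the wrapper manually at a single call site, which is presumably the repetition the asker wants to avoid; loadUser is a hypothetical function name:

    import scala.util.Try

    def loadUser(id: Int): Try[String] =
      withMetrics("loadUser") {                       // hypothetical call site
        Try(if (id > 0) s"user-$id" else sys.error(s"bad id: $id"))
      }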

Scala 3 - Extract Tuple of wrappers and InverseMap on First Order Type

Submitted by 点点圈 on 2020-12-29 07:59:59
Question: I am trying to create a function which takes a tuple of higher-kinded types and applies a function to the types within the higher-kinded types. In the example below there is a trait Get[A], which is our higher-kinded type. There is also a tuple of Gets, (Get[String], Get[Int]), as well as a function from (String, Int) => Person. Scala 3 has a match type called InverseMap which converts the type (Get[String], Get[Int]) into what is essentially the type (String, Int). So the ultimate goal is to
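As background, a minimal sketch (assuming Scala 3's standard Tuple.InverseMap match type) showing how the wrapper is stripped from every element type; the Get trait and the tuple shape are taken from the question:

    trait Get[A] { def get: A }

    // Tuple.InverseMap[(Get[String], Get[Int]), Get] reduces to (String, Int),
    // so this evidence value compiles only if the two types are in fact equal.
    val ev = summon[Tuple.InverseMap[(Get[String], Get[Int]), Get] =:= (String, Int)]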

Mixin to wrap every method of a Scala trait

Submitted by 时光总嘲笑我的痴心妄想 on 2020-12-29 06:58:07
Question: Suppose I have a trait Foo with several methods. I want to create a new trait which extends Foo but "wraps" each method call, for example with some print statement (in reality this will be something more complicated; I have a couple of distinct use cases in mind).

    trait Foo {
      def bar(x: Int) = 2 * x
      def baz(y: Int) = 3 * y
    }

I can do this manually, by overriding each method. But this seems unnecessarily verbose (and all too easy to call the wrong super method):

    object FooWrapped extends
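For illustration, a minimal sketch of the manual per-method overriding the question calls verbose; the trait name LoggedFoo and the print messages are hypothetical, and Foo is the trait defined in the question:

    trait LoggedFoo extends Foo {
      override def bar(x: Int) = { println("calling bar"); super.bar(x) }
      override def baz(y: Int) = { println("calling baz"); super.baz(y) }
    }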

mapping over HList inside a function

Submitted by 我的梦境 on 2020-12-29 06:35:31
Question: The following code seems obvious enough to compile and run:

    import shapeless._

    case class Pair(a: String, b: Int)
    val pairGen = Generic[Pair]

    object size extends Poly1 {
      implicit def caseInt = at[Int](x => 1)
      implicit def caseString = at[String](_.length)
    }

    def funrun(p: Pair) = {
      val hp: HList = pairGen.to(p)
      hp.map(size)
    }

but the compiler says "could not find implicit value for parameter mapper". In my use case, I want to map over an HList to get an HList of String(s) and then convert the HList of String(s)
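A hedged sketch of one likely fix, reusing Pair, pairGen, and size from the question: annotating hp as plain HList erases the concrete String :: Int :: HNil type, so shapeless cannot resolve a Mapper instance; dropping the annotation keeps the precise type.

    def funrunFixed(p: Pair) = {
      val hp = pairGen.to(p)   // inferred as String :: Int :: HNil, not widened to HList
      hp.map(size)             // a Mapper[size.type, String :: Int :: HNil] can now be resolved
    }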

Spark 2.2.0 - How to write/read DataFrame to DynamoDB

Submitted by 本秂侑毒 on 2020-12-29 06:23:53
Question: I want my Spark application to read a table from DynamoDB, do stuff, then write the result to DynamoDB.

Read the table into a DataFrame

Right now, I can read the table from DynamoDB into Spark as a hadoopRDD and convert it to a DataFrame. However, I had to use a regular expression to extract the value from AttributeValue. Is there a better / more elegant way? I couldn't find anything in the AWS API.

    package main.scala.util

    import org.apache.spark.sql.SparkSession
    import org.apache.spark
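A hedged sketch of the usual alternative to regex-parsing, assuming the emr-dynamodb-hadoop connector is on the classpath; the table name, region, configuration keys, and the id/count attribute names are illustrative and may differ by connector version:

    import org.apache.hadoop.dynamodb.DynamoDBItemWritable
    import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
    import org.apache.hadoop.io.Text
    import org.apache.hadoop.mapred.JobConf
    import org.apache.spark.sql.SparkSession

    object DynamoDBReadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("ddb-read-sketch").getOrCreate()

        val jobConf = new JobConf(spark.sparkContext.hadoopConfiguration)
        jobConf.set("dynamodb.input.tableName", "my_table")  // hypothetical table name
        jobConf.set("dynamodb.regionid", "us-east-1")        // hypothetical region

        val rows = spark.sparkContext.hadoopRDD(
          jobConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])

        // Pull typed fields off the AttributeValue map instead of regex-parsing its toString.
        val pairs = rows.map { case (_, item) =>
          val attrs = item.getItem                        // java.util.Map[String, AttributeValue]
          (attrs.get("id").getS, attrs.get("count").getN) // getS / getN return the typed value
        }

        import spark.implicits._
        val df = pairs.toDF("id", "count")
        df.show()
      }
    }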

Spark saveAsTextFile() writes to multiple files instead of one [duplicate]

Submitted by 馋奶兔 on 2020-12-29 04:27:24
Question: This question already has answers here: "how to make saveAsTextFile NOT split output into multiple file?" (9 answers). Closed 4 years ago.

I am using Spark and Scala on my laptop at the moment. When I write an RDD to a file, the output is written to two files, "part-00000" and "part-00001". How can I force Spark / Scala to write to one file? My code is currently:

    myRDD.map(x => x._1 + "," + x._2).saveAsTextFile("/path/to/output")

where I am removing the parentheses to write out key,value
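A minimal sketch of the usual workaround (not from the question itself), reusing the question's myRDD: reduce the RDD to a single partition before saving, so only one part file is produced:

    myRDD
      .map(x => x._1 + "," + x._2)
      .coalesce(1)                       // one partition => a single part-00000 output file
      .saveAsTextFile("/path/to/output")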