scala

Curried function in scala

Submitted by 亡魂溺海 on 2021-02-07 07:12:17
Question: I have definitions of the following methods: def add1(x: Int, y: Int) = x + y and def add2(x: Int)(y: Int) = x + y . The second is a curried version of the first. If I want to partially apply the second function, I have to write val res2 = add2(2) _ . Everything is fine. Next I want add1 to be curried, so I write val curriedAdd = (add1 _).curried . Am I right that curriedAdd is similar to add2 ? But when I try to partially apply curriedAdd the same way, val resCurried = curriedAdd(4) _ , I get a
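A minimal sketch of the distinction the question is circling: the trailing underscore is needed to lift a *method* with a remaining parameter list into a function value, but curriedAdd(4) is already a function value, so the underscore becomes redundant (and the compiler rejects it). The version below avoids underscore syntax entirely by relying on expected function types, which works in both Scala 2.12+ and Scala 3:

```scala
def add1(x: Int, y: Int) = x + y
def add2(x: Int)(y: Int) = x + y

// With an expected function type, add2(2) eta-expands without the `_`:
val res2: Int => Int = add2(2)

// Eta-expand add1 to a Function2 value, then curry it:
val add1Fn: (Int, Int) => Int = add1
val curriedAdd: Int => Int => Int = add1Fn.curried

// curriedAdd(4) already returns Int => Int, so no trailing underscore:
val resCurried: Int => Int = curriedAdd(4)

println(resCurried(3))  // 7
```

So curriedAdd behaves like add2 when fully applied, but partial application differs: add2 is a method and may need eta-expansion, while curriedAdd is a plain function value.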


How to convert map to dataframe?

Submitted by 冷暖自知 on 2021-02-07 06:52:43
Question: m is a map as follows: scala> m res119: scala.collection.mutable.Map[Any,Any] = Map(A-> 0.11164610291904906, B-> 0.11856755943424617, C -> 0.1023171832681312) I want to get: name score A 0.11164610291904906 B 0.11856755943424617 C 0.1023171832681312 How do I get the final dataframe? Answer 1: First convert it to a Seq , then you can use the toDF() function. val spark = SparkSession.builder.getOrCreate() import spark.implicits._ val m = Map("A"-> 0.11164610291904906, "B"-> 0.11856755943424617, "C" -
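A sketch of the conversion step the answer describes. Spark isn't available here, so only the pure-Scala part is shown: the mutable map is turned into an immutable Seq of typed pairs, which is what Spark's toDF expects (the final toDF call is indicated in a comment and assumes a SparkSession named spark):

```scala
import scala.collection.mutable

// The question's map, typed as [String, Double] rather than [Any, Any]
// so that Spark can later derive a schema from it.
val m = mutable.Map(
  "A" -> 0.11164610291904906,
  "B" -> 0.11856755943424617,
  "C" -> 0.1023171832681312)

// toDF wants an immutable Seq of tuples, not a mutable Map;
// sorting just makes the output deterministic.
val rows: Seq[(String, Double)] = m.toSeq.sortBy(_._1)

println(rows)
// In a Spark application you would then write:
//   import spark.implicits._
//   val df = rows.toDF("name", "score")
```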

How to get creation date of a file using Scala

Submitted by 我只是一个虾纸丫 on 2021-02-07 06:49:13
Question: One of my project's requirements is to check a file's creation date and determine whether it is more than 2 days older than the current day. In Java, there is code like the following, which can get us the file's creation date and other information. Path file = ...; BasicFileAttributes attr = Files.readAttributes(file, BasicFileAttributes.class); System.out.println("creationTime: " + attr.creationTime()); System.out.println("lastAccessTime: " + attr.lastAccessTime()); System.out.println(
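The same java.nio API is directly usable from Scala. A sketch of the age check, with a hypothetical helper olderThan (note that creationTime support depends on the underlying filesystem; some fall back to the last-modified time):

```scala
import java.nio.file.{Files, Paths}
import java.nio.file.attribute.BasicFileAttributes
import java.time.Instant
import java.time.temporal.ChronoUnit

// Hypothetical helper: true when the file was created more than `days` days ago.
def olderThan(pathStr: String, days: Long): Boolean = {
  val attrs   = Files.readAttributes(Paths.get(pathStr), classOf[BasicFileAttributes])
  val created = attrs.creationTime().toInstant
  created.isBefore(Instant.now().minus(days, ChronoUnit.DAYS))
}

// A freshly created temp file cannot be older than 2 days.
val tmp = Files.createTempFile("demo", ".txt")
println(olderThan(tmp.toString, 2))  // false
Files.deleteIfExists(tmp)
```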

Request timeout in Gatling

Submitted by 不打扰是莪最后的温柔 on 2021-02-07 06:45:06
Question: I am using Maven to run my Gatling (Scala) performance test. It gives me a request-timeout issue when I increase users from 100 to 150. If I set the number of users to 300, then I get the following error in the simulation log. // Gatling scenario injection val scn = scenario("UATEnvironmentTest") .exec(http("AdminLoginRequest") .post("/authorization_microservice/oauth/token") .headers(headers_1).body(RawFileBody("Login.txt")) .check(jsonPath("$.access_token") .saveAs("auth_token"))) .pause(2) setUp(scn
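If the server genuinely needs longer than Gatling's default 60-second request timeout under load, the client-side timeout can be raised in gatling.conf. This is a sketch only; the exact key location varies by Gatling version (Gatling 3 uses gatling.http.requestTimeout, older versions nest it under gatling.http.ahc), and note that a timeout appearing only at higher user counts usually points to a server-side bottleneck that a larger timeout merely masks:

```
gatling {
  http {
    # Timeout in milliseconds for a single HTTP request (default 60000)
    requestTimeout = 120000
  }
}
```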

Kadane's Algorithm in Scala

Submitted by 坚强是说给别人听的谎言 on 2021-02-07 05:35:13
Question: Does anyone have a Scala implementation of Kadane's algorithm done in a functional style? Edit Note: The definition on the link has changed in a way that invalidated answers to this question -- which goes to show why questions (and answers) should be self-contained instead of relying on external links. Here's the original definition: In computer science, the maximum subarray problem is the task of finding the contiguous subarray within a one-dimensional array of numbers (containing at least
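One functional formulation, offered as a sketch rather than a canonical answer: scanLeft accumulates the best sum of a subarray *ending* at each position (Kadane's "max ending here"), and the overall answer is the maximum of those running values:

```scala
// Functional Kadane: for each position, the best subarray sum ending there
// is either "extend the previous best" or "start fresh at this element".
def maxSubarraySum(xs: List[Int]): Int = {
  require(xs.nonEmpty, "needs at least one element")
  xs.tail
    .scanLeft(xs.head)((endingHere, x) => math.max(endingHere + x, x))
    .max
}

println(maxSubarraySum(List(-2, 1, -3, 4, -1, 2, 1, -5, 4)))  // 6
```

Because the "start fresh" branch picks the element itself rather than zero, this handles all-negative inputs correctly (the answer is the largest single element), matching the at-least-one-element definition quoted above.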

How to define and use custom annotations in Scala

Submitted by 倖福魔咒の on 2021-02-07 05:30:15
Question: I am trying to use a custom annotation in Scala. In this example, I create a string that I want to annotate with metadata (in this case, another string). Then, given an instance of the data, I want to read the annotation. scala> case class named(name: String) extends scala.annotation.StaticAnnotation defined class named scala> @named("Greeting") val v = "Hello" v: String = Hello scala> def valueToName(x: String): String = ??? valueToName: (x: String)String scala> valueToName(v) // returns
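Two caveats worth noting: a StaticAnnotation annotates a *declaration*, not a runtime value, so valueToName(v) cannot work as written (the annotation is not carried by the string "Hello"); and annotations on a val attach to the generated field/getter symbols, which makes them fiddly to find. A sketch using Scala 2 runtime reflection (scala-reflect on the classpath), reading an annotation from a class where lookup is straightforward; the names named and Payload are illustrative:

```scala
import scala.reflect.runtime.universe._

class named(val name: String) extends scala.annotation.StaticAnnotation

@named("Greeting") class Payload

// The annotation survives in the type signature as a tree: new named("Greeting").
// Pull the string literal back out of that tree.
val name: Option[String] =
  typeOf[Payload].typeSymbol.annotations.headOption.flatMap { ann =>
    ann.tree.children.collectFirst { case Literal(Constant(s: String)) => s }
  }

println(name)  // Some(Greeting)
```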

Scala groupBy for a list

Submitted by 假如想象 on 2021-02-07 05:29:18
Question: I'd like to create a map in which the key is the string and the value is the number of times the string appears in the list. I tried the groupBy method, but have been unsuccessful with that. Answer 1: Required answer scala> val l = List("abc","abc","cbe","cab") l: List[String] = List(abc, abc, cbe, cab) scala> l.groupBy(identity).mapValues(_.size) res91: scala.collection.immutable.Map[String,Int] = Map(cab -> 1, abc -> 2, cbe -> 1) Answer 2: Suppose you have a list as scala> val list = List(
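A note on the accepted pattern: in Scala 2.13+ mapValues moved to a lazy view, so the idiomatic spellings are slightly different from Answer 1's REPL transcript. Both variants below assume Scala 2.13 or later:

```scala
val l = List("abc", "abc", "cbe", "cab")

// groupBy-based count: group equal strings, then take each group's size.
// (In 2.13+, mapValues lives on the view and returns a MapView, hence .toMap.)
val counts1: Map[String, Int] =
  l.groupBy(identity).view.mapValues(_.size).toMap

// One-pass alternative added in 2.13: map each element to 1 and sum per key.
val counts2: Map[String, Int] =
  l.groupMapReduce(identity)(_ => 1)(_ + _)

println(counts1)  // Map with abc -> 2, cbe -> 1, cab -> 1
```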

SLICK 3.0 - multiple queries depending on one another - db.run(action)

Submitted by 守給你的承諾、 on 2021-02-07 05:29:05
Question: I am new to Slick 3, and so far I have understood that db.run is an asynchronous call, and that .map or .flatMap runs once the Future completes. The problem in my code below is that none of the sub-queries work (nested db.run). Conceptually speaking, what am I not getting? Is it valid to write code like this? Basically, in the .map of the first query I perform actions that depend on the first query's result. I see for-comprehensions with yield everywhere; is that the only way to go? Is the problem in my
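Slick itself isn't available to run here, but the underlying issue, sequencing dependent asynchronous steps, can be sketched with plain Futures from the standard library. The two defs below are hypothetical stand-ins for two dependent queries; the point is that a for-comprehension (sugar for flatMap and map) chains the second step after the first instead of nesting callbacks:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical stand-ins for two dependent database queries.
def findUserId(name: String): Future[Int]         = Future(42)
def findOrders(userId: Int): Future[List[String]] = Future(List(s"order-for-$userId"))

// The for-comprehension desugars to flatMap + map: the second "query"
// runs only after the first completes, with its result in scope.
val orders: Future[List[String]] = for {
  id   <- findUserId("alice")
  rows <- findOrders(id)
} yield rows

println(Await.result(orders, 5.seconds))  // List(order-for-42)
```

With Slick specifically, the usual advice is to compose DBIO actions the same way (they also have flatMap) and hand the single combined action to one db.run, rather than issuing a second db.run inside the first one's .map.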