scala

Play + Anorm + Postgres - load json value into a case class

不问归期 submitted on 2021-02-11 12:35:58
Question: I am using Anorm to query and save elements in my Postgres database. I have a JSON column that I want to read as a class of my own. For example, given the following classes:

    case class Table(id: Long, name: String, myJsonColumn: Option[MyClass])
    case class MyClass(site: Option[String], user: Option[String])

I am trying to write the following update:

    DB.withConnection { implicit conn =>
      val updated = SQL(
        """UPDATE employee
          |SET name = {name}, my_json_column = {myClass}
          |WHERE id = {id}
        """ …
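The excerpt cuts off before the parameter bindings, but the usual sticking point is teaching Anorm how to write and read the JSON column. A minimal sketch follows, assuming play-json and the Postgres JDBC driver are on the classpath; the ToStatement and Column instances are one common approach, not the asker's code:

    import java.sql.PreparedStatement

    import anorm._
    import org.postgresql.util.PGobject
    import play.api.libs.json.{Json, OFormat}

    case class MyClass(site: Option[String], user: Option[String])

    object MyClass {
      implicit val format: OFormat[MyClass] = Json.format[MyClass]

      // Bind MyClass as a jsonb parameter when it appears as {myClass}.
      implicit val toStatement: ToStatement[MyClass] = new ToStatement[MyClass] {
        def set(s: PreparedStatement, index: Int, v: MyClass): Unit = {
          val pg = new PGobject()
          pg.setType("jsonb")
          pg.setValue(Json.stringify(Json.toJson(v)))
          s.setObject(index, pg)
        }
      }

      // Parse the jsonb column back into MyClass when reading rows.
      implicit val column: Column[MyClass] = Column.nonNull { (value, _) =>
        value match {
          case pg: PGobject =>
            Json.parse(pg.getValue)
              .validate[MyClass]
              .asEither
              .left.map(errs => TypeDoesNotMatch(errs.toString))
          case other =>
            Left(TypeDoesNotMatch(s"Cannot convert $other to MyClass"))
        }
      }
    }

With these instances in scope, the update can bind the value directly, e.g. .on("name" -> name, "myClass" -> myClass, "id" -> id).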

Calling Scala's method from Java with Scala's collection type

纵然是瞬间 submitted on 2021-02-11 12:33:49
Question: I have Java code calling a Scala method.

Java side:

    List<String> contexts = Arrays.asList(initialContext);
    ContextMessage c = ContextMessage.load(contexts);

Scala side:

    def load(contexts: List[String]) = ...
      contexts foreach context => …

In this case I get a "scala.collection.immutable.List<String> cannot be applied ..." error message. I also need to make the type of contexts as general as possible (i.e., Seq), since the load method iterates over the given collection to process …
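One way to fix both problems at once is sketched below, assuming Scala 2.13 (earlier versions use scala.collection.JavaConverters instead): declare the Scala method against Seq and add a Java-facing overload that converts java.util.List. The println body stands in for the elided processing:

    import scala.jdk.CollectionConverters._

    object ContextMessage {
      // General Scala entry point: Seq accepts List, Vector, etc.
      def load(contexts: Seq[String]): Unit =
        contexts.foreach(context => println(context))

      // Java-facing overload: converts java.util.List to an immutable Seq.
      def load(contexts: java.util.List[String]): Unit =
        load(contexts.asScala.toSeq)
    }

A Java caller can then pass Arrays.asList(...) straight to the overload. (The real load presumably returns a ContextMessage; Unit here is a placeholder.)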

Can shapeless Record type be used as a Poly1?

纵饮孤独 submitted on 2021-02-11 12:22:03
Question: Assume I have the following Record-typed data and an HList of keys:

    val rr = ("a" ->> 1) :: ("b" -> "s") :: ("c" -> 3) :: HNil
    val hh = "c" :: "b" :: HNil

I want to extract the value in rr for each key in hh, then combine them into a type-level object, eventually yielding:

    (3: Int) :: ("s": String) :: HNil

How can this be achieved with the least amount of code? I could obviously write an inductively summoned implicit function, but that seems like overkill.

Answer 1: Firstly, you have typos: ->> …
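For reference, shapeless 2 already ships this induction as ops.record.SelectAll. The sketch below assumes the typos are fixed (every field built with ->>); the Pick helper is a hypothetical convenience for supplying the key types while letting the record type be inferred:

    import shapeless._
    import shapeless.syntax.singleton._
    import shapeless.ops.record.SelectAll

    object RecordPick {
      // Every field must use ->> (record field syntax), not ->.
      val rr = ("a" ->> 1) :: ("b" ->> "s") :: ("c" ->> 3) :: HNil

      // The keys as a type-level HList of singleton string types.
      type Keys = Witness.`"c"`.T :: Witness.`"b"`.T :: HNil

      // Fix the key types first, infer the record type at the call site.
      class Pick[K <: HList] {
        def apply[L <: HList](l: L)(implicit sel: SelectAll[L, K]): sel.Out = sel(l)
      }
      def pick[K <: HList] = new Pick[K]

      val picked = pick[Keys](rr) // 3 :: "s" :: HNil, typed Int :: String :: HNil
    }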

play json in scala: deserializing json with unknown fields without losing them

ⅰ亾dé卋堺 submitted on 2021-02-11 08:35:33
Question: Consider the following JSON:

    {
      "a": "aa",
      "b": "bb",
      "c": "cc",
      "d": "dd",      // unknown in advance
      "e": {          // unknown in advance
        "aa": "aa"
      }
    }

I know for sure that the JSON will contain a, b, and c, but I have no idea what other fields it may contain. I want to deserialize this JSON into a case class containing a, b, and c, but without losing the other fields (save them in a map so the class can be serialized back to the same JSON as received). Ideas?

Answer 1: One option is to capture the …
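A hedged sketch of that idea with play-json: validate the known fields, then stash whatever remains in a JsObject so writing the class back reproduces the original JSON. The Payload name and the known-field set are assumptions:

    import play.api.libs.json._

    // a, b, c are known; every other field is kept verbatim in `extras`.
    case class Payload(a: String, b: String, c: String, extras: JsObject)

    object Payload {
      private val known = Set("a", "b", "c")

      implicit val reads: Reads[Payload] = Reads { js =>
        for {
          obj <- js.validate[JsObject]
          a   <- (obj \ "a").validate[String]
          b   <- (obj \ "b").validate[String]
          c   <- (obj \ "c").validate[String]
        } yield Payload(a, b, c,
          JsObject(obj.fields.filterNot { case (k, _) => known(k) }))
      }

      implicit val writes: OWrites[Payload] = OWrites { p =>
        Json.obj("a" -> p.a, "b" -> p.b, "c" -> p.c) ++ p.extras
      }
    }

Round-tripping with Json.parse(raw).as[Payload] and Json.toJson back should yield the same object, unknown fields included.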

How to load data in weka Instances from a spark dataframe

那年仲夏 submitted on 2021-02-11 08:27:20
Question: I have a Spark DataFrame. Now I want to do some processing using Weka, so I want to load the data from the DataFrame into Weka Instances and finally return the data as a DataFrame. As both the structure and the data types differ, I am wondering whether anybody can help me with the conversion. The code snippet may look like the following:

    val df: DataFrame = data
    val data: Instances = process(df)

Source: https://stackoverflow.com/questions/58160584/how-to-load-data-in-weka-instances-from-a-spark-dataframe
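There is no built-in bridge, so the conversion has to be written by hand. A minimal sketch of the DataFrame-to-Instances direction follows, under two strong assumptions noted in the comments: every column is numeric, and the data fits on the driver:

    import java.util.ArrayList

    import org.apache.spark.sql.{DataFrame, Row}
    import weka.core.{Attribute, DenseInstance, Instances}

    // Assumes every column is DoubleType and the DataFrame fits in driver memory.
    def toInstances(df: DataFrame, relation: String = "spark-df"): Instances = {
      val attributes = new ArrayList[Attribute]()
      df.schema.fieldNames.foreach(n => attributes.add(new Attribute(n))) // numeric

      val rows: Array[Row] = df.collect() // pulls everything to the driver
      val data = new Instances(relation, attributes, rows.length)
      rows.foreach { row =>
        val values = Array.tabulate(row.length)(i => row.getDouble(i))
        data.add(new DenseInstance(1.0, values)) // weight 1.0 per instance
      }
      data
    }

Going back is symmetric: map each weka.core.Instance to a Row and rebuild the DataFrame with spark.createDataFrame.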

Spark-shell : The number of columns doesn't match

岁酱吖の submitted on 2021-02-11 07:44:22
Question: I have a CSV-format file separated by the pipe delimiter "|". The dataset has two columns, like below:

    Column1|Column2
    1|Name_a
    2|Name_b

But sometimes we receive only one column value and the other is missing, like below:

    Column1|Column2
    1|Name_a
    2|Name_b
    3
    4
    5|Name_c
    6
    7|Name_f

Any row with a mismatched column count is a garbage value for us; in the example above that means the rows with column values 3, 4, and 6, and we want to discard these rows. Is there any direct way I can discard those …
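One direct way in spark-shell: read the file with | as the separator, in which case the short rows surface with null in Column2, then drop those nulls. The file path below is a placeholder, not from the question:

    // spark-shell provides `spark`; the path is a placeholder.
    val df = spark.read
      .option("header", "true")
      .option("sep", "|")
      .csv("/tmp/data.csv")

    // Rows that arrived with only one field have Column2 = null; drop them.
    val clean = df.na.drop(Seq("Column2"))
    clean.show()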

ScalaZ3 installation issue

牧云@^-^@ submitted on 2021-02-11 07:01:47
Question: I am trying to install ScalaZ3 following https://github.com/epfl-lara/ScalaZ3. I cloned the code into my directory on Windows, opened a Linux terminal, and ran sbt +package. At first it returned several errors because I am using the Ubuntu Linux subsystem on Windows, so I had to comment out some lines in mk_util.py to make it work. I now encounter another problem: it tries to run a "make" file but cannot find it, even though it has the correct directory path and the file exists in that …
