Scala

How do I compute an aggregation inside a GraphStage in Akka Streams?

Posted by 女生的网名这么多〃 on 2021-01-07 01:42:37
Question: I have an operator/component in an Akka stream that is meant to compute a value over a 5-second window, so I built it with TimerGraphStageLogic (the question's own code is truncated in this capture). To test it I created two sources, one that increments and one that decrements, merged them with a Merge shape, ran them through my windowFlowShape, and finally emitted the results into a Sink shape. I know the TimerGraphStageLogic itself works because I tested it in another proof of concept. …
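Since the asker's code does not survive here, below is a minimal sketch of my own (class and timer-key names are assumptions, not from the question) of a TimerGraphStageLogic stage that sums incoming Ints and emits the running total every 5 seconds, assuming Akka 2.6 (older releases use schedulePeriodically instead of scheduleAtFixedRate):

import scala.concurrent.duration._
import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler, TimerGraphStageLogic}

// Sums everything seen in the current window; a timer flushes the total.
final class WindowSum(window: FiniteDuration = 5.seconds) extends GraphStage[FlowShape[Int, Int]] {
  val in: Inlet[Int]   = Inlet("WindowSum.in")
  val out: Outlet[Int] = Outlet("WindowSum.out")
  override val shape: FlowShape[Int, Int] = FlowShape(in, out)

  override def createLogic(attrs: Attributes): GraphStageLogic =
    new TimerGraphStageLogic(shape) with InHandler with OutHandler {
      private var acc = 0

      override def preStart(): Unit = {
        scheduleAtFixedRate("window", window, window) // fire every 5 seconds
        pull(in)                                      // kick off upstream demand
      }

      override def onPush(): Unit = {
        acc += grab(in) // aggregate within the window
        pull(in)
      }

      // Emission is driven by the timer, not by downstream demand.
      override def onPull(): Unit = ()

      override protected def onTimer(timerKey: Any): Unit =
        if (isAvailable(out)) { // only push if downstream has already pulled
          push(out, acc)
          acc = 0
        }

      setHandlers(in, out, this)
    }
}

Such a stage slots between the merged sources and the sink with Flow.fromGraph(new WindowSum()).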

Why is my code not returning anything? (Scala, fs2)

Posted by 99封情书 on 2021-01-07 01:28:15
Question: The program is supposed to push Ints mapped to Doubles onto a queue and report when each element leaves the queue. It shows no error, but it prints nothing. What am I missing?

import cats.effect.{ExitCode, IO, IOApp, Timer}
import fs2._
import fs2.concurrent.Queue
import scala.concurrent.duration._
import scala.util.Random

class Tst(q1: Queue[IO, (Double, IO[Long])])(implicit timer: Timer[IO]) {
  val streamData = Stream.emit(1)
  val scheduledStream = Stream.fixedDelay[IO](10.seconds)
  …
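The snippet is cut off, so the diagnosis below is an assumption rather than a confirmed answer: the most common cause of an fs2 program that compiles but prints nothing is that the streams are only described, never compiled and run from IOApp's run. A minimal runnable sketch on the same stack (fs2 2.x / cats-effect 2, names of my own choosing):

import cats.effect.{ExitCode, IO, IOApp}
import fs2.Stream
import fs2.concurrent.Queue

// A Stream is a description; nothing executes until it is compiled
// into an IO that IOApp actually runs.
object QueueDemo extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    Stream
      .eval(Queue.bounded[IO, Int](10))
      .flatMap { q =>
        val producer = Stream.range(0, 5).covary[IO].through(q.enqueue)
        val consumer = q.dequeue.take(5).evalMap(n => IO(println(s"dequeued $n")))
        consumer.concurrently(producer) // run both; terminate with the consumer
      }
      .compile // turn the description into an IO ...
      .drain   // ... and run it for its effects
      .map(_ => ExitCode.Success)
}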

001: Setting up a Scala Development Environment and a HelloWorld Walkthrough

Posted by 跟風遠走 on 2021-01-06 10:55:23
Let's get straight to it.

1. Scala runs on the JVM, so you need to install a JDK first. Installation guides: Windows: http://blog.csdn.net/wu_huiwen/article/details/5703943 Ubuntu: http://www.cnblogs.com/plinx/archive/2013/06/01/3113106.html Mind which version you install; for learning purposes, go straight to JDK 8.

2. Download and install Scala from http://www.scala-lang.org/download/ and pick the build for your OS. On Windows, run the installer; on Linux and macOS, just unpack the archive.

3. Download Scala IDE, the familiar Eclipse: http://scala-ide.org; it is ready to use as soon as the download finishes.

4. Open it like any other Eclipse, create a new Scala project, then create a new Scala object with the following code:

object HelloScala {
  def main(args: Array[String]): Unit = {
    printf("Hello Scala")
  }
}

Quietly awaiting the world's call.

Source: oschina Link: https://my.oschina.net/u/1011594/blog/510835
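As a side note (my addition, not part of the original post), Scala 2 also lets you skip the explicit main method by extending the App trait, which supplies the entry point for you:

// Equivalent HelloWorld using the App trait: the object body becomes main.
object HelloScala extends App {
  println("Hello Scala")
}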

Scala API for the Delta Lake OPTIMIZE command

Posted by 我只是一个虾纸丫 on 2021-01-06 07:38:54
Question: The Databricks docs say you can Z-order a Delta table by doing:

spark.read.table(connRandom)
  .write.format("delta").saveAsTable(connZorder)

sql(s"OPTIMIZE $connZorder ZORDER BY (src_ip, src_port, dst_ip, dst_port)")

The problem with this is the switching between the Scala and SQL APIs, which is ugly. What I want to be able to do is:

spark.read.table(connRandom)
  .write.format("delta").saveAsTable(connZorder)
  .optimize.zorderBy("src_ip", "src_port", "dst_ip", "dst_port")

but I …
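At the time of this question only the SQL form existed, but newer open-source Delta Lake releases (2.0 and later) do ship a Scala builder for OPTIMIZE. A sketch under that assumption, reusing the question's spark and connZorder names:

import io.delta.tables.DeltaTable

// Delta Lake 2.0+: OPTIMIZE ... ZORDER BY through the Scala API, no SQL
// string needed. Note that saveAsTable returns Unit, so this cannot be
// chained onto the write; it runs as a separate step.
val table = DeltaTable.forName(spark, connZorder)
table.optimize().executeZOrderBy("src_ip", "src_port", "dst_ip", "dst_port")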

Macro expansion contains free variable

Posted by 白昼怎懂夜的黑 on 2021-01-06 07:25:40
Question: My code fails to compile with the following error: Macro expansion contains free term variable Hello ... I have reduced it to a minimal example:

class Hello(val hi: String) {
  val xx = reify(hi)
  var yy = q""
}

def setYYImpl(c: Context)(hExpr: c.Expr[Hello]): c.Expr[Hello] = {
  import c.universe._
  val hello = c.eval(c.Expr[Hello](c.untypecheck(hExpr.tree.duplicate)))
  val xxVal = c.internal.createImporter(u).importTree(hello.xx.tree)
  c.Expr(q"""{ val h = new Hello("HO"); h.yy = $xxVal; h }""") // it should set …
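This error usually means the expansion still references something bound outside the macro's universe: here the tree inside reify(hi) refers to the Hello instance's hi field, and importing that tree leaves the reference dangling as a free term variable. One common workaround, sketched below with a simplified Hello (yy as a String is my assumption, made so the example type-checks on its own), is to splice the evaluated value itself, which Liftable turns into a self-contained literal:

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

class Hello(val hi: String) { var yy: String = "" } // simplified for the sketch

object Macros {
  def setYY(h: Hello): Hello = macro setYYImpl

  def setYYImpl(c: Context)(h: c.Expr[Hello]): c.Expr[Hello] = {
    import c.universe._
    // Evaluate the argument at compile time, then splice the plain value:
    // Liftable[String] produces a literal tree with no free variables.
    val hello = c.eval(c.Expr[Hello](c.untypecheck(h.tree.duplicate)))
    c.Expr[Hello](q"""{ val x = new Hello("HO"); x.yy = ${hello.hi}; x }""")
  }
}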
