akka-stream

How to run an Akka Streams graph on a separate dispatcher with a timeout?

做~自己de王妃 submitted on 2019-12-07 11:58:48
Question: This question is based on a pet project of mine and this SO thread. Inside an Akka HTTP route definition I start a long-running process, and naturally I want to do that without blocking the user. I'm able to achieve this with the code snippet below: blocking-io-dispatcher { type = Dispatcher executor = "thread-pool-executor" thread-pool-executor { fixed-pool-size = 16 } throughput = 1 } complete { Try(new URL(url)) match { case scala.util.Success(u) => { val src = Source.fromIterator(() =>
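A minimal sketch of the pattern the excerpt is reaching for, assuming Akka 2.5.x and that the blocking-io-dispatcher HOCON block above lives in application.conf; blockingWork is a hypothetical stand-in for the asker's URL-reading code and is not from the original post:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.{ActorAttributes, ActorMaterializer}
import akka.stream.scaladsl.{Sink, Source}

object SeparateDispatcherExample extends App {
  implicit val system = ActorSystem("example")   // picks up blocking-io-dispatcher from application.conf
  implicit val mat    = ActorMaterializer()

  // Hypothetical stand-in for the asker's blocking URL-reading step.
  def blockingWork(i: Int): Int = { Thread.sleep(100); i }

  val done = Source(1 to 100)
    .map(blockingWork)
    .withAttributes(ActorAttributes.dispatcher("blocking-io-dispatcher")) // run these stages off the default dispatcher
    .completionTimeout(30.seconds)               // fail the materialized stream if the whole run takes too long
    .runWith(Sink.ignore)

  done.onComplete(_ => system.terminate())(system.dispatcher)
}
```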

How to clean up akka-http websocket resources following disconnection and then retry?

╄→гoц情女王★ submitted on 2019-12-07 10:47:34
Question: The code below successfully establishes a websocket connection. The websocket server (also akka-http) deliberately closes the connection using Andrew's suggested answer here. The SinkActor below receives a message of type akka.actor.Status.Failure, so I know that the flow of messages from server to client has been disrupted. My question is: how should my client reestablish the websocket connection? Has source.via(webSocketFlow).to(sink).run() completed? What is best practice for cleaning up
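The excerpt cuts off before the answer, so here is only a hedged sketch of one common reconnection approach (not necessarily the one Andrew suggested): rebuild the whole client graph inside RestartSource.withBackoff, so a server-side close or failure simply triggers a fresh materialization after a backoff, with nothing from the old connection left to clean up by hand. The URL and the outgoing message are illustrative placeholders (Akka 2.5.x API assumed):

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage, WebSocketRequest}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{RestartSource, Sink, Source}

object ReconnectingWebSocketClient extends App {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  // Every restart builds a brand new webSocketClientFlow, so nothing from the
  // previous (now closed) connection is reused.
  val incoming: Source[Message, _] =
    RestartSource.withBackoff(minBackoff = 1.second, maxBackoff = 30.seconds, randomFactor = 0.2) { () =>
      Source.single(TextMessage("hello"))          // placeholder for whatever the client sends first
        .concat(Source.maybe)                      // keep the outgoing side open
        .via(Http().webSocketClientFlow(WebSocketRequest("ws://localhost:8080/ws")))
    }

  incoming.runWith(Sink.foreach(msg => println(s"received: $msg")))
}
```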

Akka-http: process HttpRequests from different connections in one flow

南笙酒味 submitted on 2019-12-07 10:08:46
Question: The akka-http documentation says: "Apart from regarding a socket bound on the server-side as a Source[IncomingConnection] and each connection as a Source[HttpRequest] with a Sink[HttpResponse]". Assume we get a merged source containing the incoming connections from many Source[IncomingConnection]. Then, assume, we get Source[HttpRequest] from Source[IncomingConnection] (see the code below). Then, no problem, we can provide a flow to convert HttpRequest to HttpResponse. And here is the problem - how
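The question is truncated, but the underlying constraint is that each HttpResponse has to travel back on the connection its HttpRequest arrived on, so a single merged Source[HttpRequest] loses that pairing. Below is a minimal sketch of the conventional alternative, where one shared Flow blueprint is materialized once per connection via handleWith; the handler itself is a trivial illustrative stand-in:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpEntity, HttpRequest, HttpResponse}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink}

object PerConnectionHandler extends App {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  // One shared blueprint; a fresh instance is materialized for every connection,
  // which is what keeps each response paired with the connection it belongs to.
  val handler: Flow[HttpRequest, HttpResponse, _] =
    Flow[HttpRequest].map(request => HttpResponse(entity = HttpEntity(s"You asked for ${request.uri.path}")))

  Http().bind("localhost", 8080).runWith(Sink.foreach { connection =>
    connection.handleWith(handler)
  })
}
```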

How to abruptly stop an Akka Streams RunnableGraph?

你说的曾经没有我的故事 submitted on 2019-12-07 07:11:29
Question: I am not able to figure out how to stop an Akka Streams RunnableGraph immediately. How can I use a KillSwitch to achieve this? I started with Akka Streams only a few days ago. In my case I am reading lines from a file, doing some operations in a flow, and writing to a sink. What I want is to stop reading the file immediately whenever I choose, which should then stop the whole running graph. Any ideas on this would be greatly appreciated. Thanks in advance. Answer 1: Since Akka
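A minimal sketch of the KillSwitch approach the asker mentions, assuming Akka 2.5.x; the file name and the line-splitting stages below are stand-ins for the asker's actual source and flow:

```scala
import java.nio.file.Paths
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, KillSwitches, UniqueKillSwitch}
import akka.stream.scaladsl.{FileIO, Framing, Keep, Sink}
import akka.util.ByteString

object StopGraphImmediately extends App {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  val (killSwitch: UniqueKillSwitch, done) =
    FileIO.fromPath(Paths.get("input.txt"))                   // stand-in for the asker's file
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8192, allowTruncation = true))
      .map(_.utf8String)                                       // "some operations in the flow"
      .viaMat(KillSwitches.single)(Keep.right)                 // expose the switch as the materialized value
      .toMat(Sink.foreach(println))(Keep.both)
      .run()

  // Whenever you decide to stop: completes (or, with abort, fails) the whole graph at once.
  killSwitch.shutdown()
  done.onComplete(_ => system.terminate())(system.dispatcher)
}
```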

Consuming a web socket with akka-stream

笑着哭i submitted on 2019-12-07 03:27:08
Question: Getting started with akka-streams, I want to build a simple example. In Chrome, using a web socket plugin, I can simply connect to a stream like this one https://blockchain.info/api/api_websocket via wss://ws.blockchain.info/inv, and sending the two commands {"op":"ping"} {"op":"unconfirmed_sub"} will stream the results in Chrome's web socket plugin window. I tried to implement the same functionality in akka streams but am facing some problems: the 2 commands are executed, but I actually do not get the
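A hedged sketch of one way to express this with akka-stream: the two commands are sent first and then Source.maybe keeps the client-to-server side open so the server keeps streaming, while handling is reduced to printing, as in the Chrome plugin experiment (Akka 2.5.x API assumed):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage, WebSocketRequest}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink, Source}

object BlockchainWebSocket extends App {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  // Send the two commands, then keep the outgoing side open with Source.maybe
  // so the server keeps pushing results.
  val commands = Source(List(
      TextMessage("""{"op":"ping"}"""),
      TextMessage("""{"op":"unconfirmed_sub"}""")
    )).concat(Source.maybe[Message])

  val printSink: Sink[Message, _] = Sink.foreach {
    case TextMessage.Strict(text) => println(text)
    case other                    => println(s"non-strict message: $other")   // streamed frames need draining in real code
  }

  val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("wss://ws.blockchain.info/inv"))

  commands.viaMat(webSocketFlow)(Keep.right).to(printSink).run()
}
```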

Can the subflows of groupBy depend on the keys they were generated from?

青春壹個敷衍的年華 submitted on 2019-12-07 03:23:51
Question: I have a flow with data associated to users. I also have a state for each user, which I can get asynchronously from a DB. I want to split my flow into one subflow per user, and load the state for each user when materializing the subflow, so that the elements of the subflow can be processed with respect to this state. If I don't want to merge the subflows downstream, I can do something with groupBy and Sink.lazyInit: def getState(userId: UserId): Future[UserState] = ... def getUserId(element:
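A hedged sketch of one way to make a groupBy substream depend on its key without Sink.lazyInit: peek at the first element with prefixAndTail(1), derive the key from it, load the state, and then prepend the element back in front of the tail. The types and the getState/getUserId bodies below are illustrative stand-ins for the asker's definitions (Akka 2.5.x API assumed):

```scala
import scala.concurrent.Future
import akka.NotUsed
import akka.stream.scaladsl.{Flow, Source}

object PerUserState {
  // Illustrative stand-ins for the asker's types and functions.
  type UserId    = String
  type UserState = String
  final case class Element(userId: UserId, payload: String)

  def getState(userId: UserId): Future[UserState] = Future.successful(s"state-of-$userId")
  def getUserId(element: Element): UserId         = element.userId

  val perUserFlow: Flow[Element, (Element, UserState), NotUsed] =
    Flow[Element]
      .groupBy(1024, getUserId)                            // maxSubstreams = 1024
      .prefixAndTail(1)                                    // grab the first element of each substream
      .flatMapConcat { case (head, tail) =>
        val userId = getUserId(head.head)                  // the key this substream was generated from
        Source.fromFuture(getState(userId)).flatMapConcat { state =>
          (Source(head) ++ tail).map(element => (element, state))
        }
      }
      .mergeSubstreams
}
```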

How to pass results from one source stream to another

半世苍凉 submitted on 2019-12-06 22:04:33
I have a method that processes a Source and returns a Flow. I am trying to modify it but can't seem to return the same thing: Original def originalMethod[as: AS, mat: MAT, ec: EC](checkType: String): Flow[ByteString, MyValidation[MyClass], NotUsed] { collectStuff .map { ts => val errors = MyEngine.checkAll(ts.code) (ts, errors) } .map { x => x._2 .leftMap(xs => { addInformation(x._1, xs.toList) }) .toEither } } I am modifying it by using another source, passing the result of that to the original source, and yet I want to return the same thing: def calculate[T: AS: MAT](source: Source[T, NotUsed]): Future
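The excerpt stops before the asker's intent is fully clear, so the following is only a loosely related sketch of one general pattern: drain the extra Source to a Future once, then use that result inside the Flow that the method still returns. All names and types here are illustrative, not the asker's code:

```scala
import scala.concurrent.{ExecutionContext, Future}
import akka.NotUsed
import akka.stream.Materializer
import akka.stream.scaladsl.{Flow, Sink, Source}

object PrecomputedSideSource {
  // Drain `side` once, then pair every element of the main flow with that result,
  // so the method's public signature can stay a Flow.
  def withPrecomputed[T](side: Source[T, NotUsed])(
      implicit mat: Materializer, ec: ExecutionContext): Flow[String, (String, Seq[T]), NotUsed] = {
    val precomputed: Future[Seq[T]] = side.runWith(Sink.seq)
    Flow[String].mapAsync(1)(s => precomputed.map(xs => (s, xs)))
  }
}
```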

akka-stream + akka-http lifecycle

我们两清 submitted on 2019-12-06 20:47:44
Question: TL;DR: is it better to materialize a stream per request (i.e. use short-lived streams) or to use a single stream materialization across requests, when I have an outgoing HTTP request as part of the stream? Details: I have a typical service that takes an HTTP request, scatters it to several 3rd-party downstream services (not controlled by me) and aggregates the results before sending them back. I'm using akka-http for the client implementation and spray for the server (legacy, will move to akka-http
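A hedged sketch of the long-lived option for the outgoing leg: one queue-fed cachedHostConnectionPool is materialized once at startup, and every request is pushed onto it together with a Promise for its response; the short-lived alternative is simply calling Http().singleRequest inside the route handler. The host name, buffer size and error handling below are illustrative assumptions:

```scala
import scala.concurrent.{Future, Promise}
import scala.util.{Failure, Success}
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Sink, Source}

object LongLivedClientStream {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  // One pool flow, materialized exactly once; it outlives individual HTTP requests.
  private val poolFlow = Http().cachedHostConnectionPool[Promise[HttpResponse]]("example.org")

  private val requestQueue =
    Source.queue[(HttpRequest, Promise[HttpResponse])](256, OverflowStrategy.dropNew)
      .via(poolFlow)
      .to(Sink.foreach {
        case (Success(response), promise) => promise.success(response)
        case (Failure(error), promise)    => promise.failure(error)
      })
      .run()

  // Each incoming request just enqueues work on the long-lived stream.
  def dispatch(request: HttpRequest): Future[HttpResponse] = {
    val promise = Promise[HttpResponse]()
    requestQueue.offer(request -> promise)   // offer result (Enqueued/Dropped) ignored in this sketch
    promise.future
  }
}
```

A caller would then do dispatch(HttpRequest(uri = "/health")) and get a Future[HttpResponse] back, whereas the per-request style has no shared materialization to manage at all.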

Waiting for a client websocket flow to connect before connecting source and sink

老子叫甜甜 submitted on 2019-12-06 16:03:32
I'm using akka-streams to set up a client web socket. I'm trying to encapsulate the setup in a method with the following signature: def createConnectedWebSocket(url: String): Flow[Message, Message, _] It is clear how to create the web socket flow but it is not connected yet: val webSocketFlow: Flow[Message, Message, Future[WebSocketUpgradeResponse]] = Http().webSocketClientFlow(WebSocketRequest(url)) I first want to Await the upgrade response future and then return the socket flow. However, in order to get the future, I have to materialize the flow and for that I have to connect a Source and a
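A hedged sketch of one way to hand back an already-connected flow, roughly along the lines the question describes: materialize the webSocketClientFlow eagerly between a MergeHub and a BroadcastHub, block on the upgrade future, and return a Flow built from the two hub ends. The blocking Await is kept only because the question asks to wait for the upgrade; the names and timeout are illustrative:

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.model.ws.{Message, WebSocketRequest}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{BroadcastHub, Flow, Keep, MergeHub, Sink, Source}

object ConnectedWebSocket {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()

  def createConnectedWebSocket(url: String): Flow[Message, Message, _] = {
    // Materialize the socket now, between two hubs, so the returned Flow is already connected.
    val ((outgoingSink, upgradeFuture), incomingSource) =
      MergeHub.source[Message]
        .viaMat(Http().webSocketClientFlow(WebSocketRequest(url)))(Keep.both)
        .toMat(BroadcastHub.sink[Message])(Keep.both)
        .run()

    val upgrade = Await.result(upgradeFuture, 10.seconds)   // blocking only because the question asks to wait
    require(upgrade.response.status == StatusCodes.SwitchingProtocols,
      s"WebSocket upgrade failed: ${upgrade.response.status}")

    Flow.fromSinkAndSource(outgoingSink, incomingSource)
  }
}
```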

How to do akka-http request-level backpressure?

岁酱吖の submitted on 2019-12-06 12:09:48
In akka-http, you can: Set akka.http.server.max-connections, which will prevent more than that number of connections. Exceeding this limit means clients will get connection timeouts. Set akka.http.server.pipelining-limit, which prevents a given connection from having more than this number of requests outstanding at once. Exceeding this means clients will get socket timeouts. These are both forms of backpressure from the http server to the client, but both are very low level, and only indirectly related to your server's performance. What seems better would be to backpressure at the http level,
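The excerpt ends mid-sentence, but as a hedged sketch of one way to get backpressure at the HTTP level rather than socket timeouts: requests are offered to a bounded queue that drives the actual work, and when the queue is full the route answers 503 Service Unavailable immediately. The payload type, queue size, parallelism and process function are all illustrative assumptions:

```scala
import scala.concurrent.{Future, Promise}
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.stream.scaladsl.{Sink, Source}

object RequestLevelBackpressure extends App {
  implicit val system = ActorSystem()
  implicit val mat    = ActorMaterializer()
  import system.dispatcher

  // Hypothetical stand-in for the server's real per-request work.
  private def process(payload: String): Future[String] = Future(payload.reverse)

  // Bounded queue with bounded parallelism: this is where the backpressure lives.
  private val workQueue =
    Source.queue[(String, Promise[String])](64, OverflowStrategy.dropNew)
      .mapAsync(4) { case (payload, promise) =>
        process(payload).map(result => promise.success(result))
      }
      .to(Sink.ignore)
      .run()

  private val route = post {
    entity(as[String]) { payload =>
      val promise = Promise[String]()
      onSuccess(workQueue.offer(payload -> promise)) {
        case QueueOfferResult.Enqueued =>
          onSuccess(promise.future) { result => complete(result) }
        case _ =>
          complete(StatusCodes.ServiceUnavailable)   // explicit HTTP-level "slow down" instead of a socket timeout
      }
    }
  }

  Http().bindAndHandle(route, "localhost", 8080)
}
```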