reactive-streams

End-to-End Reactive Streaming RESTful service (a.k.a. Back-Pressure over HTTP)

Submitted by 五迷三道 on 2019-12-03 08:36:34
I have been trying to clarify this question online for a while without success, so I will try to ask it here. I would like to find a resource or example that shows how to build an end-to-end, fully back-pressured REST service + client. What I mean is: given a REST client that implements Reactive Streams (whether in Akka, JS, or whatever), I want to have (and be able to "visualise") back-pressure handled throughout a REST server built, e.g., with Akka HTTP. To be clear, I am searching for something like the following talk (but I could not find slides or
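
A minimal sketch of the server half of such a setup, assuming Akka HTTP 10.1.x on Akka 2.5.x (the versions and the /numbers route are assumptions, not from the question). A chunked response entity is backed by an Akka Streams Source, and TCP flow control propagates demand back into that Source, so a slow client suspends the producer:

import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import akka.util.ByteString

object BackpressuredNumbersServer extends App {
  implicit val system: ActorSystem = ActorSystem("demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // An unbounded Source of lines. It only produces elements when the
  // HTTP/TCP layer signals demand, i.e. while the client keeps reading.
  val numbers: Source[ByteString, NotUsed] =
    Source.fromIterator(() => Iterator.from(0)).map(n => ByteString(s"$n\n"))

  val route =
    path("numbers") {
      get {
        // Chunked entity: the response body is streamed, never buffered whole.
        complete(HttpEntity(ContentTypes.`text/plain(UTF-8)`, numbers))
      }
    }

  Http().bindAndHandle(route, "localhost", 8080)
}

To "visualise" the back-pressure end to end, consume the response entity's dataBytes with a deliberately slow client-side sink and watch the server-side Source stop producing.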

How to use Reactive Streams for NIO binary processing?

Submitted by 流过昼夜 on 2019-12-03 03:04:28
Are there any code examples of using org.reactivestreams libraries to process large data streams using Java NIO (for high performance)? I'm aiming at distributed processing, so examples using Akka would be best, but I can figure that out. It still seems to be the case that most (I hope not all) examples of reading files in Scala resort to Source (non-binary) or direct Java NIO (and even things like Files.readAllBytes!). Perhaps there is an Activator template I've missed? ("Akka Streams with Scala!" is close, addressing everything I need except the binary/NIO side.) Do not use scala.collection
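
For the binary/NIO side specifically, Akka Streams provides FileIO, which reads through an NIO channel in fixed-size chunks and emits ByteString elements under back-pressure. A small sketch assuming Akka 2.5.x (the path and chunk size are placeholders):

import java.nio.file.Paths

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.FileIO

import scala.concurrent.Future

object BinaryNioExample extends App {
  implicit val system: ActorSystem = ActorSystem("nio-demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // FileIO.fromPath reads the file with NIO in 64 KiB chunks; each chunk is
  // emitted as an immutable ByteString only when downstream demand allows it.
  val bytesRead: Future[Long] =
    FileIO.fromPath(Paths.get("/tmp/large.bin"), chunkSize = 64 * 1024)
      .runFold(0L)((acc, chunk) => acc + chunk.size)

  bytesRead.foreach { total =>
    println(s"read $total bytes")
    system.terminate()
  }
}

Because the reading stage is an ordinary Source of ByteString, it can be framed, parsed, or shipped across the network with the rest of the Akka Streams / Reactive Streams toolbox.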

ParallelFlux vs flatMap() for a Blocking I/O task

Submitted by 删除回忆录丶 on 2019-11-30 06:50:32
I have a Project Reactor chain which includes a blocking task (a network call; we need to wait for the response). I'd like to run multiple blocking tasks concurrently. It seems like either ParallelFlux or flatMap() could be used. Bare-bones examples:

Flux.just(1)
    .repeat(10)
    .parallel(3)
    .runOn(Schedulers.elastic())
    .doOnNext(i -> blockingTask())
    .sequential()
    .subscribe()

or

Flux.just(1)
    .repeat(10)
    .flatMap(i -> Mono.fromCallable(() -> { blockingTask(); return i; })
        .subscribeOn(Schedulers.elastic()), 3)
    .subscribe();

What are the merits of the two techniques? Is one to be preferred over the other?
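
For reference, a self-contained version of the two variants. Reactor is a Java library, but the sketch below is written in Scala to stay consistent with the other examples on this page; Schedulers.elastic() is kept from the question even though newer Reactor releases prefer boundedElastic(). The practical difference: flatMap's second argument caps how many inner publishers are in flight at once, while parallel(3) splits the stream into three fixed rails.

import java.util.concurrent.Callable
import java.util.function.{Consumer, Function => JFunction}

import org.reactivestreams.Publisher
import reactor.core.publisher.{Flux, Mono}
import reactor.core.scheduler.Schedulers

object BlockingConcurrencyDemo extends App {

  def blockingTask(): Unit = Thread.sleep(200) // stand-in for the network call

  // Wraps one blocking call in a Mono that runs on the elastic scheduler.
  val asyncCall: JFunction[Integer, Publisher[Integer]] = (i: Integer) => {
    val work: Callable[Integer] = () => { blockingTask(); i }
    Mono.fromCallable(work).subscribeOn(Schedulers.elastic())
  }

  // Variant 1: flatMap with a concurrency argument -- at most 3 inner Monos
  // are subscribed at the same time.
  Flux.range(1, 10).flatMap(asyncCall, 3).blockLast()

  // Variant 2: ParallelFlux -- the stream is split into 3 fixed "rails",
  // each running on its own elastic worker, then merged back.
  val onNext: Consumer[Integer] = (i: Integer) => blockingTask()
  Flux.range(1, 10).parallel(3).runOn(Schedulers.elastic()).doOnNext(onNext).sequential().blockLast()
}

blockLast() is used here only so the demo waits for completion; in a real service you would subscribe rather than block.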

Mono vs Flux in Reactive Stream

Submitted by 六眼飞鱼酱① on 2019-11-30 00:42:40
As per the documentation: Flux is a stream which can emit 0..N elements: Flux<String> fl = Flux.just("a", "b", "c"); Mono is a stream of 0..1 elements: Mono<String> mn = Mono.just("hello"); Both are implementations of the Publisher interface from Reactive Streams. Can't we use only Flux in most cases, since it can also emit 0..1 elements and thus satisfies the contract of a Mono? Or are there specific situations where only Mono must be used and Flux cannot handle the operation? Please suggest. In many cases, you are doing some computation or calling a service and you expect
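
A small illustration of the distinction (Scala calling Reactor's Java API; findGreeting and the sample values are made up for the example). Modelling a single-result call as Mono documents the cardinality in the type and unlocks Mono-only operators, while aggregations on a Flux naturally collapse back into a Mono:

import reactor.core.publisher.{Flux, Mono}

object MonoVsFlux extends App {
  // A service call that returns at most one result: Mono says "0..1" in the
  // type and gives single-value operators such as block() and defaultIfEmpty().
  def findGreeting(lang: String): Mono[String] =
    if (lang == "en") Mono.just("hello") else Mono.empty[String]()

  // A query that naturally yields many elements: Flux says "0..N".
  val letters: Flux[String] = Flux.just("a", "b", "c")

  // Aggregating a Flux collapses N elements into one, so count() is a Mono.
  println(letters.count().block())    // 3
  println(findGreeting("en").block()) // hello
}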

How are reactive streams used in Slick for inserting data

Submitted by 吃可爱长大的小学妹 on 2019-11-29 04:01:17
In Slick's documentation, examples of using Reactive Streams are presented only for reading data, by means of a DatabasePublisher. But what happens when you want to use your database as a Sink, with back-pressure based on your insertion rate? I've looked for an equivalent DatabaseSubscriber, but it doesn't exist. So the question is: if I have a Source, say val source = Source(0 to 100), how can I create a Sink with Slick that writes those values into a table with the schema create table NumberTable (value INT)? Serial Inserts: The easiest way would be to do the inserts within a Sink.foreach. Assuming you've
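
A hedged sketch of the end-to-end wiring (the H2 profile, the table mapping and the config key are assumptions added to make it self-contained). The answer's simplest version performs the insert inside Sink.foreach; the variant below uses mapAsync(1) instead, so the stream only demands the next element after the previous insert's Future has completed, which is what gives back-pressure on the insertion rate:

import akka.Done
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import slick.jdbc.H2Profile.api._

import scala.concurrent.Future

object SlickInsertSink extends App {
  implicit val system: ActorSystem = ActorSystem("slick-sink")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // Assumed Slick mapping for: create table NumberTable (value INT)
  class Numbers(tag: Tag) extends Table[Int](tag, "NUMBERTABLE") {
    def value = column[Int]("VALUE")
    def * = value
  }
  val numberTable = TableQuery[Numbers]
  val db = Database.forConfig("numbersDb") // config key is an assumption

  val source = Source(0 to 100)

  // "Serial inserts": one INSERT at a time; the next element is only pulled
  // from the source once the previous insert's Future has completed.
  val done: Future[Done] =
    source
      .mapAsync(parallelism = 1)(i => db.run(numberTable += i))
      .runWith(Sink.ignore)

  done.onComplete(_ => system.terminate())
}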

How to properly call Akka HTTP client for multiple (10k - 100k) requests?

Submitted by 一笑奈何 on 2019-11-28 05:32:16
I'm trying to write a tool for batch data upload using Akka HTTP 2.0-M2, but I'm hitting the error akka.stream.OverflowStrategy$Fail$BufferOverflowException: Exceeded configured max-open-requests value of [32]. I tried to isolate the problem, and here is the sample code, which also fails:

public class TestMaxRequests {
  private static final class Router extends HttpApp {
    @Override
    public Route createRoute() {
      return route(
        path("test").route(
          get(handleWith(ctx -> ctx.complete("OK")))
        )
      );
    }
  }

  public static void main(String[] args) {
    ActorSystem actorSystem = ActorSystem.create();
    Materializer
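
This error typically appears when requests are fired via Http().singleRequest faster than the connection pool can absorb them, overflowing the pool's request queue once max-open-requests (32 by default) is exceeded. The standard remedy is to push the requests through a host-connection-pool Flow so the stream back-pressures while the pool is saturated. A sketch of that pattern, in Scala for consistency with the other examples here (the original code is Java; host, port, path and parallelism below are placeholders):

import akka.Done
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

import scala.concurrent.Future
import scala.util.{Failure, Success}

object PooledRequests extends App {
  implicit val system: ActorSystem = ActorSystem("pool-demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // A connection-pool Flow: it accepts (request, correlationId) pairs and
  // back-pressures upstream while all pooled connections are busy, so the
  // max-open-requests limit is never exceeded.
  val pool = Http().cachedHostConnectionPool[Int]("localhost", 8080)

  Source(1 to 100000)
    .map(i => HttpRequest(uri = "/test") -> i)
    .via(pool)
    .mapAsync(parallelism = 4) {
      case (Success(response), _) =>
        // Always consume or discard the entity, otherwise the pool stalls.
        response.entity.discardBytes().future
      case (Failure(ex), id) =>
        system.log.warning(s"request $id failed: $ex")
        Future.successful(Done)
    }
    .runWith(Sink.ignore)
}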

Akka Streams: What does Mat represent in Source[Out, Mat]?

Submitted by 爱⌒轻易说出口 on 2019-11-27 23:36:43
In Akka Streams, what does Mat in Source[Out, Mat] or Sink[In, Mat] represent? When will it actually be used? The Mat type parameter represents the type of the materialized value of the stream. Remember that in Akka, Source, Flow and Sink (well, all graphs) are just blueprints: they do not do any processing by themselves; they only describe how the stream should be constructed. The process of turning these blueprints into a working stream with live data is called materialization. The core method for materializing a stream is run(), and it is defined on the RunnableGraph class. All
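
A short illustration of the answer so far (Akka 2.5-style API; the names are made up for the example): the Mat slot shows up as NotUsed for a plain source and as Future[Int] for a folding sink, and Keep.right decides which of the two run() hands back.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, RunnableGraph, Sink, Source}

import scala.concurrent.Future

object MaterializedValues extends App {
  implicit val system: ActorSystem = ActorSystem("mat-demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Source[Int, NotUsed]: this source materializes nothing useful (NotUsed).
  val numbers = Source(1 to 100)

  // Sink[Int, Future[Int]]: folding materializes a Future of the final sum.
  val sum = Sink.fold[Int, Int](0)(_ + _)

  // The blueprint's Mat type is chosen when the graph is assembled:
  // Keep.right says "keep the Sink's materialized value".
  val graph: RunnableGraph[Future[Int]] = numbers.toMat(sum)(Keep.right)

  // Only run() turns the blueprint into a live stream and hands us the Mat value.
  val result: Future[Int] = graph.run()

  import system.dispatcher
  result.foreach { total => println(s"sum = $total"); system.terminate() }
}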

Akka-Stream implementation slower than single-threaded implementation

Submitted by 只谈情不闲聊 on 2019-11-27 14:28:05
UPDATE FROM 2015-10-30, based on Roland Kuhn's answer: Akka Streams uses asynchronous message passing between actors to implement stream processing stages. Passing data across an asynchronous boundary has an overhead, which is what you are seeing here: your computation seems to take only about 160 ns (derived from the single-threaded measurement), while the streaming solution takes roughly 1 µs per element, dominated by the message passing. Another misconception is that saying "stream" implies parallelism: in your code all computation runs sequentially in a single actor (the map stage), so no
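
The "no parallelism by default" point can be made concrete with a sketch (Akka 2.5-style API with explicit async boundaries, which is an assumption relative to the 2015 question; compute is a stand-in for the cheap per-element computation):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object SequentialVsPipelined extends App {
  implicit val system: ActorSystem = ActorSystem("overhead-demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  def compute(i: Int): Int = i * i // tiny computation, far cheaper than a message hop

  // Default: the two map stages are fused and process one element after the
  // other, so there is no parallel speed-up; any asynchronous boundary only
  // adds the roughly 1 microsecond message-passing cost per element.
  Source(1 to 1000000).map(compute).map(compute).runWith(Sink.ignore)

  // Pipelined: an explicit .async boundary puts the second map on its own
  // actor, so the two stages work on different elements concurrently. This
  // pays off only when each stage does much more work than the messaging cost.
  Source(1 to 1000000).map(compute).async.map(compute).runWith(Sink.ignore)
}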