project-reactor

Project Reactor: How to control Flux emission

Submitted by 眉间皱痕 on 2021-02-07 10:19:45

Question: I have a Flux that emits some Date . Each Date is mapped to 1024 simulated HTTP requests that I run on some Executor . What I'd like to do is wait for all 1024 HTTP requests to complete before emitting the next Date . Currently, when running, onNext() is called many times and then settles at some steady rate. How can I change this behaviour? P.S. I'm willing to change the architecture, if needed. private void run() throws Exception { Executor executor = Executors
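One way to get "wait for the whole batch" behaviour is concatMap, which subscribes to each inner publisher one at a time, so the next Date is not consumed until the previous batch completes. A minimal sketch, assuming the 1024 requests can be simulated with a blocking callable (simulateHttp is a hypothetical stand-in for the real HTTP call):

```java
import java.util.Date;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BatchPerDate {
    public static void main(String[] args) {
        Flux.just(new Date(0), new Date(1000))
            // concatMap processes one Date's batch to completion before
            // requesting the next Date from upstream.
            .concatMap(date ->
                Flux.range(0, 1024)
                    .flatMap(i -> Mono.fromCallable(() -> simulateHttp(date, i))
                                      .subscribeOn(Schedulers.boundedElastic()),
                             64) // cap in-flight requests within a batch
                    .then(Mono.fromRunnable(() ->
                        System.out.println("batch complete for " + date))))
            .blockLast();
    }

    // Placeholder for the real simulated HTTP request.
    static String simulateHttp(Date date, int i) {
        return date + "-" + i;
    }
}
```

The concurrency argument to the inner flatMap (64 here, an arbitrary choice) only limits parallelism inside a batch; the sequential guarantee between Dates comes from concatMap.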

Reactor Schedulers keep running long after main thread is done? How to deal with this?

Submitted by 痴心易碎 on 2021-02-07 10:17:29

Question: I have a question on how to clean up the Scheduler worker threads while using Reactor 3: Flux.range(1, 10000) .publishOn(Schedulers.newElastic("Y")) .doOnComplete(() -> { // WHAT should one do to ensure the worker threads are cleaned up logger.info("Shut down all Scheduler worker threads"); }) .subscribe(x -> logger.debug(x + "**")); What I see when I execute the above code is that once the main thread has finished running, the worker thread(s) is/are still in WAITING state for some time. sun
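One sketch of an answer: keep a reference to the Scheduler and call dispose() on it once the pipeline is finished, rather than relying on the idle-thread TTL. Note that newElastic is deprecated in recent Reactor releases; newBoundedElastic is used below instead.

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class SchedulerCleanup {
    public static void main(String[] args) {
        // Hold the Scheduler in a variable so it can be disposed explicitly.
        Scheduler scheduler = Schedulers.newBoundedElastic(4, 100_000, "Y");
        try {
            Flux.range(1, 10_000)
                .publishOn(scheduler)
                .doOnComplete(() -> System.out.println("sequence complete"))
                .blockLast();
        } finally {
            // Without this, idle workers linger until the scheduler's TTL
            // expires; dispose() shuts them down immediately.
            scheduler.dispose();
        }
        System.out.println("disposed: " + scheduler.isDisposed());
    }
}
```

With subscribe() instead of blockLast(), the dispose() call would belong in doFinally(), since subscribe() returns before the sequence completes.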

Create an entity from 3 different Monos

Submitted by 泄露秘密 on 2021-02-05 06:59:06

Question: I am new to reactive programming. I saw that two Monos can be zipped to generate a result: Mono<Info> info = Mono.just(id).map(this::getInfo).subscribeOn(Schedulers.parallel()); Mono<List<Detail>> detail = Mono.just(petitionRequest).map(this.service::getDetails) .subscribeOn(Schedulers.parallel()); Flux<Generated> flux = Flux.zip(detail, info, (p, c) -> { Generated o = Generated.builder().info(c).detail(p).build(); return o; }); As I have understood, this parallelizes the two calls and generate
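The same pattern extends to three sources: Mono.zip has a 3-arity overload that yields a Tuple3, which can be combined in map(). A sketch with placeholder String sources and a hypothetical Entity type (the record syntax needs Java 16+):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class ZipThree {
    record Entity(String info, String detail, String extra) {}

    public static void main(String[] args) {
        // Three independent sources, each subscribed on the parallel scheduler
        // so the underlying calls can run concurrently.
        Mono<String> info   = Mono.fromCallable(() -> "info").subscribeOn(Schedulers.parallel());
        Mono<String> detail = Mono.fromCallable(() -> "detail").subscribeOn(Schedulers.parallel());
        Mono<String> extra  = Mono.fromCallable(() -> "extra").subscribeOn(Schedulers.parallel());

        // zip waits for all three results, then the tuple is mapped to the entity.
        Entity e = Mono.zip(info, detail, extra)
                       .map(t -> new Entity(t.getT1(), t.getT2(), t.getT3()))
                       .block();
        System.out.println(e);
    }
}
```

Since the result is a single combined value, Mono.zip is a more natural fit here than Flux.zip.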

Combine two Stream into one Flux

Submitted by 时光毁灭记忆、已成空白 on 2021-01-29 14:12:45

Question: How can I combine two streams Stream<String> into a Flux? What I understand is that I might need to use the Flux create method for this, but I am not really sure about it: Flux.create(sink -> { sink.onRequest(L -> { for (long l = 0; l < L; l++) { sink.next(..); } }); }) Please help.

Answer 1: Concatenate the Stream s into one and then invoke Flux#fromStream : Flux<String> flux = Flux.fromStream(Stream.concat(stream1, stream2)); Another way of doing this would be to create a Flux using Flux#fromStream
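The answer's one-liner, made runnable. The stream contents here are made-up sample data; no Flux.create is needed for this case:

```java
import java.util.List;
import java.util.stream.Stream;
import reactor.core.publisher.Flux;

public class CombineStreams {
    public static void main(String[] args) {
        Stream<String> stream1 = Stream.of("a", "b");
        Stream<String> stream2 = Stream.of("c", "d");

        // Concatenate first, then wrap the single resulting Stream once.
        Flux<String> flux = Flux.fromStream(Stream.concat(stream1, stream2));

        List<String> out = flux.collectList().block();
        System.out.println(out); // [a, b, c, d]
    }
}
```

One caveat: a java.util.stream.Stream can only be consumed once, so a Flux built this way supports only a single subscriber; use Flux.fromStream(Supplier) if the Flux may be subscribed to more than once.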

Continue consuming subsequent records in reactor kafka after deserialization exception

Submitted by 半腔热情 on 2021-01-28 19:50:12

Question: I am using Reactor Kafka and have a custom AvroDeserializer class for deserialization of messages. Now I have a case where, for certain payloads, the deserialization class throws an exception. My Kafka listener dies as soon as it tries to read such records. I tried handling this exception using onErrorReturn and a combination of ( doOnError and onErrorContinue ); this helped log the exception, but failed to consume subsequent records. public class AvroDeserializer<T extends
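One common workaround is to never let deserialize() throw at all: catch the failure inside the Deserializer and return null (or a wrapper type), so the consumer keeps polling past the bad record instead of the error terminating the reactive pipeline. A sketch of such a wrapper; the delegate stands in for the asker's AvroDeserializer and is an assumption here:

```java
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class SafeDeserializer<T> implements Deserializer<T> {
    private final Deserializer<T> delegate;

    public SafeDeserializer(Deserializer<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        delegate.configure(configs, isKey);
    }

    @Override
    public T deserialize(String topic, byte[] data) {
        try {
            return delegate.deserialize(topic, data);
        } catch (RuntimeException e) {
            // Log and skip: a null value reaches the consumer instead of
            // an error signal, so subsequent records are still consumed.
            System.err.println("Skipping bad record on " + topic + ": " + e);
            return null;
        }
    }
}
```

Downstream, null values can be filtered out of the record Flux; the poison-pill payload could also be forwarded to a dead-letter topic at this point instead of being dropped.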

Stream response from HTTP client with Spring/Project reactor

Submitted by 主宰稳场 on 2021-01-28 12:20:08

Question: How can I stream the response from a reactive HTTP client to the controller without having the whole response body in the application's memory at any time? Practically all examples of Project Reactor clients return Mono<T> . As far as I understand, reactive streams are about streaming, not loading it all and then sending the response. Is it possible to return a kind of Flux<Byte> to make it possible to transfer big files from some external service to the application client without a need of using a huge
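With Spring WebFlux this is typically done with Flux<DataBuffer> rather than Flux<Byte>: WebClient's bodyToFlux(DataBuffer.class) emits the body chunk by chunk, and a controller can return that Flux directly so chunks flow straight through. A sketch; the upstream URL and paths are made-up placeholders:

```java
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

@RestController
public class FileProxyController {
    private final WebClient client = WebClient.create("https://example.org");

    @GetMapping(value = "/file", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
    public Flux<DataBuffer> file() {
        // Each DataBuffer is one chunk of the upstream body; the full
        // file is never assembled in the application's memory.
        return client.get()
                     .uri("/big-file")
                     .retrieve()
                     .bodyToFlux(DataBuffer.class);
    }
}
```

Returning the Flux directly also propagates backpressure: the upstream is only read as fast as the downstream client consumes.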

Automatic rate adjustment in Reactor

Submitted by 你说的曾经没有我的故事 on 2021-01-28 03:24:20

Question: TL;DR: Is there a way to automatically adjust the delay between elements in Project Reactor based on downstream health? More details: I have an application that reads records from a Kafka topic, sends an HTTP request for each one of them, and writes the result to another Kafka topic. Reading and writing from/to Kafka is fast and easy, but the third-party HTTP service is easily overwhelmed, so I use delayElements() with a value from a property file, which means that this value does not change during
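One alternative to a fixed delayElements() is to cap the number of in-flight requests instead of the emission rate: flatMap's concurrency argument is itself a form of backpressure, so when the service slows down, fewer requests per second are issued, with no delay value to tune. A sketch with a simulated service call (callService and its latency are assumptions):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class AdaptiveRate {
    public static void main(String[] args) {
        Flux.range(1, 100)
            // At most 8 requests in flight: the effective rate adapts to
            // the service's response time automatically.
            .flatMap(record -> callService(record), 8)
            .blockLast();
        System.out.println("all records processed");
    }

    // Stand-in for the real HTTP call; latency here is simulated.
    static Mono<String> callService(int record) {
        return Mono.just("resp-" + record).delayElement(Duration.ofMillis(10));
    }
}
```

If record order must be preserved, concatMap (concurrency 1) gives the same self-adjusting behaviour at the cost of parallelism.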

How can I create reactor Flux from a blocking queue?

Submitted by 百般思念 on 2021-01-28 00:30:22

Question: I am trying to implement a Reactor Flux created from a BlockingQueue, but I am not sure which operator is best for my use case. I am creating a streaming REST endpoint, where the response is a Flux that needs to keep emitting messages from a BlockingQueue as a response to a GET REST call. I have already tried forums and documentation and can only find a Flux initiated from iterable collections or reactive data sources, but no examples from any BlockingQueue.

Answer 1: You can try Flux#generate and Queue#peek.
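A sketch of the Flux#generate approach, using take() rather than peek() so each element is consumed exactly once. Since queue.take() blocks, the generator is subscribed on boundedElastic, a scheduler meant for blocking work; the pre-filled queue contents are sample data:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class QueueFlux {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        queue.addAll(List.of("a", "b", "c"));

        // generate() invokes the callback once per downstream request and
        // emits exactly one element per invocation.
        Flux<String> flux = Flux.<String>generate(sink -> {
            try {
                sink.next(queue.take()); // blocks until a message is available
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                sink.complete();
            }
        }).subscribeOn(Schedulers.boundedElastic());

        System.out.println(flux.take(3).collectList().block()); // [a, b, c]
    }
}
```

In a WebFlux endpoint the controller would return this Flux directly; the take(3)/block() here is only to make the demo finite.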