event-driven-design

Name of “notification-and-check” pubsub architecture?

Submitted by 余生颓废 on 2021-02-17 03:28:26
Question: Basic pubsub architecture question. At a high level, when designing pubsub, I sometimes face a choice between two architectures: publish mutations or publish the "new state". Some DB state is mutated, and publishers notify of that change via pubsub, but they include enough information in the message that the subscriber doesn't need to do a look-up on the DB. Imagine the subscriber has a cache of the DB: it could receive the mutations or the new state and update its cache without doing a look-up.
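
The two shapes the excerpt contrasts are commonly called "event notification" (a thin message; the subscriber checks back with the source of truth) and "event-carried state transfer" (the message carries the resulting state). A minimal Java sketch of the two message styles, with illustrative record and field names:

```java
// "Notification-and-check": the message only says *what* changed; the
// subscriber must query the source of truth (the DB) to learn the new value.
record CustomerChangedNotification(long customerId) {}

// "New-state" (event-carried state transfer): the message carries the resulting
// state, so a subscriber can update its local cache without a DB look-up.
record CustomerChangedEvent(long customerId, String name, String email, long version) {}

class CustomerCache {
    private final java.util.Map<Long, CustomerChangedEvent> byId =
            new java.util.concurrent.ConcurrentHashMap<>();

    // No DB round trip needed: the event already contains everything the cache stores.
    void onEvent(CustomerChangedEvent e) {
        byId.merge(e.customerId(), e,
                (old, fresh) -> fresh.version() > old.version() ? fresh : old);
    }
}
```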

DDD - aggregate root identity usage across bounded context boundaries

Submitted by 谁说胖子不能爱 on 2021-02-09 11:52:43
Question: One suggested way to model entity identities in a domain model is to create value objects instead of using primitive types (e.g. in C#): public class CustomerId { public long Id { get; set; } } In my opinion these classes should be used throughout the whole application and not only in the domain model. Together with commands and events they can define a service contract for a bounded context. Now, in a message/event-driven architecture with multiple bounded contexts, each having a separate …
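
A minimal Java sketch of the idea in the excerpt, with hypothetical type names: the identity is a small immutable value object rather than a bare long, and it appears in the commands and events that form the bounded context's contract, while other contexts reference the aggregate by identity only.

```java
// Identity as an immutable value object instead of a raw primitive.
record CustomerId(long value) {
    CustomerId {
        if (value <= 0) throw new IllegalArgumentException("CustomerId must be positive");
    }
}

// Contract types of a hypothetical Ordering bounded context that refers to a
// Customer aggregate from another context by identity only, never by object.
record PlaceOrderCommand(CustomerId customerId, String sku, int quantity) {}
record OrderPlacedEvent(String orderId, CustomerId customerId) {}
```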

ZeroMQ: can we use the inproc: transport along with the pub/sub messaging pattern?

Submitted by 倾然丶 夕夏残阳落幕 on 2021-02-08 15:14:38
Question: Scenario: We were evaluating ZeroMQ (specifically jeroMq) for an event-driven mechanism. The application is distributed: multiple services (both publishers and subscribers are services) can exist either in the same JVM or on distinct nodes, depending on the deployment architecture. Observation: To play around, I created a pub/sub pattern with inproc: as the transport, using jeroMq (version 0.3.5). The publishing thread is able to publish (it looks like the message is getting published, at …
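
A minimal jeroMq sketch of an inproc pub/sub pair, using the classic socket API. Two details commonly cause "nothing is received" here: both sockets must be created from the same context (inproc endpoints do not cross contexts, and the bind must exist before the connect), and the subscriber must be connected and subscribed before messages are sent, otherwise early messages are silently dropped (the "slow joiner" issue). Endpoint and topic names are illustrative.

```java
import org.zeromq.ZMQ;

public class InprocPubSub {
    public static void main(String[] args) throws InterruptedException {
        ZMQ.Context ctx = ZMQ.context(1);      // a *shared* context is mandatory for inproc

        ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
        pub.bind("inproc://events");           // bind before anyone connects

        ZMQ.Socket sub = ctx.socket(ZMQ.SUB);
        sub.connect("inproc://events");
        sub.subscribe("order".getBytes());     // topic prefix filter

        Thread.sleep(100);                     // crude pause so the subscription is in place

        pub.send("order created #42");
        System.out.println(sub.recvStr());     // prints: order created #42

        sub.close();
        pub.close();
        ctx.term();
    }
}
```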

How to make the Spring Cloud Stream Kafka Streams binder retry processing a message if a failure occurs during the processing step?

Submitted by 依然范特西╮ on 2020-08-10 02:01:07
Question: I am working on Kafka Streams using Spring Cloud Stream. In the message processing application there is a chance that it will produce an error; in that case the message should not be committed and should be retried. My application method: @Bean public Function<KStream<Object, String>, KStream<String, Long>> process() { return (input) -> { KStream<Object, String> kt = input.flatMapValues(v -> Arrays.asList(v.toUpperCase().split("\\W+"))); KGroupedStream<String, String> kgt = kt.map((k, v) -> new …
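
The excerpt's topology is cut off. Below is a hedged sketch of how the word-count style function could be completed, with a hand-rolled bounded retry around a hypothetical failure-prone step. This is not the binder's built-in retry mechanism, just one way to retry inside the processing step; if the retries are exhausted the exception propagates to the streams thread, so the record can be reprocessed after a restart rather than being silently skipped.

```java
import java.util.Arrays;
import java.util.function.Function;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class WordCountProcessor {

    // Completed version of the truncated word-count topology from the excerpt.
    @Bean
    public Function<KStream<Object, String>, KStream<String, Long>> process() {
        return input -> input
                .flatMapValues(v -> Arrays.asList(v.toUpperCase().split("\\W+")))
                // Wrap the step that might fail in a small bounded retry.
                .mapValues(WordCountProcessor::enrichWithRetry)
                .map((k, v) -> new KeyValue<>(v, v))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .count()
                .toStream();
    }

    private static String enrichWithRetry(String word) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return enrich(word);       // hypothetical step that can fail transiently
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last;                        // give up and let the exception propagate
    }

    private static String enrich(String word) { return word; }   // placeholder
}
```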

Are polling and event-driven programming different words for the same technique?

Submitted by 雨燕双飞 on 2020-07-18 15:45:28
Question: I studied interrupts vs. cyclical polling and learnt the advantages of interrupts, which don't have to wait for a poll. Polling seemed to me just like event-driven programming, or at least similar to a listener: what the polling does is actually much like listening to input or output. Do you agree, or did I misunderstand a crucial difference between polling (cyclical listening) and event-driven programming (also listening, with so-called listeners)? Answer 1: Nope, quite the contrary: interrupt …
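
A small illustrative contrast of the two styles in Java (all names are made up for the sketch): a polling loop keeps asking whether anything happened and burns cycles even when nothing changes, while an event-driven listener is only invoked when an event actually arrives.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class PollingVsEvents {

    // Polling: the consumer repeatedly asks the source whether anything happened.
    static void pollLoop(Queue<String> source) throws InterruptedException {
        while (true) {
            String msg = source.poll();
            if (msg != null) {
                System.out.println("handled " + msg);
            }
            Thread.sleep(10);   // poll interval: a latency vs. CPU trade-off
        }
    }

    // Event-driven: the producer pushes to registered listeners; the consumer
    // code runs only when an event is fired.
    interface Listener { void onEvent(String msg); }

    static class EventSource {
        private final List<Listener> listeners = new ArrayList<>();
        void register(Listener l) { listeners.add(l); }
        void fire(String msg)     { listeners.forEach(l -> l.onEvent(msg)); }
    }
}
```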

Keeping services in sync in a Kafka event-driven backbone

Submitted by 断了今生、忘了曾经 on 2020-06-17 02:31:11
Question: Say I am using Kafka as the event-driven backbone for all the microservices in my system design. Many microservices use the event data to populate their internal databases. Now there is a requirement to create a new service that uses some of the event data. The service will only be able to consume events from the time it goes live, and hence will be missing the data from events published before that. I want a strategy such that I don't have to backfill my internal databases by writing out scripts.
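
One common strategy for this situation is to let the new service's fresh consumer group replay the topics from the earliest retained offset, so its internal database is rebuilt from the events themselves rather than from backfill scripts. A hedged plain-Kafka sketch, with hypothetical topic and group names, assuming the topics keep enough history (long retention or log compaction):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebuildFromEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "new-service-v1");     // fresh group => no committed offsets yet
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");  // start from the oldest retained event
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("customer-events"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // apply each historical event to the new service's own store
                    System.out.printf("rebuilding: key=%s value=%s%n", rec.key(), rec.value());
                }
            }
        }
    }
}
```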

Spring Dataflow and GCP Pub/Sub

Submitted by 断了今生、忘了曾经 on 2020-03-04 16:41:22
Question: I'm building an event-driven microservice architecture which is supposed to be cloud agnostic (as much as possible). Since this is initially going into GCP and I don't want to spend a long time on configuration and all that, I was going to use GCP's Pub/Sub directly for the event queue and take care of other cloud implementations later, but then I came across Spring Cloud Dataflow, which seemed nice because these are Spring Boot microservices and I needed a way to orchestrate them. Does …
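
On the cloud-agnostic point, Spring Cloud Stream keeps the business code free of broker specifics: the same Function bean can run against GCP Pub/Sub, Kafka, or RabbitMQ depending on which binder dependency is on the classpath, with destinations supplied via configuration such as spring.cloud.stream.bindings.uppercase-in-0.destination. A minimal sketch with illustrative names:

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class UppercaseProcessor {

    // The function only sees payloads; the binder on the classpath decides
    // which broker (Pub/Sub, Kafka, RabbitMQ, ...) actually carries them.
    @Bean
    public Function<String, String> uppercase() {
        return payload -> payload.toUpperCase();   // no broker-specific code here
    }
}
```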

How can I create an “event-driven” background thread in Java?

Submitted by 走远了吗. on 2020-01-01 11:48:09
Question: I like the simplicity of invokeLater() for sending units of work to the AWT EDT. It would be nice to have a similar mechanism for sending work requests to a background thread (such as a SwingWorker), but as I understand it, these do not have any sort of event queueing and dispatch mechanism, which is what invokeLater() depends on. So instead, I've ended up giving my background thread a blocking queue to which other threads send messages, and the thread essentially runs a receive loop, blocking …
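
A sketch of the pattern the excerpt describes: a background thread that owns a blocking queue and a dispatch loop, with an invokeLater-style method for handing it work. Class and thread names are illustrative; a single-thread java.util.concurrent.ExecutorService gives the same behaviour out of the box, this just makes the queue-and-dispatch loop explicit.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundEventThread {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(this::dispatchLoop, "background-edt");

    public void start() { worker.start(); }

    // Analogue of SwingUtilities.invokeLater(), but targeting this thread.
    public void invokeLater(Runnable task) { queue.add(task); }

    private void dispatchLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                queue.take().run();                  // blocks until a task arrives
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();      // shut down quietly
        }
    }

    public void shutdown() { worker.interrupt(); }
}
```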