Microservices client acknowledgement and Event Sourcing

Submitted by 爱⌒轻易说出口 on 2019-12-11 08:41:51

Question


Scenario

I am building a courier service system using microservices. I am not sure about a few things, and here is my scenario:

  1. Booking API - This is where the customer places an order
  2. Payment API - This is where we process the payment against a booking
  3. Notification API - This service is responsible for sending notifications after everything is completed.

The system uses an event-driven architecture. When a customer places a booking order, I commit a local transaction in the Booking API and publish an event. The Payment API and Notification API are subscribed to their respective events. Once done, the Payment and Notification APIs need to acknowledge back to the Booking API.

My questions are:

After publishing the event, my booking service can't block the call and returns to the client (front end). How will my client app check the status of the transaction, or know that the transaction is completed? Does it poll every couple of seconds? Since this is a distributed transaction, any service can go down and fail to acknowledge back. In that case, how would my client (front end) know, since it will keep waiting? I am considering a saga for distributed transactions.

What's the best way to achieve all of this ?

Event Sourcing

I want to implement event sourcing to keep a complete trace of the booking order. Do I have to implement this in my Booking API with an event store? Or is the event store shared between services, since I am supposed to capture all the events from different services? What's the best way to implement this?

Many Thanks,


Answer 1:


The way I visualize this is as follows (influenced by Martin Kleppmann's talks here and here).

  1. The end user places an order. The order is written to a Kafka topic. Since Kafka has log-structured storage, the order details will be saved in the least possible time. It's an atomic operation ('A' in ACID): all or nothing.
  2. Now, as soon as the user places the order, the user would like to read it back (read-your-writes). To achieve this we can write the order data to a distributed cache as well. Although a dual write is not usually a good idea, since it may cause a partial failure (e.g. writing to Kafka succeeds but writing to the cache fails), we can mitigate this risk by ensuring that one of the Kafka consumers writes the data to a database. So, even in the rare scenario of a cache failure, the user can eventually read the data back from the DB.
  3. The status of the order in the cache, as written at the time of order creation, is "in progress".
  4. One or more Kafka consumer groups are then used to handle the events: the payment and notification are handled, and the final status is written back to the cache and database.
  5. A separate Kafka consumer will then receive the responses from the payment and notification APIs and write the updates to the cache, the DB, and a websocket.
  6. The websocket will then update the UI model, and the changes will be reflected in the UI through event sourcing.

Further clarifications based on comment

  1. The basic idea here is that we build a cache using streaming for every service, containing the data it needs. For example, the account service needs feedback from the payment and notification services. Therefore, we have these services write their responses to some Kafka topic, which has consumers that write the responses back to the order service's cache.

  2. Based on the ACID properties of Kafka (or any similar technology), the message will never be lost. Eventually we get all or nothing: that's atomicity. If the order service fails to write the order, an error response is sent back to the client synchronously, and the user can retry after some time. If the order service succeeds, the responses from the other services must eventually flow back to its cache. If one of the services is down for some time, the response will be delayed, but it will be sent eventually when the service resumes.

  3. The clients need not poll. The result will be propagated to them through streaming over a websocket. The UI page listens to the websocket. As the consumer writes the feedback to the cache, it can also write to the websocket, which notifies the UI. Then, if you use something like Angular or ReactJS, the appropriate section of the UI can be refreshed with the value received at the websocket. Until that happens, the user keeps seeing the status "in progress", as written to the cache at the time of order creation. Even if the user refreshes the page, the same status is retrieved from the cache. If the cache value expires under an LRU mechanism, the same value will be fetched from the DB and written back to the cache to serve future requests. Once the feedback from the other services is available, the new result will be streamed over the websocket. On page refresh, the new status will be available from the cache or DB.




Answer 2:


You can pass an identifier back to the client once the booking is completed, and the client can use this identifier to query the status of the subsequent actions, provided you can connect them on the back end. You can also send a notification back to the client when the other events are completed. You can do long polling, or you can do notifications.

Thanks skjagini. Part of my question is how to handle the case where the other microservices don't get back in time, or never do. Let's say the Payment API finished its work and charged the client, but didn't notify my order service in time, or only after a very long time. How does my client wait? If we time out the client, the backend may have processed the request after the timeout.

In CQRS, you separate the commands and the queries. Considering your scenario, you can implement all interactions with queues. (There are multiple implementations of CQRS with event sourcing, but in its simplest form):

Client sends a request --> Payment API receives the request --> validates the request (if validation fails, throws an error back to the user) --> on successful validation --> generates a GUID and writes the message to a queue --> passes the GUID back to the user

Payment API subscribes to the payment queue --> after processing the request --> writes to the order queue or any other queues

Order API subscribes to the order queue and processes the request.

The user has a GUID which can get them the data for all the interactions.

If you use pub/sub as in Kafka instead of point-to-point queues, all the subsequent systems can read from the same topic, so you don't need to write to each queue separately.

If any of the services fails to process, it should be able to pick up where it left off once it is restarted. If a service goes down in the middle of a transaction, then as long as it rolls back its respective changes, your system should remain in a stable condition.




Answer 3:


I'm not 100% sure what you are asking, but it sounds like you should be using a messaging service. As @Saptarshi Basu mentioned, Kafka is good. I would really recommend NATS, although I'm biased because that's the one I work with.

With NATS you can create request-reply messages to interface between the client and the booking service. That's 1-to-1 communication.

If you have multiple instances of each of your services running, you can use the queueing feature to load-balance automatically. NATS will just randomly select a server for you.

And then you can use pub-sub feeds for communication between all of your services.

This will give you a very resilient and scalable architecture, and NATS makes it all incredibly easy



Source: https://stackoverflow.com/questions/54451013/microservices-client-acknowledgement-and-event-sourcing
