Event sourcing with Kafka streams

Submitted by 妖精的绣舞 on 2019-12-03 16:14:01

I don't think Kafka is a good fit for CQRS and Event Sourcing yet, at least not the way you describe it, because it lacks a (simple) way of protecting against concurrent writes. This article discusses the problem in detail.

By "the way you describe it" I mean that you expect a command to generate zero or more events, or to fail with an exception; this is classical CQRS with Event Sourcing, and it is the architecture most people expect.

You could, however, do Event Sourcing in a different style. Your command handlers could yield an event for every command received (e.g. DeleteWasAccepted). An event handler could then eventually process that event in an event-sourced way (rebuilding the aggregate's state from its event stream) and emit further events (e.g. ItemDeleted or ItemDeletionWasRejected). Commands are thus fire-and-forget: they are sent asynchronously, and the client does not wait for an immediate response. Instead, it waits for an event describing the outcome of its command's execution.
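The flow above can be sketched in plain Java, with in-memory lists standing in for the Kafka topics. The event names (DeleteWasAccepted, ItemDeleted, ItemDeletionWasRejected) come from the description above; everything else, including the string-based event encoding, is illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of the fire-and-forget command flow.
// Lists stand in for Kafka topics; events are encoded as "type:itemId".
public class CommandFlowSketch {
    static List<String> commandEvents = new ArrayList<>(); // "accepted" events
    static List<String> outcomeEvents = new ArrayList<>(); // outcome events

    // Command handler: accepts every command and emits an "accepted" event,
    // without validating anything. The client does not wait for an outcome.
    static void handleDeleteCommand(String itemId) {
        commandEvents.add("DeleteWasAccepted:" + itemId);
    }

    // Event handler: rebuilds the aggregate's state from its event stream
    // (here, by scanning the outcome events), then decides the real outcome.
    static void handleAcceptedEvent(String event) {
        String itemId = event.split(":")[1];
        boolean alreadyDeleted = outcomeEvents.contains("ItemDeleted:" + itemId);
        if (alreadyDeleted) {
            outcomeEvents.add("ItemDeletionWasRejected:" + itemId);
        } else {
            outcomeEvents.add("ItemDeleted:" + itemId);
        }
    }

    public static void main(String[] args) {
        handleDeleteCommand("42");                       // client fires and forgets
        handleAcceptedEvent(commandEvents.get(0));       // first delete succeeds
        handleDeleteCommand("42");                       // same item deleted again
        handleAcceptedEvent(commandEvents.get(1));       // second delete is rejected
        System.out.println(outcomeEvents);
    }
}
```

The client would subscribe to the outcome events for its item and react once ItemDeleted or ItemDeletionWasRejected arrives.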

An important aspect is that the event handler must process events from the same aggregate serially (exactly once and in order). This can be implemented using a single Kafka consumer group. You can see this architecture discussed in this video.
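The per-aggregate ordering comes from Kafka's partitioning: all events with the same key land in the same partition, and within a consumer group each partition is read by exactly one consumer, in order. The toy partitioner below illustrates the key-to-partition mapping only; Kafka's real default partitioner uses a murmur2 hash, not `hashCode()`.

```java
// Simplified illustration of why keying events by aggregate ID gives
// per-aggregate ordering: the same key always maps to the same partition,
// and one consumer reads each partition sequentially.
// (Kafka's actual default partitioner uses murmur2, not hashCode.)
public class PartitionSketch {
    static int partitionFor(String aggregateId, int numPartitions) {
        return Math.abs(aggregateId.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-123", 12);
        int p2 = partitionFor("order-123", 12);
        System.out.println(p1 == p2); // same key, same partition, every time
    }
}
```

So as long as you key your events by aggregate ID, events for a given aggregate are never processed concurrently by two consumers in the same group.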

A possible solution I came up with is to implement a sort of optimistic locking mechanism:

  1. Add an expectedVersion field on the commands
  2. Use the KTable Aggregator to increase the version of the aggregate snapshot for each handled event
  3. Reject commands if the expectedVersion doesn't match the aggregate snapshot's version

This seems to provide the semantics I'm looking for.
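The three steps above can be sketched in plain Java. In the real design the versioned snapshot would live in a KTable maintained by an Aggregator; here a plain Map stands in for it, and all names (handleCommand, snapshotVersions) are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory sketch of the optimistic-locking idea: commands carry an
// expectedVersion, and a command is rejected when that version is stale.
public class OptimisticLockSketch {
    // aggregateId -> current snapshot version (stand-in for the KTable)
    static Map<String, Long> snapshotVersions = new HashMap<>();

    // Returns true if the command is accepted, false if rejected.
    static boolean handleCommand(String aggregateId, long expectedVersion) {
        long currentVersion = snapshotVersions.getOrDefault(aggregateId, 0L);
        if (expectedVersion != currentVersion) {
            return false; // stale expectedVersion: a concurrent write won
        }
        // Command accepted: bump the snapshot version, mimicking what the
        // KTable Aggregator would do for each handled event.
        snapshotVersions.put(aggregateId, currentVersion + 1);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(handleCommand("item-1", 0)); // accepted: versions match
        System.out.println(handleCommand("item-1", 0)); // rejected: version is now 1
        System.out.println(handleCommand("item-1", 1)); // accepted: retried with fresh version
    }
}
```

A rejected client would re-read the current snapshot (and its version) and retry, which is the usual optimistic-concurrency loop.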

Please read this article by my colleague Jesper. Kafka is a great product, but it is actually not a good fit at all for event sourcing:

https://medium.com/serialized-io/apache-kafka-is-not-for-event-sourcing-81735c3cf5c
