event-sourcing

CQRS (event sourcing): Projections with multiple aggregates

Submitted by 百般思念 on 2019-12-04 12:14:03
Question: I have a question regarding projections involving multiple aggregates in a CQRS architecture. For example's sake, suppose I have two aggregates, WorkItem and Developer, and that the following events happen sequentially (but not immediately): WorkItemCreated (workItemId), WorkItemTitleChanged (workItemId, title), DeveloperCreated (developerId), DeveloperNameChanged (developerId, name), WorkItemAssigned (workItemId, developerId). I wish to create a projection which is an "inner join" of developer…
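One common answer to this kind of question is that the projection itself keeps small lookup tables for each aggregate and emits a joined row once both sides are known. A minimal sketch (all names are illustrative assumptions based on the events above):

```python
# A projection that consumes events from two aggregates and maintains the
# in-memory equivalent of an inner join between work items and developers.

class WorkItemAssignmentProjection:
    def __init__(self):
        self.work_items = {}   # workItemId -> title
        self.developers = {}   # developerId -> name
        self.assignments = []  # joined rows

    def apply(self, event):
        kind, data = event
        if kind == "WorkItemCreated":
            self.work_items[data["workItemId"]] = None
        elif kind == "WorkItemTitleChanged":
            self.work_items[data["workItemId"]] = data["title"]
        elif kind == "DeveloperCreated":
            self.developers[data["developerId"]] = None
        elif kind == "DeveloperNameChanged":
            self.developers[data["developerId"]] = data["name"]
        elif kind == "WorkItemAssigned":
            # Both lookup tables are already populated, because the
            # assignment event arrives after the creation events.
            self.assignments.append({
                "title": self.work_items[data["workItemId"]],
                "developer": self.developers[data["developerId"]],
            })

events = [
    ("WorkItemCreated", {"workItemId": "w1"}),
    ("WorkItemTitleChanged", {"workItemId": "w1", "title": "Fix login"}),
    ("DeveloperCreated", {"developerId": "d1"}),
    ("DeveloperNameChanged", {"developerId": "d1", "name": "Alice"}),
    ("WorkItemAssigned", {"workItemId": "w1", "developerId": "d1"}),
]

proj = WorkItemAssignmentProjection()
for e in events:
    proj.apply(e)
print(proj.assignments)  # [{'title': 'Fix login', 'developer': 'Alice'}]
```

The design choice is that the projection, not the write side, owns the join: it subscribes to both aggregates' streams and denormalizes as events arrive.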

Relational database schema for event sourcing

Submitted by 泪湿孤枕 on 2019-12-04 11:44:35
Question: I am trying to store domain events in a Postgres database. I am unsure about many things, and I don't want to redesign this structure later, so I am seeking guidance from people who have experience with event sourcing. I currently have the following table for domain events:
- version - or event id; an integer sequence that helps maintain order during replays
- type - event type, probably the class name with namespace
- aggregate - aggregate id, probably a random string for each aggregate
- timestamp - when the event…
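The columns above can be sketched as a concrete table. This uses sqlite3 (so it runs anywhere) rather than Postgres, and the types plus the added payload column are assumptions, not part of the question:

```python
# A sketch of the described event table. In Postgres the version column
# would typically be BIGSERIAL and payload would be JSONB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE domain_events (
        version   INTEGER PRIMARY KEY AUTOINCREMENT,  -- global ordering for replays
        type      TEXT NOT NULL,                      -- event class name with namespace
        aggregate TEXT NOT NULL,                      -- aggregate id
        timestamp TEXT NOT NULL,                      -- when the event occurred
        payload   TEXT NOT NULL                       -- serialized event body (assumed column)
    )
""")
conn.execute(
    "INSERT INTO domain_events (type, aggregate, timestamp, payload) VALUES (?, ?, ?, ?)",
    ("AccountOpened", "acc-1", "2019-12-04T11:44:35", '{"owner": "bob"}'),
)
rows = conn.execute("SELECT version, type, aggregate FROM domain_events").fetchall()
print(rows)  # [(1, 'AccountOpened', 'acc-1')]
```

Replay is then a simple `SELECT ... WHERE aggregate = ? ORDER BY version` per aggregate, or ordered over the whole table for rebuilding projections.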

Event Sourcing command or event from external system?

Submitted by ε祈祈猫儿з on 2019-12-04 11:12:52
Question: In most cases I understand the distinction between a command and an event in a CQRS + ES system. However, there is one situation I can't figure out. Suppose I am building a personal finance tracking system, where a user can enter debits/credits. Clearly these are commands, and once they are validated the domain model gets updated and an event is published. However, suppose that credit/debit information also comes directly from external systems, e.g. the user's florist sends a message that…
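One commonly recommended pattern for this situation is to treat the external message as a fact in the *other* system and translate it, at the boundary, into a command in yours, which is then validated like any user-entered command. A minimal sketch (the message shape and command names are illustrative assumptions):

```python
# Anti-corruption layer: an external notification becomes an internal
# command, so the domain model validates it exactly like user input.

def translate_external_message(msg):
    """Translate an external system's notification into a command."""
    if msg["source"] == "florist" and msg["kind"] == "charge":
        return {"command": "RecordDebit",
                "account": msg["account"],
                "amount": msg["amount"]}
    raise ValueError("unrecognized external message")

cmd = translate_external_message(
    {"source": "florist", "kind": "charge", "account": "a1", "amount": 25})
print(cmd["command"])  # RecordDebit
```

The point of the translation step is that your domain never consumes foreign events directly; it only ever sees commands it is free to reject.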

Generating events from SQL server

Submitted by 夙愿已清 on 2019-12-04 10:53:50
I am looking for a best practice or example of how I might generate events for all update operations on a given SQL Server 2008 R2 database. To be more descriptive, I am working on a POC where I would essentially publish update events to a queue (RabbitMQ in my case) that could then be consumed by various consumers. This would be the first part of implementing a CQRS query-only data model via event sourcing. By placing the events on the queue, anybody could then subscribe to them for replication into any number of query-only data models. This part is clear and fairly well defined. The problem I am…
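Whatever capture mechanism is chosen on the SQL Server side, the publishing half usually reduces to polling for rows newer than a high-water mark and pushing them onto the queue. A hedged sketch of that loop, with a plain list standing in for RabbitMQ and assumed row/column names:

```python
# Publish every captured change newer than the last published version,
# then advance the watermark. A real implementation would read from a
# change table and publish via a RabbitMQ client; both are stubbed here.

def poll_changes(change_rows, last_version, publish):
    """Publish unseen changes in order; return the new watermark."""
    for row in sorted(change_rows, key=lambda r: r["rowversion"]):
        if row["rowversion"] > last_version:
            publish(row)
            last_version = row["rowversion"]
    return last_version

queue = []  # stands in for the RabbitMQ channel
rows = [{"rowversion": 1, "id": 10, "op": "UPDATE"},
        {"rowversion": 2, "id": 11, "op": "INSERT"}]
watermark = poll_changes(rows, last_version=0, publish=queue.append)
print(watermark, len(queue))  # 2 2
```

Persisting the watermark alongside the publishes is what lets the poller resume without re-emitting (or skipping) changes after a restart.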

Event Sourcing: Events that trigger others & rebuilding state

Submitted by 做~自己de王妃 on 2019-12-04 08:42:58
Question: I'm struggling to get my head around what should happen when rebuilding the model by replaying events from the EventStore, in particular when events may trigger other events. For example, a user who has made 10 purchases should be promoted to a preferred customer and receive an email offering them certain promotions. We clearly don't want the email to be sent every time we rebuild the model for that user, but how do we stop this from happening when we replay our 10th…
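A common resolution is that applying an event must only mutate state; side effects such as sending email belong to a separate reaction step that is suppressed (or simply never invoked) during replay. A minimal sketch, assuming an explicit replay flag as the mechanism:

```python
# State changes always happen; the side effect is gated on whether we are
# replaying history or processing a genuinely new event.

class Customer:
    def __init__(self):
        self.purchases = 0
        self.preferred = False

    def apply(self, event, is_replaying, send_email):
        if event == "PurchaseMade":
            self.purchases += 1
            if self.purchases == 10 and not self.preferred:
                self.preferred = True          # state change: always applied
                if not is_replaying:           # side effect: new events only
                    send_email("You are now a preferred customer!")

sent = []
c = Customer()
for _ in range(10):                            # rebuilding from history
    c.apply("PurchaseMade", is_replaying=True, send_email=sent.append)
print(c.preferred, sent)  # True []
```

The promotion itself is deterministic and survives every rebuild; only the one-off notification is tied to first-time processing.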

Logstash -> Elasticsearch - update denormalized data

Submitted by 青春壹個敷衍的年華 on 2019-12-04 07:14:18
Use case explanation: We have a relational database with data about our day-to-day operations. The goal is to allow users to search the important data with a full-text search engine. The data is normalized and thus not in the best form for full-text queries, so the idea was to denormalize a subset of the data and copy it in real time to Elasticsearch, which allows us to create a fast and accurate search application. We already have a system in place that enables Event Sourcing of our database operations (inserts, updates, deletes). The events only contain the changed columns and the primary…
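Because each event carries only the changed columns plus the primary key, the update side amounts to a partial merge into the already-denormalized document, analogous to a partial document update in Elasticsearch. A sketch with an assumed document shape and a dict standing in for the index:

```python
# Merge a change event (primary key + changed columns only) into the
# denormalized document, leaving all other fields intact.

docs = {"order-1": {"id": "order-1", "customer_name": "Acme",
                    "status": "open", "total": 100}}

def apply_change(event):
    """event carries only the primary key and the changed columns."""
    doc = docs[event["pk"]]
    doc.update(event["changed"])   # partial merge, like a partial ES update

apply_change({"pk": "order-1", "changed": {"status": "shipped"}})
print(docs["order-1"]["status"], docs["order-1"]["total"])  # shipped 100
```

The hard part this question is circling is the reverse direction: when a *parent* row changes, every denormalized document that embedded its columns must be found and updated, which is why the primary keys carried by the events matter.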

CQRS Read Model Design when Event Sourcing with a Parent-Child-GrandChild… relationship

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-04 03:08:22
I'm in the process of writing my first CQRS application. Let's say my system dispatches the following commands: CreateContingent (Id, Name), CreateTeam (Id, Name), AssignTeamToContingent (TeamId, ContingentId), CreateParticipant (Id, Name), AssignParticipantToTeam (ParticipantId, TeamId). Currently, these result in identical events, just worded in the past tense (ContingentCreated, TeamCreated, etc.), but they contain the same properties. (I'm not so sure that is correct, and it is one of my questions.) My issue lies with the read models. I have a Contingents read model that subscribes to…
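For a parent-child-grandchild hierarchy like this, one workable read-model shape is a denormalized row per participant that carries its team and contingent names, so queries need no joins. A sketch whose event and field names mirror the commands above (the handler wiring is an assumption):

```python
# A denormalized read model for the Contingent -> Team -> Participant
# hierarchy, updated by the past-tense counterparts of the commands.

model = {"contingents": {}, "teams": {}, "participants": {}}

def handle(event, d):
    if event == "ContingentCreated":
        model["contingents"][d["Id"]] = {"name": d["Name"]}
    elif event == "TeamCreated":
        model["teams"][d["Id"]] = {"name": d["Name"], "contingent": None}
    elif event == "TeamAssignedToContingent":
        model["teams"][d["TeamId"]]["contingent"] = \
            model["contingents"][d["ContingentId"]]["name"]
    elif event == "ParticipantCreated":
        model["participants"][d["Id"]] = {"name": d["Name"], "team": None}
    elif event == "ParticipantAssignedToTeam":
        p = model["participants"][d["ParticipantId"]]
        p["team"] = model["teams"][d["TeamId"]]["name"]

for e, d in [("ContingentCreated", {"Id": "c1", "Name": "North"}),
             ("TeamCreated", {"Id": "t1", "Name": "Tigers"}),
             ("TeamAssignedToContingent", {"TeamId": "t1", "ContingentId": "c1"}),
             ("ParticipantCreated", {"Id": "p1", "Name": "Ann"}),
             ("ParticipantAssignedToTeam", {"ParticipantId": "p1", "TeamId": "t1"})]:
    handle(e, d)
print(model["participants"]["p1"])  # {'name': 'Ann', 'team': 'Tigers'}
```

The cost of this shape is that renames of a parent must fan out to the child rows that embedded its name, which is the usual trade-off of denormalized read models.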

How to approach the Q in CQRS when doing Event Sourcing with Akka?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-03 21:19:53
Question: Is there a good way of doing CQRS when combined with event sourcing? One way I thought of was doing this in the command handler (of a Persistent Actor): as soon as the command was turned into an event and persisted to the event log (these events represent the write model), I would send the event over the event bus to interested subscribing query actors so they can update their query model. The other way I was thinking of (provided the journal supports it) is to use persistence queries (via…
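The first option can be reduced to a language-agnostic sketch (written in plain Python, not Akka): persist first, and only after the write succeeds notify the query side. All names here are illustrative assumptions:

```python
# Persist-then-publish: the command handler writes the event to the
# journal, then pushes it to subscribed query-side handlers.

journal = []
subscribers = []

def handle_command(command):
    event = {"type": "Deposited", "amount": command["amount"]}  # validate -> event
    journal.append(event)           # persist first (the write model)
    for subscriber in subscribers:  # then notify query actors
        subscriber(event)

balances = {"total": 0}
subscribers.append(lambda e: balances.__setitem__(
    "total", balances["total"] + e["amount"]))
handle_command({"amount": 5})
print(len(journal), balances["total"])  # 1 5
```

The known weakness of this shape, and the reason the question's second option exists, is that an in-memory notification can be lost on a crash between the write and the publish, whereas reading the journal back on the query side cannot miss events.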

EventSourced Saga Implementation

Submitted by 孤街浪徒 on 2019-12-03 16:16:21
I have written an event-sourced aggregate and have now implemented an event-sourced saga... I have noticed the two are similar and created an event-sourced object as a base class from which both derive. I have seen one demo here: http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-ii-of-ii/ but feel there may be an issue, as commands could be lost in the event of a process crash because the sending of commands happens outside the write transaction:

public void Save(ISaga saga)
{
    var events = saga.GetUncommittedEvents();
    eventStore.Write(new UncommittedEventStream
    {
        Id = saga.Id,
        Type = saga…
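The concern raised here is usually addressed with an outbox: persist the outgoing commands in the same write as the saga's events, dispatch them afterwards, and mark each as sent so a recovery pass can re-dispatch anything unsent after a crash. A sketch under those assumptions (the names below are not the linked article's API):

```python
# Outbox pattern: events and pending commands are stored in one atomic
# write; dispatch happens after, and is safe to retry after a crash.

store = {"events": [], "outbox": []}

def save_saga(events, commands):
    # one write covering both the saga's events and its pending commands
    store["events"].extend(events)
    store["outbox"].extend({"cmd": c, "sent": False} for c in commands)

def dispatch_pending(send):
    for entry in store["outbox"]:
        if not entry["sent"]:
            send(entry["cmd"])
            entry["sent"] = True   # consumers must tolerate re-delivery

sent = []
save_saga(["SagaStarted"], ["ShipOrder"])
dispatch_pending(sent.append)
print(sent, store["outbox"][0]["sent"])  # ['ShipOrder'] True
```

If the process dies between the write and the dispatch, the command is not lost: it sits unsent in the outbox, and the next `dispatch_pending` pass sends it, at the cost of at-least-once delivery.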

Event sourcing with Kafka streams

Submitted by 妖精的绣舞 on 2019-12-03 16:14:01
I'm trying to implement a simple CQRS/event sourcing proof of concept on top of Kafka Streams (as described in https://www.confluent.io/blog/event-sourcing-using-apache-kafka/ ). I have 4 basic parts:
- a commands topic, which uses the aggregate ID as the key for sequential processing of commands per aggregate
- an events topic, to which every change in aggregate state is published (again, the key is the aggregate ID); this topic has a retention policy of "never delete"
- a KTable to reduce aggregate state and save it to a state store: events topic stream -> group to a KTable by aggregate ID -> reduce…
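The reduce step in that topology is, semantically, a per-key fold of each aggregate's event stream into its current state. A pure-Python sketch of that semantics (the event shape and reducer are assumptions standing in for the actual Streams topology):

```python
# What the KTable reduce computes: fold events, keyed by aggregate ID,
# into per-aggregate state, relying on per-key ordering.

from collections import defaultdict

def reduce_events(events):
    state = defaultdict(lambda: {"balance": 0})
    for agg_id, event in events:   # events arrive in order within a key
        if event["type"] == "Deposited":
            state[agg_id]["balance"] += event["amount"]
        elif event["type"] == "Withdrawn":
            state[agg_id]["balance"] -= event["amount"]
    return dict(state)

events = [("acc-1", {"type": "Deposited", "amount": 10}),
          ("acc-2", {"type": "Deposited", "amount": 7}),
          ("acc-1", {"type": "Withdrawn", "amount": 3})]
print(reduce_events(events))
# {'acc-1': {'balance': 7}, 'acc-2': {'balance': 7}}
```

In the real topology the fold runs incrementally inside Kafka Streams and its result is materialized in the state store, so command handlers can read the current aggregate state locally.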