
DDD - operation on one aggregate that creates another aggregate [Resolved]

Let's say I am designing a TODO application and therefore have an aggregate root called Task. The business requires keeping a list of TaskLogEvent objects that provides a history of how the task changed over time. As a Task may have hundreds of these events, I will model TaskLogEvent as a separate aggregate root (I do not want to load these elements every time, and I am not using any lazy loading mechanism).

Now, any time I want to call task.complete() or perform any other operation modifying the Task, I want to create a new TaskLogEvent. I have these options:

  1. Design a domain service that makes all changes to Task and creates the event. Every communication with Task would have to go through this service.
  2. Pass a TaskLogEventRepository to any method on Task so that the Task itself can create the event and save it to the repository.
  3. Let an application service handle this. I don't think that is a good idea.

What would be the ideal solution in this situation? Where did I go wrong in my thinking process?


Question Credit: Juraj Mlich
Asked July 20, 2019
Posted Under: Programming
2 Answers

Your reasoning in modeling TaskLogEvent as a separate aggregate is correct and makes sense.

The responsibility of raising an event lies with the business logic, so prima facie, Option 1 would be the best choice. You would come close to this code structure if you follow DDD anyway, with a Task Application Service, Task Aggregate, and a Task Repository.

But this is a starting point. You should not be dealing with the TaskLogRepository as part of your Task aggregate:

  • You should bubble the event up as a Domain Event (for example, a TaskUpdatedEvent) from the Task Aggregate, after a successful operation of creating/modifying the Task.
  • The event would typically contain a comprehensive payload so that subscribers would not have to query the Task aggregate again.
  • A subscriber would be present on the other side, waiting for this event. When the event bubbles up, the subscriber retrieves it and fires the associated Application Service, which would then process the event payload and do whatever is necessary (in your case, persist the event via the TaskLogRepository).
  • If there are other parts of your application that need to act on these events, they would have subscribed to this event as well. Your Domain Event pub-sub mechanism will handle broadcasting and retrieval.
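The flow above can be sketched in Python. This is a minimal in-process illustration, not a production pub-sub mechanism; the names DomainEventBus, TaskUpdatedEvent, and the string payload are assumptions made up for this example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class TaskUpdatedEvent:
    # Comprehensive payload so subscribers need not re-query the aggregate.
    task_id: str
    change: str

class DomainEventBus:
    """Minimal in-process pub-sub for domain events (illustrative only)."""
    def __init__(self) -> None:
        self._subscribers: Dict[type, List[Callable]] = {}

    def subscribe(self, event_type: type, handler: Callable) -> None:
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event) -> None:
        for handler in self._subscribers.get(type(event), []):
            handler(event)

class Task:
    def __init__(self, task_id: str, bus: DomainEventBus) -> None:
        self.task_id = task_id
        self.completed = False
        self._bus = bus

    def complete(self) -> None:
        self.completed = True
        # Raise the domain event after the state change succeeds.
        self._bus.publish(TaskUpdatedEvent(self.task_id, "completed"))

# A subscriber standing in for the application service that
# would persist the entry to the TaskLogRepository.
task_log: List[TaskUpdatedEvent] = []
bus = DomainEventBus()
bus.subscribe(TaskUpdatedEvent, task_log.append)

task = Task("t-1", bus)
task.complete()
```

Note that other subscribers (notifications, projections) can be registered on the same bus without touching the Task aggregate.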

One step further: Prescriptive Events

You could use these Events as the mechanism to effect a change on the Task aggregate. Let me explain.

When you effect a change on the Task aggregate and then generate an event after the transaction, the event is a Descriptive event: it carries information about what just happened. With Descriptive events, there are two representations of each change, so they can potentially diverge.

Prescriptive events, instead, are events that you trigger in order to effect the change on the Task. You would only generate the event as part of the transaction, and subscribers would be responsible for modifying the Task aggregate. There is just one representation in this case, so the event that was stored and the change that was effected stay in sync.
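A sketch of the prescriptive style, with hypothetical names (CompleteTaskEvent, handle): the event is created first, stored, and then applied to the aggregate, so there is only one representation of the change.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class CompleteTaskEvent:
    # Prescriptive event: describes the change to make, not one already made.
    task_id: str

class Task:
    def __init__(self, task_id: str) -> None:
        self.task_id = task_id
        self.completed = False

    def apply(self, event: CompleteTaskEvent) -> None:
        # The event is the single source of truth for the change.
        self.completed = True

event_store: List[CompleteTaskEvent] = []

def handle(task: Task, event: CompleteTaskEvent) -> None:
    # One logical transaction: store the event and apply it to the aggregate.
    event_store.append(event)
    task.apply(event)

task = Task("t-1")
handle(task, CompleteTaskEvent("t-1"))
```

Because the stored event and the applied change come from the same object, the two cannot diverge the way a descriptive after-the-fact event can.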



One step further: Event Sourcing

You could use an Event Sourcing mechanism to build the Task aggregate in real time from its events. With this approach, the history of task modifications and the Task's current representation are always in sync.

Event Sourcing as a mechanism by itself will require some understanding and experimentation, so you need to make an informed call. Read about it, but do not implement it unless you discover an actual need for it.
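For a feel of the idea, here is a toy replay sketch, assuming a made-up TaskEvent record and event kinds; real event-sourcing frameworks add versioning, snapshots, and persistence on top of this.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class TaskEvent:
    kind: str          # e.g. "created", "renamed", "completed"
    payload: str = ""  # event-specific data

class Task:
    """Aggregate rebuilt purely from its event history."""
    def __init__(self) -> None:
        self.title = ""
        self.completed = False

    @classmethod
    def replay(cls, history: List[TaskEvent]) -> "Task":
        task = cls()
        for event in history:
            task._apply(event)
        return task

    def _apply(self, event: TaskEvent) -> None:
        if event.kind in ("created", "renamed"):
            self.title = event.payload
        elif event.kind == "completed":
            self.completed = True

# The event log IS the source of truth; current state is derived from it.
history = [
    TaskEvent("created", "Write report"),
    TaskEvent("renamed", "Write final report"),
    TaskEvent("completed"),
]
task = Task.replay(history)
```

The TaskLogEvent history the business asked for falls out for free: it is simply the event log itself.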



credit: Subhash Bhushan
Answered July 20, 2019

Creation patterns are weird

Notice that, in this use case, you are changing two things: the Task aggregate itself, and your collection/repository/stream of TaskLogEvents.

Twenty years ago, when "the" database meant some RDBMS, both of these would be written as part of the same database transaction. It would be the responsibility of the application layer to manage the transaction itself. The logic for copying data from the Task to the TaskLogEvent would live in the aggregate; it might just invoke the TaskLogEvent constructor, or it might use a factory. The application layer would query the aggregate for the event(s) and would be responsible for storing them in the "repository".
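That classic arrangement can be sketched as follows; collect_events, task_store, and complete_task are hypothetical names standing in for the repository and the application-layer transaction.

```python
from typing import Dict, List

class TaskLogEvent:
    def __init__(self, task_id: str, description: str) -> None:
        self.task_id = task_id
        self.description = description

class Task:
    def __init__(self, task_id: str) -> None:
        self.task_id = task_id
        self.completed = False
        self._pending_events: List[TaskLogEvent] = []

    def complete(self) -> None:
        self.completed = True
        # The aggregate owns the logic for describing what changed.
        self._pending_events.append(TaskLogEvent(self.task_id, "completed"))

    def collect_events(self) -> List[TaskLogEvent]:
        # The application layer queries the aggregate for pending events.
        events, self._pending_events = self._pending_events, []
        return events

# Application layer: one "transaction" saving both the task and its events.
task_store: Dict[str, Task] = {}
event_store: List[TaskLogEvent] = []

def complete_task(task: Task) -> None:
    task.complete()
    task_store[task.task_id] = task
    event_store.extend(task.collect_events())

task = Task("t-1")
complete_task(task)
```

The domain model never touches a repository; it only describes what happened, and the application layer decides where that description is stored.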

See, for instance, Udi Dahan's Reliable Messaging without Distributed Transactions, or Pat Helland's Data on the Outside versus Data on the Inside.

If the aggregate and the events are being written to different places, then things get more complicated. First, you now have two transactions for the application service to manage, and you have to worry about the implications of a failure after the first transaction commits. Otherwise it is not too different from the first case: an in-memory domain model that works in a universe where everything is easy, and an application layer coordinating a storage protocol under the guidance of the domain model.

If you've looked into Growing Object Oriented Software, Guided by Tests, this separation should be a familiar one: an in-memory brain that knows what to do, talking to a bunch of dependencies that know how to do it.

Of your three listed choices, option 3 is by far the easiest to work with over time, because you get a cleaner separation of the parts. The domain model can easily be lifted into other environments (for instance, tests) where the concept of a "transaction" doesn't exist, none of the I/O side effects pass through the model code, and so on.

But it's not nearly as seductive as the illusion that you can do everything in memory and treat persistence as an afterthought.


credit: VoiceOfUnreason
Answered July 20, 2019