Creation patterns are weird.
Notice that, in this use case, you are changing two things: the Task aggregate itself, and also your collection/repository/stream of TaskLogEvents.
Twenty years ago, when "the" database meant some RDBMS, both of these would be written as part of the same database transaction. It would be the responsibility of the application layer to manage the transaction itself. The logic for copying data from the Task to the TaskLogEvent would live in the aggregate. That logic might just invoke the TaskLogEvent constructor, or it might use a factory. The application layer would query the aggregate for the event(s), and would be responsible for storing them in the "repository".
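A sketch of that shape (the field names, method names, and repository interface here are all hypothetical; the original only names Task and TaskLogEvent): the aggregate records events as it changes, and the application service queries it for them and stores them.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class TaskLogEvent:
    """Immutable record of a change to a Task (hypothetical fields)."""
    task_id: str
    description: str


@dataclass
class Task:
    """The aggregate: owns the logic for producing its own log events."""
    task_id: str
    status: str = "open"
    pending_events: List[TaskLogEvent] = field(default_factory=list)

    def complete(self) -> None:
        # Copying data from the Task into the event lives here, in the
        # aggregate; this version just invokes the TaskLogEvent constructor.
        self.status = "done"
        self.pending_events.append(TaskLogEvent(self.task_id, "task completed"))

    def collect_events(self) -> List[TaskLogEvent]:
        """Hand the pending events to the caller and clear the buffer."""
        events, self.pending_events = self.pending_events, []
        return events


def complete_task(task, task_repository, event_repository):
    """Application service: in the single-RDBMS case, both save calls
    would run inside one database transaction managed at this layer."""
    task.complete()
    task_repository.save(task)
    for event in task.collect_events():
        event_repository.save(event)
```

Note that the domain logic never touches a repository directly; the application service is the only place that knows where things are stored.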
See, for instance, Udi Dahan's Reliable Messaging without Distributed Transactions, or Pat Helland's Data on the Outside vs Data on the Inside.
If the aggregate and the events are being written to different places, then things get more complicated. First, you now have two transactions for the application service to manage, and you have to worry about the implications of a failure after the first transaction is committed. Otherwise, it is not too different from the first case: you have an in-memory domain model that works in a universe where everything is easy, and an application layer coordinating a storage protocol under the guidance of the domain model.
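The two-store case can be sketched like this (the store interface with transaction() and save() is hypothetical); the interesting part is the failure window between the two commits:

```python
def save_aggregate_then_events(aggregate, events, aggregate_store, event_store):
    """Application service for the two-store case: two transactions,
    both managed here, not in the domain model."""
    # Transaction 1: the aggregate's own store.
    with aggregate_store.transaction():
        aggregate_store.save(aggregate)

    # Failure window: if the process dies here, the aggregate is
    # committed but its events are not. Dealing with that gap (retries,
    # outbox tables, idempotent consumers) is the kind of problem the
    # Dahan and Helland references address.

    # Transaction 2: the event store.
    with event_store.transaction():
        for event in events:
            event_store.save(event)
```

The ordering matters: committing the aggregate first and replaying the events on retry is recoverable, whereas losing the aggregate write after publishing events generally is not.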
If you've looked into Growing Object Oriented Software, Guided by Tests, this separation should be a familiar one: an in-memory brain that knows what to do, talking to a bunch of dependencies that know how to do it.
Of your three listed choices, C is by far the easiest to work with over time, because you get a cleaner separation of the parts. The domain model can easily be lifted into other environments (for instance, test) where the concept of "transaction" doesn't exist, none of the I/O side effects pass through the model code, and so on.
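As a sketch of what that buys you (hypothetical names again, in the "brain plus dependencies" style): the model can be exercised in a test environment where no transaction and no I/O exist at all.

```python
class Task:
    """Pure in-memory model: decides *what* should happen."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.status = "open"

    def complete(self):
        self.status = "done"
        # Return the event data; something else decides where it goes.
        return {"task_id": self.task_id, "change": "completed"}


class InMemoryEventLog:
    """Test double standing in for the real event store."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)


# The whole use case runs with no database and no transaction concept;
# nothing here would change if the real log were a message broker.
log = InMemoryEventLog()
task = Task("t-1")
log.append(task.complete())
```

Because no I/O side effects pass through the model code, lifting it into a test like this requires no mocking framework, just a plain in-memory substitute.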
But it's not nearly as seductive as the illusion that you can do everything in memory and treat persistence as an afterthought.