Erik Johnston 35b628f5bf Handle duplicate kafka messages (#301)
Because of the way we store partition offsets for Kafka streams, restarting
after a crash may redeliver the last message we processed. Message
processing therefore has to handle consecutive duplicates correctly.
2017-10-16 13:20:24 +01:00