@@ -71,11 +71,8 @@ import org.slf4j.Logger
else latest

def nextQueryFromTimestamp(backtrackingWindow: JDuration): Instant =
if (backtracking) {
if (latest.timestamp.minus(backtrackingWindow).isAfter(latestBacktracking.timestamp))
latest.timestamp.minus(backtrackingWindow)
Contributor:
So the hypothesis is that a normal query moved the offset forward more than the backtracking window:

  • latestBacktracking received events until 12:08:07
  • normal query receives events until 12:10:20
  • then new backtracking query from 12:08:20
    • meaning that there is a gap of 13 seconds in the backtracking (sketched below)

However, that could happen after a restart too. After a restart the backtracking only goes back the backtracking window (plus backtrackingBehindCurrentTime). The assumption is that the backtracking window should cover all late arrivals.

I don't know if we could check whether parts of the GSI were further behind than the backtracking window.

Would it make sense to instead increase the backtracking window?
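
To make the arithmetic concrete, here is a minimal sketch of that scenario, assuming a 2 minute backtracking window (the window length, object name, and values are illustrative only, not taken from the plugin's code):

    import java.time.{ Duration, Instant }

    object BacktrackingGapSketch extends App {
      // Assumed window length, chosen so the numbers match the scenario above.
      val backtrackingWindow = Duration.ofMinutes(2)

      // Timestamps from the scenario (dates are arbitrary).
      val latestBacktracking = Instant.parse("2024-01-01T12:08:07Z") // last event seen by backtracking
      val latest = Instant.parse("2024-01-01T12:10:20Z") // normal query has moved ahead

      // The capped from-timestamp logic shown in the diff above.
      val nextBacktrackingFrom =
        if (latest.minus(backtrackingWindow).isAfter(latestBacktracking))
          latest.minus(backtrackingWindow) // 12:08:20, jumping past latestBacktracking
        else
          latestBacktracking

      // Events between 12:08:07 and 12:08:20 are never revisited by backtracking.
      println(s"next backtracking query starts from: $nextBacktrackingFrom")
      println(s"uncovered gap: ${Duration.between(latestBacktracking, nextBacktrackingFrom)}") // PT13S
    }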

Member:
It would be useful if we could confirm that the GSI was actually minutes behind. Maybe the AWS team has internal visibility into this.

Another scenario is around the backtracking behind-current-time (10 seconds by default), which effectively sets how late events can be. If parts of the GSI are over 10 seconds behind on updates, the backtracking queries can move ahead of this and events will be missed (rather than because they fell behind the backtracking window).
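
A rough sketch of that failure mode, assuming the 10 second default and a made-up 15 second GSI delay (names and values are illustrative, not the plugin's internals): backtracking only reads up to roughly now minus behind-current-time, so once the clock passes an event's timestamp plus behind-current-time, a GSI entry that becomes visible later than that is never picked up.

    import java.time.{ Duration, Instant }

    object BehindCurrentTimeSketch extends App {
      // Assumed default discussed above.
      val behindCurrentTime = Duration.ofSeconds(10)

      // An event with this timestamp, whose GSI entry only becomes readable 15 seconds later.
      val eventTimestamp = Instant.parse("2024-01-01T12:10:00Z")
      val visibleInGsiAt = Instant.parse("2024-01-01T12:10:15Z")

      // Backtracking reads up to roughly (now - behindCurrentTime), so the last wall-clock
      // moment a backtracking query range can still include this event's timestamp is:
      val lastChanceToSee = eventTimestamp.plus(behindCurrentTime) // 12:10:10

      if (visibleInGsiAt.isAfter(lastChanceToSee))
        println(s"missed: GSI entry appeared at $visibleInGsiAt, after backtracking passed $lastChanceToSee")
      else
        println("picked up by backtracking")
    }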

Contributor:
Good point, that is even more likely. Is there a reason why we have this as tight as 10 seconds for DynamoDB?

Member:
I think we've discussed increasing it, but it looks like it was just copied over from r2dbc originally.

Delays of over 10 seconds would be possible. The GSI can also be under-provisioned and write-throttled.

Contributor:
Just to clarify, or to verify my thinking: write throttling in itself isn't a problem if it holds back all persistence ids for a slice. It would be a problem if the GSI partially lets some persistence ids move forward while others are held back.

(And remember, here we are using single-slice queries.)

Member:
Yes, I see the same. It would be any kind of partial updates to the GSI within a slice that could create problems. I'd expect that changes could be at the level of DynamoDB's internal partitions, and throttling is applied within those partitions, so it seems possible that there could be partition-level delays that would be partial for a slice.

else latestBacktracking.timestamp
} else latest.timestamp
if (backtracking) latestBacktracking.timestamp
else latest.timestamp

def nextQueryToTimestamp: Option[Instant] = {
if (backtracking) Some(latest.timestamp)