Conversation

@leviramsey
Contributor

This seems like a useful invariant to keep. Disabling backtracking when far behind the wall-clock time preserves the motivation for #96.

@pvlugter
Member

But when resuming backtracking after it was disabled while far behind, we don't want to continue from where it originally was? That would then need to run through every event that arrived while backtracking was disabled.

The expectation when disabling backtracking, once the regular queries have also fallen far behind the wall-clock time, is that we don't need backtracking: late-arriving events should already be visible to the regular queries.

def nextQueryFromTimestamp(backtrackingWindow: JDuration): Instant =
  if (backtracking) {
    if (latest.timestamp.minus(backtrackingWindow).isAfter(latestBacktracking.timestamp))
      latest.timestamp.minus(backtrackingWindow)
Contributor

So the hypothesis is that a normal query moved the offset forward by more than the backtracking window:

  • latestBacktracking received events until 12:08:07
  • the normal query receives events until 12:10:20
  • then a new backtracking query starts from 12:08:20
    • meaning that there is a gap of 13 seconds in the backtracking
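The timeline above can be checked with a short java.time sketch. The timestamps and the 2-minute window are taken from the scenario (12:10:20 minus the window giving 12:08:20 implies a 2-minute window); the variable names mirror the snippet but the values are illustrative:

```java
import java.time.Duration;
import java.time.Instant;

public class BacktrackingGap {
    public static void main(String[] args) {
        // Hypothetical timestamps from the scenario above.
        Instant latestBacktracking = Instant.parse("2024-01-01T12:08:07Z");
        Instant latest = Instant.parse("2024-01-01T12:10:20Z");
        Duration backtrackingWindow = Duration.ofMinutes(2); // assumed window

        // Mirrors the guard in nextQueryFromTimestamp: jump the backtracking
        // start forward when the normal query is more than a window ahead.
        Instant nextFrom =
            latest.minus(backtrackingWindow).isAfter(latestBacktracking)
                ? latest.minus(backtrackingWindow)
                : latestBacktracking;

        // Events between latestBacktracking and nextFrom are never re-read.
        Duration gap = Duration.between(latestBacktracking, nextFrom);
        System.out.println(gap.getSeconds()); // prints 13
    }
}
```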

However, that could happen after a restart too. After a restart, the backtracking only goes back by the backtracking window (plus backtrackingBehindCurrentTime). The assumption is that the backtracking window should cover all late arrivals.

I don't know if we could see whether parts of the GSI were further behind than the backtracking window.

Would it make sense to instead increase the backtracking window?

Member

It would be useful if we could confirm that the GSI was actually minutes behind. Maybe the AWS team has internal visibility into this.

Another scenario involves the backtracking behind-current-time setting (10 seconds by default), which effectively sets how late events can arrive. If parts of the GSI are more than 10 seconds behind on updates, the backtracking queries can move ahead of those updates, and events will be missed (rather than just falling behind the backtracking window).
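This behind-current-time scenario can also be sketched. All concrete values below are hypothetical, chosen to illustrate a GSI replication delay that exceeds the 10-second setting:

```java
import java.time.Duration;
import java.time.Instant;

public class LateEventCheck {
    public static void main(String[] args) {
        // backtracking behind-current-time (10 seconds by default).
        Duration behindCurrentTime = Duration.ofSeconds(10);

        // Hypothetical event persisted at 12:00:05, but only becoming
        // visible in the GSI at 12:00:25 (a 20-second replication delay).
        Instant eventTimestamp = Instant.parse("2024-01-01T12:00:05Z");
        Instant visibleInGsiAt = Instant.parse("2024-01-01T12:00:25Z");

        // By the time the event is visible, the backtracking queries have
        // advanced to roughly visibleInGsiAt - behindCurrentTime. If that
        // position is already past the event's timestamp, the event is missed.
        Instant backtrackingPosition = visibleInGsiAt.minus(behindCurrentTime);
        boolean missed = backtrackingPosition.isAfter(eventTimestamp);
        System.out.println(missed); // prints true
    }
}
```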

Contributor

Good point, that is even more likely. Is there a reason why we have this as tight as 10 seconds for DynamoDB?

Member

I think we've discussed increasing it. But it looks like it was just copied over from r2dbc originally.

Delays of over 10 seconds would be possible. The GSI can also be under-provisioned and write-throttled.

Contributor

Just to clarify, or to verify my thinking: write throttling in itself isn't a problem if it holds back all persistence ids for a slice. It would be a problem if the GSI let some persistence ids move forward while others are held back.

(and remember, here we are using single-slice queries)

Member

Yes, I see the same. It would be any kind of partial update to the GSI within a slice that could create problems. I'd expect that updates happen at the level of DynamoDB's internal partitions, and that throttling is applied within those partitions, so it seems possible there could be partition-level delays that are partial for a slice.
