Locktimed packages fixes #3923
Conversation
In a number of tests we require available UTXOs to do HTLC anchor claims by bringing our own fees. We previously wrote that out in each test, which is somewhat verbose, so here we simply add a test utility that gives each node a full BTC in a single UTXO.
We have to prune locktimed packages when their inputs are spent, otherwise the notification of the watched outputs might be missed. This can lead to locktimed packages with spent inputs being added back to the pending claim requests in the future, and they are never cleaned up until node restart. Resolves: lightningdevkit#3859
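The pruning described above can be sketched as follows. This is a minimal illustration with hypothetical types (`OutPoint`, `LocktimedPackage`, `prune_locktimed_packages` are invented for this sketch, not the actual LDK structs): when a connected block spends an outpoint, any queued locktimed package spending that outpoint must be dropped so it is never re-added to the pending claim requests later.

```rust
use std::collections::HashSet;

// Hypothetical minimal types for illustration; not the actual LDK structs.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct OutPoint {
	txid: u64,
	vout: u32,
}

struct LocktimedPackage {
	locktime: u32,
	outpoints: Vec<OutPoint>,
}

/// When a block confirms transactions, drop any queued locktimed package
/// whose inputs were spent by those transactions. Without this, the package
/// could later be pushed back into the pending claim requests and linger
/// until restart.
fn prune_locktimed_packages(
	packages: &mut Vec<LocktimedPackage>,
	spent_outpoints: &HashSet<OutPoint>,
) {
	packages.retain(|pkg| !pkg.outpoints.iter().any(|o| spent_outpoints.contains(o)));
}

fn main() {
	let mut packages = vec![
		LocktimedPackage { locktime: 900, outpoints: vec![OutPoint { txid: 1, vout: 0 }] },
		LocktimedPackage { locktime: 905, outpoints: vec![OutPoint { txid: 2, vout: 1 }] },
	];
	// The block we just connected spent txid 1, vout 0.
	let spent: HashSet<OutPoint> = [OutPoint { txid: 1, vout: 0 }].into_iter().collect();
	prune_locktimed_packages(&mut packages, &spent);
	assert_eq!(packages.len(), 1);
	assert_eq!(packages[0].locktime, 905);
}
```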
👋 Thanks for assigning @tankyleo as a reviewer!
When we have an outpoint to claim which is lock-timed and the locktime is reached, we add it to `OnchainTxHandler::claimable_outpoints` to indicate the outpoint is now being claimed. However, `claimable_outpoints` is supposed to track when the outpoint first appeared on chain so that we can remove the claim if the outpoint is reorged out. Sadly, in the handling for lock-timed packages, we incorrectly stored the current height in `claimable_outpoints`, causing such claims to be removed in case of a reorg right after they were generated, even if the output we intend to claim isn't removed at all.

Here we start tracking when the outpoint we're spending was created in `PackageSolvingData`'s constituent types. While we could have tracked this information in `PackageTemplate`, it would preclude later merging packages that are spending outpoints included in different blocks, which we don't necessarily want to do.
When we have an outpoint to claim which is lock-timed and the locktime is reached, we add it to `OnchainTxHandler::claimable_outpoints` to indicate the outpoint is now being claimed. However, `claimable_outpoints` is supposed to track when the outpoint first appeared on chain so that we can remove the claim if the outpoint is reorged out. Sadly, in the handling for lock-timed packages, we incorrectly stored the current height in `claimable_outpoints`, causing such claims to be removed in case of a reorg right after they were generated, even if the output we intend to claim isn't removed at all.

Here we use the creation-height tracking added in the previous commit to actually address the issue, using the tracked height when adding a claim to `OnchainTxHandler::claimable_outpoints`. In cases where we have no information, we continue to use the current height, retaining the issue for locktimed packages on upgrades, but this simplifies cases where we actually don't have the information available anyway.
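The fallback described above can be sketched as follows. This is a minimal illustration with hypothetical types and names (`OutPoint`, `register_claim`, and the `(claim_id, height)` map values are invented for this sketch, not the actual LDK API): each outpoint carries an optional creation height, and we only fall back to the current confirmation height when that information was never tracked, e.g. for state persisted by an older version.

```rust
use std::collections::HashMap;

// Hypothetical minimal type for illustration; not the actual LDK struct.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct OutPoint {
	txid: u64,
	vout: u32,
}

/// Register a claim, preferring the height at which the spent outpoint was
/// created so a reorg just after claim generation doesn't drop the claim.
/// Fall back to the current confirmation height only when the creation
/// height was never tracked (e.g. state from before an upgrade).
fn register_claim(
	claimable_outpoints: &mut HashMap<OutPoint, (u64, u32)>,
	claim_id: u64,
	outpoints_and_heights: &[(OutPoint, Option<u32>)],
	conf_height: u32,
) {
	for (outpoint, creation_height) in outpoints_and_heights {
		let height = creation_height.unwrap_or(conf_height);
		claimable_outpoints.insert(*outpoint, (claim_id, height));
	}
}

fn main() {
	let mut map = HashMap::new();
	let outpoints = [
		(OutPoint { txid: 7, vout: 0 }, Some(850)), // creation height tracked
		(OutPoint { txid: 8, vout: 0 }, None),      // untracked: use conf_height
	];
	register_claim(&mut map, 42, &outpoints, 900);
	assert_eq!(map[&OutPoint { txid: 7, vout: 0 }], (42, 850));
	assert_eq!(map[&OutPoint { txid: 8, vout: 0 }], (42, 900));
}
```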
This adds a single test which exercises both the ability to prune locktimed packages when inputs are spent as well as the creation-height tracking for locktimed packages.
ff95a56 to 21be9c5
@@ -889,9 +889,10 @@ impl<ChannelSigner: EcdsaChannelSigner> OnchainTxHandler<ChannelSigner> {
// Because fuzzing can cause hash collisions, we can end up with conflicting claim
// ids here, so we only assert when not fuzzing.
debug_assert!(cfg!(fuzzing) || self.pending_claim_requests.get(&claim_id).is_none());
for k in req.outpoints() {
log_info!(logger, "Registering claiming request for {}:{}", k.txid, k.vout);
self.claimable_outpoints.insert(k.clone(), (claim_id, conf_height));
Looks like we can get rid of `conf_height` now.
for (k, outpoint_confirmation_height) in req.outpoints_and_creation_heights() {
let creation_height = outpoint_confirmation_height.unwrap_or(conf_height);
log_info!(logger, "Registering claiming request for {}:{}, which exists as of height {creation_height}", k.txid, k.vout);
self.claimable_outpoints.insert(k.clone(), (claim_id, creation_height));
Hm, shouldn't we prefer `signed_locktime` over `creation_height` for outpoints that have one, though? While the outpoint hasn't been reorged out, claiming it is no longer possible once the block at `signed_locktime` is disconnected.
In practice I think this would just result in us broadcasting things that cannot enter the mempool until we get back to the expected height. If we have any other claims that were merged into the same package for whatever reason, and they are still valid at the disconnected block height, then this would be a greater issue.
The problem is that when things time out via `claimable_outpoints`, we don't stop claiming them, we remove them. We don't get them back after a reorg at that point. We could move to pushing things into the locked-packages vec after a block-disconnect, but that seems like a bigger change?
assert!(nodes[0].node.get_and_clear_pending_events().is_empty());

if reorg_out {
// Reorg out bs_htlc_spend_tx, letting node A the claim all the HTLCs instead.
Suggested change:
- // Reorg out bs_htlc_spend_tx, letting node A the claim all the HTLCs instead.
+ // Reorg out bs_htlc_spend_tx, letting node A claim all the HTLCs instead.
disconnect_blocks(&nodes[0], ANTI_REORG_DELAY - 2);
assert_eq!(nodes[0].tx_broadcaster.txn_broadcast().len(), 0);

// As soon as bs_htlc_spend_tx is disconnected
Incomplete comment?
👋 The first review has been submitted! Do you think this PR is ready for a second reviewer?
See individual commits for more. This is #3860 with a fix for an additional issue I found while writing a test for it.