Split out receive_htlcs from the forwarding pipeline #3973
base: main
Conversation
👋 Thanks for assigning @TheBlueMatt as a reviewer!
Codecov Report

❌ Patch coverage is …

@@            Coverage Diff             @@
##              main    #3973      +/-  ##
==========================================
- Coverage    88.61%   88.61%   -0.01%
==========================================
  Files          174      174
  Lines       127640   127684      +44
  Branches    127640   127684      +44
==========================================
+ Hits        113113   113148      +35
- Misses       12046    12056      +10
+ Partials      2481     2480       -1
Force-pushed from b360839 to 2f5e940
✅ Added second reviewer: @valentinewallace

🔔 1st Reminder: Hey @valentinewallace! This PR has been waiting for your review.

🔔 2nd Reminder: Hey @valentinewallace! This PR has been waiting for your review.
Basically LGTM!

Could you confirm one thing for me: if we fail to receive an HTLC that's destined for us, it will then be put into the forward_htlcs map keyed with the scid of the previous hop? That's how I'm reading the behavior of fail_htlc_backwards_internal atm. If that's the case, may want to note that on the forward_htlcs field.
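If that behavior is confirmed, the note could live right on the field's doc comment. A minimal sketch, assuming simplified stand-ins for the real `ChannelManager` and `HTLCForwardInfo` types; the suggested doc wording is illustrative, not from the PR:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the real HTLCForwardInfo enum; illustrative only.
struct HTLCForwardInfo;

struct ChannelManagerSketch {
	/// SCID/SCID Alias -> forward infos.
	///
	/// Suggested note: HTLCs destined for us that are failed back via
	/// `fail_htlc_backwards_internal` may be re-added here keyed by the
	/// *previous* hop's SCID rather than under 0.
	forward_htlcs: Mutex<HashMap<u64, Vec<HTLCForwardInfo>>>,
}
```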
			continue;
		}
	},
	_ => {},
nit: could debug_assert!(false) here if only to indicate we should never hit this
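For illustration, the suggested arm might look like this; the enum is a simplified stand-in for the real `HTLCForwardInfo`, and the function name is hypothetical:

```rust
// Simplified stand-in for the variants handled in this match.
enum HTLCForwardInfo {
	AddHTLC,
	FailHTLC,
	FailMalformedHTLC,
}

fn handle_scid_zero_entry(entry: HTLCForwardInfo) {
	match entry {
		HTLCForwardInfo::AddHTLC => {
			// push onto receive_htlcs (or Trampoline handling) ...
		},
		// The nit: loudly flag in debug builds that this arm is believed
		// unreachable for SCID-0 entries instead of silently ignoring it.
		_ => debug_assert!(false, "unexpected non-add HTLC under SCID 0"),
	}
}
```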
Hmm, I intentionally let all other cases fall through to the previous behavior. In particular, it seems that Trampoline forwards would also be added under SCID 0, but we wouldn't want to push them to receive_htlcs.
Wouldn't Trampoline forwards also be an AddHTLC? So the bottom _ match arm would still be unreachable. It also seems like we should fail here since any failures against SCID 0 should fail anyway (we won't be able to fail cause no such channel exists?)
> Wouldn't Trampoline forwards also be an AddHTLC? So the bottom _ match arm would still be unreachable.

Yes, they would. I guess it would be unreachable (or you could say the if short_channel_id == 0 check is redundant), but the main point is that we just fall through to the previous behavior.

> It also seems like we should fail here since any failures against SCID 0 should fail anyway (we won't be able to fail cause no such channel exists?)

You mean fail the deserialization?
Force-pushed from cad3d32 to c6ba5ff
Yes, I think this is correct, and the same holds for …
	}) => forward_info.outgoing_cltv_value += 1,
	_ => {},
}
nodes[1].node.process_pending_update_add_htlcs();
I think this line gets removed in a later commit? It doesn't seem to belong in this commit.
It doesn't get removed though? And I put it in this commit as it's fixing a pre-existing bug that was surfaced by making the tests stricter: previously, pending_forwards was simply empty here and we weren't checking anything. Will move it to a separate commit at the beginning.
@@ -2537,7 +2539,7 @@ pub struct ChannelManager<
	/// See `ChannelManager` struct-level documentation for lock order requirements.
	pending_outbound_payments: OutboundPayments,

	/// SCID/SCID Alias -> forward infos. Key of 0 means payments received.
Could document that 0 actually means trampoline now.
Added. Although I wonder if it would make sense to also add a separate field for trampoline to finally completely get away from the magic number?
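One possible shape for such a split, as a hedged sketch with illustrative names (`trampoline_forwards` is not a field in the PR):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the real type; illustrative only.
struct HTLCForwardInfo;

struct ChannelManagerSketch {
	/// SCID/SCID alias -> forwards going out over a concrete channel.
	forward_htlcs: Mutex<HashMap<u64, Vec<HTLCForwardInfo>>>,
	/// Forwards to the next Trampoline hop, previously squeezed into
	/// `forward_htlcs` under the magic SCID 0.
	trampoline_forwards: Mutex<Vec<HTLCForwardInfo>>,
}
```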
@@ -15091,6 +15124,8 @@ where
		}
	}

	let receive_htlcs = self.receive_htlcs.lock().unwrap();
Doesn't it break downgrades to only write the new vec, and not also write the receives in the legacy forward_htlcs?
Hmm, good point. I guess we'd need to start writing both for a while (also cf. #3973 (comment)), and make sure we only migrate HTLCs for which we don't already track the same prev_htlc_id.
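A rough sketch of the read-side half of that idea, using a hypothetical simplified record type rather than the real TLV-based deserialization; keying purely on `prev_htlc_id` is a simplifying assumption:

```rust
// Hypothetical, simplified pending-HTLC record carrying only the field
// we deduplicate on.
struct PendingHtlc {
	prev_htlc_id: u64,
}

/// Sketch of the migration on read: fold legacy SCID-0 entries into the
/// new `receive_htlcs` vec, skipping any HTLC whose `prev_htlc_id` we
/// already track (since both representations get written for a while).
fn migrate_legacy_receives(
	legacy_scid_zero_entries: Vec<PendingHtlc>, receive_htlcs: &mut Vec<PendingHtlc>,
) {
	for htlc in legacy_scid_zero_entries {
		if !receive_htlcs.iter().any(|h| h.prev_htlc_id == htlc.prev_htlc_id) {
			receive_htlcs.push(htlc);
		}
	}
}
```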
assert_eq!(nodes[1].node.forward_htlcs.lock().unwrap().len(), 1);
if let Some((_, pending_forwards)) =
	nodes[1].node.forward_htlcs.lock().unwrap().iter_mut().next()
{
	assert_eq!(pending_forwards.len(), 1);
	match pending_forwards.get_mut(0).unwrap() {
This repeated pattern really looks like it could be in a test util function or macro.
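For example, a helper along these lines could absorb the lock/assert/index boilerplate; the name and signature are hypothetical, sketched against simplified stand-in types:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the real type; illustrative only.
struct HTLCForwardInfo;

/// Hypothetical test util: assert `forward_htlcs` holds exactly one entry
/// with exactly one pending forward, and hand that forward to `check`.
fn with_single_pending_forward<F>(
	forward_htlcs: &Mutex<HashMap<u64, Vec<HTLCForwardInfo>>>, check: F,
) where
	F: FnOnce(&mut HTLCForwardInfo),
{
	let mut map = forward_htlcs.lock().unwrap();
	assert_eq!(map.len(), 1);
	let (_, pending_forwards) = map.iter_mut().next().unwrap();
	assert_eq!(pending_forwards.len(), 1);
	check(pending_forwards.get_mut(0).unwrap());
}
```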
#[cfg(test)]
pub(super) receive_htlcs: Mutex<Vec<HTLCForwardInfo>>,
#[cfg(not(test))]
receive_htlcs: Mutex<Vec<HTLCForwardInfo>>,
ISTM we should change the type here, because we still have the issues from the combined map (panics in process_forward_htlcs for receives and panics in process_receive_htlcs for forwards) but no longer have the reason for them (the one map). Also, the panic messages in those methods are wrong now.
Okay, I considered doing that, but chose not to as it would mean we'd have to keep the old types around to deserialize the legacy forwards on upgrade.

I agree it would be cleaner going forward to add an enum HTLCReceiveInfo and keep copies of HTLCForwardInfo/PendingAddHTLCInfo as LegacyHTLCForwardInfo and LegacyPendingAddHTLCInfo that we can eventually drop after some time.

Do you agree with that approach?
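A rough sketch of that shape, with the variant sets reduced for illustration; only the type names come from the comment, the payload structs are stubs:

```rust
// Stubs standing in for the real payload structs.
struct PendingAddHTLCInfo;
struct LegacyPendingAddHTLCInfo;

/// New, receive-only type: no forward/fail-specific variants needed.
enum HTLCReceiveInfo {
	AddHTLC(PendingAddHTLCInfo),
}

/// Frozen copy of the old combined type, kept around solely to
/// deserialize state written by earlier versions; can be dropped once
/// enough releases have passed.
enum LegacyHTLCForwardInfo {
	AddHTLC(LegacyPendingAddHTLCInfo),
	FailHTLC { htlc_id: u64 },
}
```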
@@ -16791,6 +16854,7 @@ where
	pending_intercepted_htlcs: Mutex::new(pending_intercepted_htlcs.unwrap()),

	forward_htlcs: Mutex::new(forward_htlcs),
	receive_htlcs: Mutex::new(receive_htlcs),
Earlier in this method, we remove pending payments that are not present in the monitors, and ISTM we should be doing this for receive_htlcs now as well.
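That pruning could look roughly like this; `monitor_htlc_ids` and the keying by `prev_htlc_id` are simplifying assumptions, not the actual monitor API:

```rust
use std::collections::HashSet;

// Hypothetical, simplified pending-HTLC record.
struct PendingHtlc {
	prev_htlc_id: u64,
}

/// Sketch: after deserialization, drop any pending receive whose
/// previous-hop HTLC no longer appears in the channel monitors, mirroring
/// the pruning already done earlier in the read method.
fn prune_stale_receives(receive_htlcs: &mut Vec<PendingHtlc>, monitor_htlc_ids: &HashSet<u64>) {
	receive_htlcs.retain(|htlc| monitor_htlc_ids.contains(&htlc.prev_htlc_id));
}
```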
Previously, the `pending_forwards` in this test would simply be empty, so the test never ran any of the subsequent logic. Here we add a necessary `process_pending_update_add_htlcs()` step, and will move to stricter test logic in the following commits.
Previously, some of the tests manipulating `forward_htlcs` were just iterating, but not actually checking whether the expected entries were present and were actually changed. Here we make these test cases more strict.
.. to keep changes in the following commits minimal.
Previously, we'd store receiving HTLCs side-by-side with HTLC forwards in the `forward_htlcs` field under SCID 0. Here, we opt to split out a separate `receive_htlcs` field, also omitting the 0 magic value.
Force-pushed from e681129 to e08d3dc
This is another preparatory step for receiver-side delays that we split out to err on the side of smaller, more reviewable PRs, especially since these changes are a nice cleanup in any case, IMO.
Previously, we'd store receiving HTLCs side-by-side with HTLC forwards in the forward_htlcs map under SCID 0. Here, we opt to split out a separate receive_htlcs field, which cleans up the logic and omits the 0 magic value.

Moreover, some of the tests manipulating forward_htlcs were previously just iterating, but not actually checking whether the expected entries were present and were actually changed. Here we make these test cases stricter to ensure they'd be able to catch any unwanted behavior introduced while adding receive_htlcs.