
Conversation

@tnull tnull (Contributor) commented Sep 5, 2025

This is the second PR in a series adding persistence to lightning-liquidity (see #4058). As this is already more than 1,000 LoC, I decided to put it up as an intermediate step instead of adding everything in one go.

In this PR we add the serialization logic for the LSPS2 and LSPS5 service handlers as well as for the event queue. We also have LiquidityManager take a KVStore to which it persists the respective peer states, keyed by the counterparty's node id. LiquidityManager::new now also deserializes any previously-persisted state from that KVStore.

We then have BackgroundProcessor drive persistence, skip persistence for unchanged LSPS2/LSPS5 PeerStates, and use async inline persistence for LSPS2ServiceHandler where needed.

This also adds a bunch of boilerplate to account for both KVStore and KVStoreSync variants, following the approach we previously took with OutputSweeper etc.
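
For readers less familiar with that pattern, here is a minimal sketch of the idea, assuming hypothetical trait and type names rather than the exact API added here: a sync store gets exposed through the async interface by returning already-completed futures.

```rust
// Illustrative sketch only: hypothetical traits, not the exact `KVStore`/`KVStoreSync` API.
use std::future::Future;
use std::pin::Pin;

// An async store interface returning boxed futures (akin to the async `KVStore`).
pub trait AsyncStore {
	fn write(
		&self, key: &str, value: Vec<u8>,
	) -> Pin<Box<dyn Future<Output = Result<(), std::io::Error>> + Send>>;
}

// A blocking store interface (akin to `KVStoreSync`).
pub trait SyncStore {
	fn write(&self, key: &str, value: Vec<u8>) -> Result<(), std::io::Error>;
}

// Adapter exposing a sync store through the async interface by returning
// already-completed futures, so handlers only need to target the async trait.
pub struct SyncWrapper<S: SyncStore>(pub S);

impl<S: SyncStore> AsyncStore for SyncWrapper<S> {
	fn write(
		&self, key: &str, value: Vec<u8>,
	) -> Pin<Box<dyn Future<Output = Result<(), std::io::Error>> + Send>> {
		let res = self.0.write(key, value);
		Box::pin(async move { res })
	}
}
```

The `*Sync` wrapper variants can then delegate to the async handlers and expect the returned futures to resolve immediately whenever the underlying store is synchronous.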

cc @martinsaposnic

@tnull tnull requested a review from TheBlueMatt September 5, 2025 14:31
@ldk-reviews-bot commented Sep 5, 2025

👋 Thanks for assigning @TheBlueMatt as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@tnull tnull force-pushed the 2025-01-liquidity-persistence branch 2 times, most recently from 124211d to 26f3ce3 on September 5, 2025 14:41
@tnull tnull self-assigned this Sep 5, 2025
@tnull tnull added the "weekly goal" label (Someone wants to land this this week) Sep 5, 2025
@tnull tnull added this to the 0.2 milestone Sep 5, 2025
@tnull tnull moved this to Goal: Merge in Weekly Goals Sep 5, 2025
@tnull tnull force-pushed the 2025-01-liquidity-persistence branch 4 times, most recently from a98dff6 to d630c4e on September 5, 2025 14:58
codecov bot commented Sep 5, 2025

Codecov Report

❌ Patch coverage is 64.91935% with 348 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.54%. Comparing base (981e95f) to head (4e4404d).
⚠️ Report is 17 commits behind head on main.

Files with missing lines | Patch % | Lines
--- | --- | ---
lightning-liquidity/src/lsps2/service.rs | 50.16% | 141 Missing and 13 partials ⚠️
lightning-liquidity/src/manager.rs | 75.60% | 54 Missing and 6 partials ⚠️
lightning-liquidity/src/events/event_queue.rs | 61.06% | 47 Missing and 4 partials ⚠️
lightning-liquidity/src/lsps5/service.rs | 69.47% | 25 Missing and 4 partials ⚠️
lightning-liquidity/src/persist.rs | 71.26% | 16 Missing and 9 partials ⚠️
lightning-liquidity/src/lsps0/ser.rs | 53.57% | 10 Missing and 3 partials ⚠️
lightning-background-processor/src/lib.rs | 76.08% | 8 Missing and 3 partials ⚠️
lightning-liquidity/src/lsps1/client.rs | 0.00% | 2 Missing ⚠️
lightning-liquidity/src/lsps5/msgs.rs | 87.50% | 0 Missing and 2 partials ⚠️
lightning-liquidity/src/lsps5/url_utils.rs | 91.66% | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4059      +/-   ##
==========================================
- Coverage   88.60%   88.54%   -0.07%     
==========================================
  Files         176      178       +2     
  Lines      132126   133668    +1542     
  Branches   132126   133668    +1542     
==========================================
+ Hits       117072   118353    +1281     
- Misses      12380    12590     +210     
- Partials     2674     2725      +51     
Flag | Coverage Δ
--- | ---
fuzzing | 21.97% <25.98%> (+0.04%) ⬆️
tests | 88.37% <64.41%> (-0.07%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from d630c4e to 70118e7 on September 5, 2025 15:15
@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from 70118e7 to dd43edc on September 5, 2025 15:28
@martinsaposnic martinsaposnic (Contributor) commented:
This all LGTM.

I have a small concern: maybe I'm being a little paranoid, but read_lsps2_service_peer_states and read_lsps5_service_peer_states pull every entry from the KVStore into memory with no limit. That could lead to unbounded memory usage and an eventual crash. Maybe we can add a limit on how many entries we load into memory to protect against this DoS?

Not sure how realistic this is, though. Maybe an attacker could have access to, or share, the same storage as the victim and dump effectively infinite data onto disk. In that scenario the victim would probably be vulnerable to other attacks too, but still..

@tnull tnull (Contributor Author) commented Sep 5, 2025

> I have a small concern: maybe I'm being a little paranoid, but read_lsps2_service_peer_states and read_lsps5_service_peer_states pull every entry from the KVStore into memory with no limit. That could lead to unbounded memory usage and an eventual crash. Maybe we can add a limit on how many entries we load into memory to protect against this DoS?

Reading state from disk (currently) happens on startup only, so crashing wouldn't be the worst thing; we would simply fail to start up properly. Some even argue that we need to panic if we hit any IO errors at this point to escalate to an operator. We could add some safeguard/upper bound, but I'm honestly not sure what it would protect against.

> Not sure how realistic this is, though. Maybe an attacker could have access to, or share, the same storage as the victim and dump effectively infinite data onto disk. In that scenario the victim would probably be vulnerable to other attacks too, but still..

Heh, well, if we assume the attacker has write access to our KVStore, we're very very screwed either way. Crashing could be the favorable outcome then, actually.
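
For context, a rough sketch of what such a startup read amounts to; the store trait, namespace, and key layout below are placeholders, not the actual `read_lsps2_service_peer_states`/`read_lsps5_service_peer_states` code.

```rust
// Illustrative sketch: placeholder store trait and key layout.
use std::collections::HashMap;
use std::str::FromStr;

use bitcoin::secp256k1::PublicKey;

// Stand-in for the blocking store interface used at startup.
trait Store {
	fn list(&self, namespace: &str) -> Result<Vec<String>, std::io::Error>;
	fn read(&self, namespace: &str, key: &str) -> Result<Vec<u8>, std::io::Error>;
}

// Pull every persisted peer state into memory, keyed by the counterparty's
// node id encoded in the key. Note there is no cap on the number of entries,
// which is what the comment above is concerned about.
fn read_peer_states<S: Store>(
	store: &S, namespace: &str,
) -> Result<HashMap<PublicKey, Vec<u8>>, std::io::Error> {
	let mut states = HashMap::new();
	for key in store.list(namespace)? {
		let node_id = PublicKey::from_str(&key).map_err(|_| {
			std::io::Error::new(std::io::ErrorKind::InvalidData, "invalid key")
		})?;
		// The real code would deserialize these bytes into the peer state type.
		let bytes = store.read(namespace, &key)?;
		states.insert(node_id, bytes);
	}
	Ok(states)
}
```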

@ldk-reviews-bot commented:

🔔 1st Reminder

Hey @TheBlueMatt! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from dd43edc to f73146b on September 8, 2025 07:37
@@ -45,6 +46,10 @@ pub struct LSPS2GetInfoRequest {
pub token: Option<String>,
}

impl_writeable_tlv_based!(LSPS2GetInfoRequest, {
Collaborator:

Do we really want to have two ways to serialize all these types? Wouldn't it make more sense to just use the serde serialization we already have and wrap that so that it can't all be misused?

Contributor Author:

Yes, I think I'd be in favor of using TLV serialization for our own persistence.

Note that the compat guarantees of LSPS0/the JSON/serde format might not exactly match what we require in LDK, and our Rust representation might also diverge from the pure JSON impl. On top of that, JSON is of course much less efficient.
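
For illustration, the TLV-based approach looks roughly like this on a made-up type; this is not one of the actual LSPS types touched in this PR.

```rust
// Illustrative only: a made-up struct showing LDK's TLV serialization macro,
// which we'd use for our own persistence instead of the serde/JSON wire format.
use lightning::impl_writeable_tlv_based;
use lightning::util::ser::Writeable;

struct ExampleRequest {
	scid: u64,
	token: Option<String>,
}

impl_writeable_tlv_based!(ExampleRequest, {
	(0, scid, required),
	(2, token, option),
});

fn encode_example() -> Vec<u8> {
	let req = ExampleRequest { scid: 42, token: None };
	// Compact, versionable TLV bytes rather than JSON text.
	req.encode()
}
```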

Collaborator:

Hmm, is there some easy way to avoid exposing that in the public API, then? Maybe a wrapper struct or extension trait for serialization somehow? Seems kinda like a footgun for users, I think?

Contributor Author:

> Hmm, is there some easy way to avoid exposing that in the public API, then? Maybe a wrapper struct or extension trait for serialization somehow? Seems kinda like a footgun for users, I think?

Not quite sure I understand the footgun? You mean because these types then have Writeable as well as Serialize implementations on them and users might wrongly pick Writeable when they use the types independently from/outside of lightning-liquidity?

Collaborator:

Sure, for example. Someone who uses serde presumably has some wrapper that serde-writes Writeable structs and suddenly their code could read/compile totally fine and be reading the wrong kind of thing. If they have some less-used codepaths (e.g. writing Events before they process them and then removing them again after) they might not notice immediately.

Contributor Author:

> Sure, for example. Someone who uses serde presumably has some wrapper that serde-writes Writeable structs and suddenly their code could read/compile totally fine and be reading the wrong kind of thing.

I'm confused - Writeable is an LDK concept not connected to serde? Do you mean Serialize? But that also has a completely separate API? So how would they trip up? You mean they'd confuse Writeable and Serialize?

) -> Pin<Box<dyn Future<Output = Result<(), lightning::io::Error>> + Send>> {
	let outer_state_lock = self.per_peer_state.read().unwrap();
	let mut futures = Vec::new();
	for (counterparty_node_id, peer_state) in outer_state_lock.iter() {
Collaborator:

Huh? Why would we ever want to do a single huge persist pass and write every peer's state at once? Shouldn't we be doing this iteratively? Same applies in the LSPS2 service.

Contributor Author:

Yes, only persisting what's needed/changed will be part of the next PR as it ties into how we wake the BP to drive persistence (cf. "Avoid re-persisting peer states if no changes happened (needs_persist flag everywhere)" bullet over at #4058 (comment)).

Collaborator:

I'm confused why we're adding this method then? If it's going to be removed in the next PR in the series we should just not add it in the first place.

Contributor Author:

No, it's not gonna be removed, but extended: PeerState (here as well as in LSPS2) will gain a dirty/needs_persist flag and we'd simply skip persisting any entries that haven't been changed since the last persistence round.

Collaborator:

That seems like a weird design if we need to persist something immediately while it's being operated on - we have the node in question, so why walk a whole peer list? Can you put up the followup code so we can see how it's going to be used? Given this PR is mostly boilerplate I honestly wouldn't mind it being a bit bigger, as long as the code isn't too crazy.

@tnull tnull (Contributor Author) commented Sep 11, 2025:

> That seems like a weird design if we need to persist something immediately while it's being operated on - we have the node in question, so why walk a whole peer list?

Yes, this is why persist_peer_state is a separate method - for inline persistence, where we already hold the lock to the peer state, we'd just call that. For the general/eventual persistence, the background processor task calls LiquidityManager::persist, which calls through to the respective LSPS*ServiceHandler::persist methods, which then only persist the entries marked dirty since the last persistence round.
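
Roughly, the shape being described, with hypothetical field and type names rather than the actual follow-up code:

```rust
// Illustrative sketch: hypothetical types showing the `needs_persist` idea.
use std::collections::HashMap;
use std::sync::{Mutex, RwLock};

struct PeerState {
	// ... actual per-peer data elided ...
	needs_persist: bool,
}

struct ServiceHandler {
	per_peer_state: RwLock<HashMap<String, Mutex<PeerState>>>,
}

impl ServiceHandler {
	// Inline persistence: called where we already hold the peer's state and
	// need it written out immediately.
	fn persist_peer_state(&self, _node_id: &str, _state: &PeerState) {
		// ... write this single entry to the `KVStore` ...
	}

	// Eventual persistence, driven by the background processor: walk the map
	// but skip any entry that hasn't changed since the last round.
	fn persist(&self) {
		let outer = self.per_peer_state.read().unwrap();
		for (node_id, peer_state) in outer.iter() {
			let mut state = peer_state.lock().unwrap();
			if !state.needs_persist {
				continue;
			}
			self.persist_peer_state(node_id, &state);
			state.needs_persist = false;
		}
	}
}
```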

> Can you put up the followup code so we can see how it's going to be used? Given this PR is mostly boilerplate I honestly wouldn't mind it being a bit bigger, as long as the code isn't too crazy.

Sure, will do as soon as it's ready and in a coherent state, although I had hoped to land this PR this week.

@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from f73146b to 2971982 on September 9, 2025 07:35
@tnull tnull (Contributor Author) commented Sep 9, 2025

Rebased to address a minor conflict.

@tnull tnull requested a review from TheBlueMatt September 10, 2025 07:22
@TheBlueMatt TheBlueMatt (Collaborator) left a comment:

Responded to the outstanding comments; not quite sure I fully get all the rationale here.

@tnull tnull requested a review from TheBlueMatt September 10, 2025 12:41
@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from 3d27775 to b362b71 on September 19, 2025 08:52
We add `KVStore` to `LiquidityManager`, which will be used in the next
commits. We also add a `LiquidityManagerSync` wrapper that wraps the
`LiquidityManager` interface, which will soon become async due to usage
of the async `KVStore`.
@tnull tnull force-pushed the 2025-01-liquidity-persistence branch 3 times, most recently from ca864cb to 5fb2ae3 on September 19, 2025 09:26
@tnull tnull requested a review from TheBlueMatt September 19, 2025 09:27
@tnull tnull (Contributor Author) commented Sep 19, 2025

Please let me know if/when I can squash fixups.

@TheBlueMatt TheBlueMatt (Collaborator) commented:

Feel free.

@ldk-reviews-bot commented:

🔔 2nd Reminder

Hey @TheBlueMatt @martinsaposnic! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

.. this is likely only temporarily necessary, as we can drop our own
`dummy_waker` implementation once we bump MSRV.
We add a simple `persist` call to `LSPS2ServiceHandler` that sequentially
persists all the peer states under keys that encode their node ids.
We add a simple `persist` call to `LSPS5ServiceHandler` that sequentially
persists all the peer states under keys that encode their node ids.
We add a simple `persist` call to `EventQueue` that persists it under an
`event_queue` key.
We read any previously-persisted state upon construction of
`LiquidityManager`.
We let the background processor task regularly call
`LiquidityManager::persist`. We also change the semantics of the `Future`
for waking the background processor so it is also used when we need to
repersist (which we'll do in the next commit).
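
Conceptually, the background-processor side looks something like the sketch below; it uses tokio primitives purely for illustration and is not the actual `lightning-background-processor` code.

```rust
// Conceptual sketch: persist on a timer and whenever we're woken because
// something changed. The channel stands in for the BP's wake `Future`.
use std::time::Duration;

async fn liquidity_persist_loop<F, Fut>(persist: F, mut wake: tokio::sync::mpsc::Receiver<()>)
where
	F: Fn() -> Fut,
	Fut: std::future::Future<Output = Result<(), std::io::Error>>,
{
	let mut interval = tokio::time::interval(Duration::from_secs(60));
	loop {
		tokio::select! {
			_ = interval.tick() => {},
			_ = wake.recv() => {},
		}
		if let Err(e) = persist().await {
			eprintln!("lightning-liquidity persistence failed: {e}");
		}
	}
}
```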
.. we only persist the event queue if necessary and wake the BP to do so
when something changes.
.. we only persist the service handler if necessary.
.. to allow access in a non-async context
.. and wrap them accordingly for the `LSPS2ServiceHandlerSync` variant.
.. we only persist the service handler if necessary.
We add a simple test that runs the LSPS2 flow, persists, and ensures we
recover the service state after reinitializing from our `KVStore`.
We add a simple test that runs the LSPS5 flow, persists, and ensures we
recover the service state after reinitializing from our `KVStore`.
Previously, we'd persist peer states to the `KVStore`, but, while we
eventually pruned them from our in-memory state, we wouldn't remove them
from the `KVStore`.

Here, we change this and regularly prune and delete peer state entries
from the `KVStore`.
Note we still prune the state-internal data on peer disconnection, but
leave removal to our (BP-driven) async `persist` calls.
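
A conceptual sketch of that pruning pass; the closures stand in for the store's list/remove operations and the names are hypothetical.

```rust
// Illustrative sketch: during the regular persist pass, also delete entries for
// peers that have since been pruned from the in-memory map.
use std::collections::HashSet;

fn prune_stale_entries<L, R>(
	list_persisted_keys: L, remove_entry: R, in_memory_keys: &HashSet<String>,
) -> Result<(), std::io::Error>
where
	L: Fn() -> Result<Vec<String>, std::io::Error>,
	R: Fn(&str) -> Result<(), std::io::Error>,
{
	for key in list_persisted_keys()? {
		if !in_memory_keys.contains(&key) {
			// No longer tracked in memory: drop the stale entry from the store.
			remove_entry(&key)?;
		}
	}
	Ok(())
}
```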
@tnull tnull force-pushed the 2025-01-liquidity-persistence branch from 5fb2ae3 to 4e4404d on September 20, 2025 18:28
@tnull tnull (Contributor Author) commented Sep 20, 2025

> Feel free.

Squashed.

> git diff-tree -U2 5fb2ae31d 4e4404d2f
>

Labels
weekly goal (Someone wants to land this this week)
Projects
Status: Goal: Merge
Development

Successfully merging this pull request may close these issues.

lightning-liquidity: Eventually archive or remove persisted state
lightning-liquidity Persistence Tracking Issue
4 participants