Stream market: payment channels, access control, and datastream pub/sub #20
Merged

lastmeta (Member) commented on Mar 24, 2026:
- Full reclaimChannel implementation (PATH A: sender has EVR, PATH B: Mundo-assisted)
- _grantChannelAccess: bridge between on-chain channel claims and Nostr access gates
- _channelPayForObservation: auto-opens channels and pays per observation
- stream_name on ChannelCommitment: per-stream payment grants (fixes multi-stream over-granting)
- Fix record_subscription resetting last_paid_seq on reconnect
- Pass encrypted flag through to add_publication
- Channel UI improvements (channels.html, dashboard.html)
Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Fixes over-granting after a payment gap: use commitment.pay_amount_sats instead of the total cumulative amount paid, so subscribers are granted only the observations they just paid for. Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
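The grant rule above can be sketched as a pure function (names and the per-observation pricing model here are hypothetical, not the actual satorineuron API):

```python
def grant_window(last_granted_seq: int, pay_amount_sats: int,
                 price_sats_per_obs: int) -> range:
    # Derive the grant from THIS payment's amount (commitment.pay_amount_sats),
    # not the cumulative total, so a payment gap cannot retroactively
    # unlock observations the subscriber missed and never paid for.
    n = pay_amount_sats // price_sats_per_obs
    return range(last_granted_seq + 1, last_granted_seq + 1 + n)

# A 300-sat payment at 100 sats/observation grants exactly seqs 11..13:
assert list(grant_window(10, 300, 100)) == [11, 12, 13]
```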
sendChannelPayment now converts receiver_pubkey to a P2PKH address via EvrmoreWallet.generateAddress before passing it to _compileClaimOnP2SHMultiSigStart, which expects an address string rather than a raw hex pubkey. Added a receiver_nostr_pubkey column to the channels table (with migration) so sendChannelPayment can pass it to publish_commitment for correct Nostr p-tag routing; openChannel and _channelHandleOpen both persist the value.
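A minimal, idempotent sketch of the channels-table migration described above; the table and column names follow the commit message, but the real schema may differ:

```python
import sqlite3

def migrate_add_receiver_nostr_pubkey(conn: sqlite3.Connection) -> None:
    # Add the column only if it is missing, so the migration
    # is safe to run on every startup.
    cols = [row[1] for row in conn.execute("PRAGMA table_info(channels)")]
    if "receiver_nostr_pubkey" not in cols:
        conn.execute("ALTER TABLE channels ADD COLUMN receiver_nostr_pubkey TEXT")
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE channels (id INTEGER PRIMARY KEY, receiver_pubkey TEXT)")
migrate_add_receiver_nostr_pubkey(conn)  # adds the column
migrate_add_receiver_nostr_pubkey(conn)  # no-op on re-run
```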
Channels are permanent — removed delete_channel entirely. When a channel is fully drained, the next observation auto-refunds the existing P2SH address via the new refundChannel method instead of opening a new channel.
- Replaced all 4 delete_channel call sites with appropriate handling: settlement/tombstone/claim just log (remainder already 0); reclaim and tombstone zero remainder_sats to prevent txs against spent UTXOs
- Added refundChannel: sends new SATORI to the existing P2SH using the stored redeem_script via producePaymentChannelFromScript
- _channelPayForObservation now refunds exhausted channels instead of opening new ones; it only opens a channel if none exists to that provider
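The decision _channelPayForObservation now makes can be sketched as follows (illustrative names, not the actual satorineuron types):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Channel:
    provider: str
    remainder_sats: int

def next_channel_action(channel: Optional[Channel]) -> str:
    # Channels are never deleted: a drained channel is topped up
    # at its existing P2SH address rather than replaced.
    if channel is None:
        return "open"    # no channel to this provider yet
    if channel.remainder_sats == 0:
        return "refund"  # drained: refundChannel the existing P2SH
    return "pay"         # normal per-observation payment
```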
Prevents sellers from gaming buyers by flooding observations to drain channels. Buyer never pays more than once per half-cadence. If an observation arrives during cooldown, exactly one deferred payment is scheduled at cooldown end to signal the seller that the buyer is still subscribed. Streams with no cadence (null/0) are not rate-limited. Also includes: persistent channels (never deleted, refund when drained), refundChannel method, tombstone fallback zeroes remainder.
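The anti-draining rule above reduces to a small pure function (names hypothetical; times in seconds):

```python
from typing import Optional, Tuple

def should_pay(now: float, last_paid_at: float,
               cadence: float) -> Tuple[bool, Optional[float]]:
    # Buyer never pays more than once per half-cadence; streams with
    # no cadence (null/0) are not rate-limited.
    if not cadence:
        return True, None
    cooldown = cadence / 2
    if now - last_paid_at >= cooldown:
        return True, None
    # Inside the cooldown: schedule exactly one deferred payment at its
    # end, signalling the seller that the buyer is still subscribed.
    return False, last_paid_at + cooldown
```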
getReadyToSend() must be called before producePaymentChannelFromScript to ensure electrumx connection, SATORI divisibility, and UTXO set are populated. Without this, the wallet's divisibility stays at 0 after restart, causing roundSatsDownToDivisibility to return 0 and _gatherSatoriUnspents to fail with "not enough satori to send".
claimChannel: CMutableTransaction.deserialize() yields immutable CTxIn/CTxOut sub-objects, causing _compileClaimOnP2SHMultiSigEnd to crash with "Object is immutable" when setting scriptSig. Convert vin/vout to mutable after deserialize. sendChannelPayment: add getReadyToSend guard when wallet.divisibility is 0, as belt-and-suspenders alongside the satorilib default change.
… multi-relay verified
- Tombstone fallback: Alice claims + publishes tombstone only (no settlement), Bob receives tombstone, remainder zeroed 9700→0, channel persists
- Auto-refund trigger: observation arrives with remainder=0, refundChannel auto-fires (txid ca42e825...), immediately pays (remainder 10000→9900)
- Full cycle verified: claim → tombstone → zero → obs → auto-refund → auto-pay
Replace the single 5-minute polling loop with one asyncio task per active data source, each firing at exactly its own cadence. Scheduling is anchored to a UTC offset so sources spread across time and never drift, regardless of fetch duration or neuron restarts.
- Add offset_seconds column to data_sources (random default, capped at min(cadence, 24h))
- _networkDataSourceManager reconciles tasks with the DB every 60s
- _networkDataSourceWorker: per-source loop with a clock-anchored grid
- _networkFetchOneDataSource extracted for testability
- UI and API accept an optional offset
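The clock-anchored grid above amounts to computing the next tick on the set {offset + k·cadence} from wall-clock time alone (an illustrative sketch; the worker's internals may differ):

```python
def next_fire(now: float, cadence: int, offset: int) -> float:
    # Next grid point strictly after `now`. Because the result depends only
    # on the clock, slow fetches and neuron restarts never shift the
    # schedule, and per-source offsets spread sources across time.
    return ((now - offset) // cadence + 1) * cadence + offset
```

For cadence 60 and offset 10 the grid is 10, 70, 130, …; at t=100 the next fire is 130, and firing exactly on a grid point schedules the one after it.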
Existing rows would get NULL (treated as 0), causing all sources to fire simultaneously. The migration now assigns a random offset per source, respecting the min(cadence, 86400) cap.
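A sketch of that backfill, with the schema guessed from the commit message:

```python
import random
import sqlite3

def backfill_offsets(conn: sqlite3.Connection) -> None:
    # Give each existing source a random offset in [0, min(cadence, 86400))
    # so NULL offsets no longer collapse every source onto the same instant.
    rows = conn.execute(
        "SELECT id, cadence FROM data_sources WHERE offset_seconds IS NULL"
    ).fetchall()
    for source_id, cadence in rows:
        cap = min(cadence, 86400) if cadence else 86400
        conn.execute(
            "UPDATE data_sources SET offset_seconds = ? WHERE id = ?",
            (random.randrange(cap), source_id))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_sources (id INTEGER PRIMARY KEY, "
             "cadence INTEGER, offset_seconds INTEGER)")
conn.executemany("INSERT INTO data_sources (cadence) VALUES (?)",
                 [(300,), (3600,), (604800,)])
backfill_offsets(conn)
```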
Buyer side: store the last sent commitment in the channel DB and re-publish it when a paid subscription is stale (no observation received for 2× cadence). This acts as an "I'm online, resend" heartbeat to the seller.

Seller side: after granting channel access on commitment receipt, immediately check the local DB for the latest published observation and resend it to the subscriber. This creates a rapid catch-up loop: buyer pays → seller sends → buyer pays → seller sends → until current.

Also adds get_observation_by_seq to NetworkDB and fixes test mock imports for relay_manager and lite_engine.
A fresh payment (remainder decreased) means the buyer just received the observation and is paying for the next one; resending would be redundant. Only resend when remainder_sats matches the prior stored commitment, which indicates the buyer is re-publishing as an "I'm back online" reminder after missing observations.
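Both sides of the heartbeat/catch-up protocol reduce to two small predicates (names hypothetical, times in seconds):

```python
def is_stale(now: float, last_observation_at: float, cadence: float) -> bool:
    # Buyer side: no observation for 2x cadence on a paid subscription
    # triggers re-publishing the last commitment as a heartbeat.
    return bool(cadence) and (now - last_observation_at) > 2 * cadence

def should_resend(commitment_remainder_sats: int,
                  stored_remainder_sats: int) -> bool:
    # Seller side: an unchanged remainder means a re-published "I'm back
    # online" commitment, so resend the latest observation; a decreased
    # remainder is a fresh payment for the NEXT observation.
    return commitment_remainder_sats == stored_remainder_sats
```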