Network transactions tracking issue #692
Comments
Great write-up! I think this covers basically everything. A few comments:
I was thinking of the
I'm inclined not to do this right now. If we do want to do this, it'll probably have to be done at the
Yeah - I wouldn't worry about this for now.
I think this is a separate issue, and ideally we may want to run the node in different processes on a single machine anyway, because the cost of restarting different components is different. For example, restarting the RPC or block producer is currently pretty cheap, but restarting the store is very expensive. So, if the RPC or block producer crash, ideally we'd restart only them.
This could be solved by making the state of the
The way I'm thinking the

There are some discussions in 0xPolygonMiden/miden-base#938 and 0xPolygonMiden/miden-base#831 (comment) about potentially modifying this interface.
I think this gets complicated quickly because part of the state also exists in the mempool and in gRPC comms. Ideally there would be one single source of truth (

As an example: part of the overall state is the gRPC comms between the mempool, block-producer and the tx builder, e.g. if the tx builder dies while it is receiving a new note or the status of a network tx, then that information is lost forever because the mempool won't resend it. The issue is that these comms aren't atomic, so there are many possible race failures.

A similar example for #23 is a

Some options:
The issue is that the more components store their view locally, the more complex recovery becomes (I think). But we can punt on that for now, I think.
Agree that this could get complicated, but also not sure if we can push everything to the

So, overall, I think our persistence (at least for the
I think so long as we assign correct ownership to the data, we can make it work. This implies that
I would say that if

Though, maybe the solution is that
This likely still needs feedback, so I'll avoid adding (most) sub-issues just yet.
This serves as the tracking issue for the initial implementation of network transactions.
Prior discussion was held in #606, with both major threads containing valuable information. I'll summarize the results below, but still recommend reading through the discussion.
What are network transactions?
Network transactions are transactions that are executed and proven by the node itself. Users can trigger these by creating notes targeting network accounts. The node will detect these and create network transactions to execute/consume them.
Design overview
Execution and proving of these network notes will be handled by a new node component `tx-builder` (real name pending). This component will receive network notes from the mempool, which are then queued until they can be successfully executed against the target account and submitted as a normal transaction via RPC.
The `tx-builder` can fetch the required account state from the store via RPC. Ideally we would interact with the RPC component so that we don't have write access to the store and get tx proof verification via RPC in-line. Using the `store` and `block-producer` clients is also an option, though.
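To make the flow concrete, here is a minimal sketch of that receive/queue/execute loop. Every name below (`TxBuilder`, `NetworkNote`, `execute_against_account`, `submit_via_rpc`) is an illustrative stand-in, not the node's actual API.

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative stand-ins; the real node has its own account id and note types.
type AccountId = u64;

struct NetworkNote {
    target: AccountId,
}

struct ProvenTx;

struct TxBuilder {
    /// Committed network notes, queued per target account until they can be
    /// executed successfully.
    queues: HashMap<AccountId, VecDeque<NetworkNote>>,
}

impl TxBuilder {
    /// Called whenever the mempool hands us a newly committed network note.
    fn on_committed_note(&mut self, note: NetworkNote) {
        let target = note.target;
        self.queues.entry(target).or_default().push_back(note);
        self.try_execute(target);
    }

    /// Consume queued notes for one account; each success is submitted as a
    /// normal transaction via RPC, each failure leaves the note queued for a
    /// later retry.
    fn try_execute(&mut self, account: AccountId) {
        let Some(queue) = self.queues.get_mut(&account) else { return };
        while let Some(note) = queue.front() {
            match execute_against_account(account, note) {
                Ok(tx) => {
                    submit_via_rpc(tx);
                    queue.pop_front();
                }
                Err(_) => break, // retry once the account state advances
            }
        }
    }
}

// Stubs standing in for execution/proving and RPC submission.
fn execute_against_account(_account: AccountId, _note: &NetworkNote) -> Result<ProvenTx, ()> {
    Ok(ProvenTx)
}

fn submit_via_rpc(_tx: ProvenTx) {}
```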
Initial simplifications
Restrictions we impose to simplify the implementation. These may be relaxed in the future once things stabilize and we come up with better designs.
Network accounts
Network accounts will be fully managed by the network. This means the `tx-builder` can assume that only it will update network accounts. Otherwise we would have to account for external transactions changing the account state under our feet. With this simplification we can fetch the latest account state from the store and know that it cannot change without our intervention. This also means we only need the account state for in-progress/queued network notes.
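As a sketch of what this invariant buys us (all names hypothetical): the builder can cache a network account's state after a single store fetch and invalidate nothing, because only its own updates can change it.

```rust
use std::collections::HashMap;

type AccountId = u64;

#[derive(Clone, Default)]
struct AccountState; // stand-in for real account data

struct NetworkAccountCache {
    states: HashMap<AccountId, AccountState>,
}

impl NetworkAccountCache {
    /// Since no external transaction can touch a network account, a state
    /// fetched once stays valid until *we* change it.
    fn get_or_fetch(&mut self, id: AccountId) -> &AccountState {
        self.states.entry(id).or_insert_with(|| fetch_from_store(id))
    }

    /// Applied after we successfully submit our own network transaction.
    fn apply_own_update(&mut self, id: AccountId, new_state: AccountState) {
        self.states.insert(id, new_state);
    }
}

fn fetch_from_store(_id: AccountId) -> AccountState {
    AccountState // would be a store RPC call in practice
}
```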
Note status
Only consider network notes that have been committed. This comes at the expense of extra latency, and removes the possibility of note erasure (where the creating user tx and the consuming network tx land in the same block).
This simplification means we don't need to worry about the user tx reverting. The mempool can send network notes to the builder as they are committed.
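One hypothetical way to picture the mempool side of this rule; the channel and type names below are mine, not the mempool's real interface.

```rust
use std::sync::mpsc::Sender;

struct NetworkNote;

struct CommittedBlock {
    /// Network notes created by transactions in this block.
    network_notes: Vec<NetworkNote>,
}

/// Called by the mempool once a block commits. Forwarding only at commit time
/// means the builder never observes a note whose creating user tx could still
/// be reverted.
fn on_block_committed(block: CommittedBlock, to_builder: &Sender<NetworkNote>) {
    for note in block.network_notes {
        // If the builder is down, the send simply fails; it re-syncs from the
        // store on restart (see the fault tolerance section below).
        let _ = to_builder.send(note);
    }
}
```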
Execution stacking
The builder could consume multiple notes within the same transaction for efficiency, but it is currently not possible to build a transaction by consuming notes incrementally, i.e. we have to apply all notes at once, atomically. This can be revisited once we can execute notes incrementally.
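To spell out the constraint with hypothetical signatures (nothing here is the real execution API):

```rust
struct Account;
struct Note;
struct Tx;
struct ExecError;

/// What exists today: all notes for a transaction are applied in one atomic
/// step. If any note in the set fails, no transaction is produced at all.
fn execute_all(_account: &Account, _notes: &[Note]) -> Result<Tx, ExecError> {
    unimplemented!("prove all notes together, or fail as a whole")
}

// What stacking would need (not currently possible): build the transaction
// note by note, dropping only the notes that fail.
//
// fn begin_tx(account: &Account) -> TxInProgress;
// impl TxInProgress {
//     fn try_consume(&mut self, note: Note) -> Result<(), ExecError>;
//     fn finish(self) -> Tx;
// }
```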
Note types
Notes will be restricted to some subset of known notes initially. This makes it simpler to validate that a note can be executed against an account.
Open questions
Validation
Multi-account txs
If txs can affect multiple accounts one day, is this something we would want to add to network txs? Seems unlikely at first blush.
Fault tolerance
What happens if some subset of the components fail? Currently we run all components in the same process, so one failure brings down all of them. This is not ideal in the long run: each component would ideally survive and somehow reset on its own. This isn't only an issue for this feature, but rather a broader concern. I think for now we can ignore piecemeal failure.
We do need to handle the node restarting. With the current design of only allowing committed network notes, we can fetch the set of initial notes to process from the store.
This does mean we could get different results, though. Such a query could also return network notes that we previously discarded as invalid (e.g. targeting a non-existent account) but which are valid now (the account now exists). We would need to decide on a policy for this.
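A sketch of one possible recovery path; the store query and the "valid now wins" policy shown here are assumptions, not a settled decision.

```rust
struct NetworkNote {
    target_account_exists: bool, // stand-in for real validation inputs
}

/// Hypothetical store query: committed network notes that no transaction has
/// consumed yet. In practice this would be a gRPC call to the store.
fn unconsumed_network_notes() -> Vec<NetworkNote> {
    Vec::new()
}

/// Rebuild the builder's queue from scratch on startup. Note the policy baked
/// in here: every note is re-validated against *current* state, so a note we
/// previously discarded (e.g. its target account did not exist) is picked up
/// if it validates now.
fn recover_queue() -> Vec<NetworkNote> {
    unconsumed_network_notes()
        .into_iter()
        .filter(|note| note.target_account_exists)
        .collect()
}
```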
RPC design
How complex can network account state become? Will it require a streaming endpoint, or can we assume that a single request will retrieve the full state at once?
This will inform our RPC endpoint design (if any), since it may diverge from the normal RPC account state retrieval, which needs to handle streaming of some kind.
If the builder gets direct access to the `store` and `block-producer` (instead of via `RPC`), then it may make sense to begin splitting the `store` gRPC definitions into multiple services.
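For illustration only, here is one way such a split could look, sketched as Rust traits in the shape `tonic` generates from protobuf service definitions. The reader/writer boundary and every name below are assumptions, not a decided design.

```rust
// Request/response placeholders; real definitions would live in protobuf.
struct AccountStateRequest;
struct AccountStateResponse;
struct ApplyBlockRequest;
struct ApplyBlockResponse;
struct Status;

/// Hypothetical read-only service, safe to expose to components such as the
/// tx-builder.
trait StoreReader {
    fn get_account_state(
        &self,
        req: AccountStateRequest,
    ) -> Result<AccountStateResponse, Status>;
}

/// Hypothetical mutating service, reserved for the block-producer.
trait StoreWriter {
    fn apply_block(&self, req: ApplyBlockRequest) -> Result<ApplyBlockResponse, Status>;
}
```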
Relaxing simplifications aka future expansions
Execution stacking can be added whenever it's possible to incrementally build a tx. We should consider whether we need more information from the mempool wrt ordering of the origin txs, e.g. the network tx will be dependent on all origin txs in the stacked set.
Send network notes as soon as they are accepted, which means we have to account for user tx reverts. This implies we must track account deltas and tx/note dependencies in the builder, though this might already be required to handle network tx reverts.
Expand note types. Fairly self-explanatory, probably just handled on a case-by-case basis.
Allowing external mutation of network accounts. This would likely require closer coupling between the mempool and the builder. The mempool could inform the builder whenever a network account's state changes which in turn could use that to make decisions on its internal set of pending network txs.