docs: v0.14 update for both internal and external #1990
This file was deleted.
@@ -3,18 +3,20 @@ title: "Architecture"

sidebar_position: 2
---

# Network architecture

The network itself consists of five distributed components: store, block-producer, network transaction builder, validator, and RPC.

The components can be run on separate instances when optimised for performance, but can also be run as a single process for convenience. At the moment both of Miden's public networks (testnet and devnet) are operating in single-process mode.

Inter-component communication is done using a gRPC API which is assumed trusted. In other words this _must not_ be public. External communication is handled by the RPC component with a separate external-only gRPC API.

The image below shows a rough example of what a network architecture may look like. Only the more important data flows are pictured to improve clarity.

![](../img/operator_architecture.svg)

## RPC
@@ -33,14 +35,18 @@ It can be trivially scaled horizontally e.g. with a load-balancer in front as sh

The store is responsible for persisting the chain state. It is effectively a database which holds the current state of the chain, wrapped in a gRPC interface which allows querying this state and submitting new blocks.

It receives new blocks from the block-producer, which it then submits to the validator for signing before they are committed on-chain. It then submits the block to the prover, after which the block is marked as proven. Blocks therefore undergo two levels of finalization: `committed` and then `proven`.
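The two-level finalization described above can be sketched as a small state machine. This is purely an illustrative model, not the store's actual API; the status names and methods are assumptions.

```python
from enum import Enum

class BlockStatus(Enum):
    """Finalization levels a block passes through (names are illustrative)."""
    RECEIVED = 0   # submitted by the block-producer, not yet signed
    COMMITTED = 1  # signed by the validator and committed on-chain
    PROVEN = 2     # block proof produced and accepted

class StoreBlockRecord:
    """Minimal sketch of how the store might track a block's finalization."""

    def __init__(self, block_num: int):
        self.block_num = block_num
        self.status = BlockStatus.RECEIVED

    def mark_committed(self, validator_signed: bool) -> None:
        # A block may only be committed once the validator has signed it.
        if not validator_signed:
            raise ValueError("cannot commit an unsigned block")
        self.status = BlockStatus.COMMITTED

    def mark_proven(self) -> None:
        # Proving happens after commitment; this enforces the ordering.
        if self.status is not BlockStatus.COMMITTED:
            raise ValueError("block must be committed before it is proven")
        self.status = BlockStatus.PROVEN

block = StoreBlockRecord(block_num=7)
block.mark_committed(validator_signed=True)
block.mark_proven()
```

The point of the ordering check is that `proven` is strictly the later of the two finalization levels; a block can be `committed` but not yet `proven`, never the reverse.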
It expects that this gRPC interface is _only_ accessible internally, i.e. there is an implicit assumption of trust.

## Block-producer

The block-producer is responsible for aggregating received transactions into blocks and submitting them to the store.

Transactions are placed in a mempool and are periodically sampled to form batches of transactions. These batches are proved, and then periodically aggregated into a block. This proposed block is then submitted to the store.
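The sampling-and-aggregation pipeline above can be sketched as follows. All names and the batch size are illustrative, and real batch proving is a STARK computation, not the string stand-in used here.

```python
def form_batches(mempool: list[str], batch_size: int) -> list[list[str]]:
    """Sample the mempool into fixed-size batches (order-preserving)."""
    return [mempool[i:i + batch_size] for i in range(0, len(mempool), batch_size)]

def prove_batch(batch: list[str]) -> dict:
    """Stand-in for batch proving; the real proof is a STARK, not a string."""
    return {"txs": batch, "proof": f"proof({','.join(batch)})"}

def build_block(proved_batches: list[dict]) -> dict:
    """Aggregate proved batches into a proposed block for the store."""
    return {
        "batches": proved_batches,
        "tx_count": sum(len(b["txs"]) for b in proved_batches),
    }

# Five sampled transactions become three batches, aggregated into one block.
mempool = ["tx1", "tx2", "tx3", "tx4", "tx5"]
proposed_block = build_block([prove_batch(b) for b in form_batches(mempool, batch_size=2)])
```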
|
> **Contributor** commented on lines 48 to +49: Is "proposed block" the right phrasing here? Maybe it should be "constructed block" or "composed block". Also, I think this is missing the part that the block producer first sends the block for a signature to the validator - and then, only once the validator signs the block, it is sent to the store.
Proof generation in production is typically outsourced to a remote machine with appropriate resources. For convenience, it is also possible to perform proving in-process. This is useful when running a local node for test purposes.
@@ -49,7 +55,7 @@ it is also possible to perform proving in-process. This is useful when running a

The network transaction builder monitors the mempool for network notes, and creates transactions consuming these. We call these network transactions, and at present this is the only entity that is allowed to create such transactions. This restriction may be lifted in the future, but for now this component _must_ be enabled to have support for network transactions.
The mempool is monitored via a gRPC event stream served by the block-producer.

@@ -66,3 +72,19 @@ number of failures, preventing resource exhaustion. The threshold can be set wit

The builder also exposes an internal gRPC server that the RPC component uses to proxy debugging endpoints such as `GetNoteError`. In bundled mode this is wired automatically; in distributed mode operators must set `--ntx-builder.url` (or `MIDEN_NODE_NTX_BUILDER_URL`) on the RPC component.
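The hunk context above mentions abandoning a note after some number of failures to prevent resource exhaustion. A minimal sketch of such a cut-off, with an assumed threshold value and illustrative names (the real builder's event handling and flag names may differ):

```python
FAILURE_THRESHOLD = 3  # assumed value; the real threshold is operator-configurable

def process_events(events: list[str], execute) -> tuple[list[str], list[str]]:
    """Consume note events; give up on a note once it fails FAILURE_THRESHOLD times."""
    failures: dict[str, int] = {}
    consumed, abandoned = [], []
    for note_id in events:
        if failures.get(note_id, 0) >= FAILURE_THRESHOLD:
            continue  # already abandoned; do not waste further resources
        try:
            execute(note_id)
            consumed.append(note_id)
        except RuntimeError:
            failures[note_id] = failures.get(note_id, 0) + 1
            if failures[note_id] >= FAILURE_THRESHOLD:
                abandoned.append(note_id)
    return consumed, abandoned

# A note that always fails is abandoned after three attempts; later events
# for it are skipped outright.
def flaky(note_id: str) -> None:
    if note_id == "bad":
        raise RuntimeError("execution failed")

consumed, abandoned = process_events(["good", "bad", "bad", "bad", "bad"], flaky)
```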
## Validator

The validator is responsible for verifying the integrity of the blockchain by signing new blocks before they can be committed.

At the moment this is implemented by having all transactions sent here to be re-executed to double-check their integrity. This also guards against bugs in the proving or execution systems, by backing up the transactions and their private inputs. This forms part of our training wheels while Miden is maturing.
The validator signs a new block if:

- all transactions were previously verified
- the block proof is valid
- the block delta matches the aggregated transaction deltas
- the block header is valid and matches the data
- the block builds on the current chain tip
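The signing rule above is a conjunction of all five conditions, which can be sketched as a single predicate. The field names here are illustrative, not the validator's real data model.

```python
def should_sign(block: dict, chain_tip: str, verified_txs: set[str]) -> bool:
    """Sign only if every condition in the list above holds."""
    return (
        all(tx in verified_txs for tx in block["txs"])      # txs previously verified
        and block["proof_valid"]                            # block proof is valid
        and block["delta"] == block["aggregated_tx_delta"]  # deltas agree
        and block["header_valid"]                           # header matches the data
        and block["parent"] == chain_tip                    # builds on the chain tip
    )

block = {
    "txs": ["tx1", "tx2"],
    "proof_valid": True,
    "delta": {"acct1": 5},
    "aggregated_tx_delta": {"acct1": 5},
    "header_valid": True,
    "parent": "blk_99",
}
ok = should_sign(block, chain_tip="blk_99", verified_txs={"tx1", "tx2"})
```

A single failing condition (for example, building on a stale chain tip) is enough to withhold the signature.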
This file was deleted.
@@ -1,11 +1,9 @@

# Block Producer Component

The block-producer is responsible for ordering transactions into batches, and batches into blocks, and creating the proofs for batches. Proving is usually outsourced to a remote prover but can be done locally if throughput isn't essential, e.g. for test purposes on a local node.
It hosts a single gRPC endpoint to which the RPC component can forward new transactions.

The core of the block-producer revolves around the mempool, which forms a DAG of all in-flight transactions and batches. It also ensures all invariants of the transactions are upheld, e.g. that the account's current state matches the transaction's initial state, that all input notes are valid and unconsumed, and that the transaction hasn't expired.
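The three invariants above can be sketched as an admission predicate. This is a hedged illustration with assumed field names; the real node checks these against its store and in-flight mempool deltas rather than plain dictionaries.

```python
def admits(tx: dict, account_states: dict, unconsumed_notes: set, chain_height: int) -> bool:
    """A transaction is admitted to the mempool only if all invariants hold."""
    return (
        account_states.get(tx["account_id"]) == tx["initial_state"]  # state matches
        and all(n in unconsumed_notes for n in tx["input_notes"])    # notes valid, unspent
        and tx["expiration_block"] > chain_height                    # not yet expired
    )

tx = {
    "account_id": "acct1",
    "initial_state": "h1",
    "input_notes": ["note_a"],
    "expiration_block": 120,
}
accepted = admits(tx, {"acct1": "h1"}, {"note_a"}, chain_height=100)
```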
@@ -17,8 +15,17 @@ the mempool where it can be included in a block.

## Block production

Proven batches are selected from the mempool periodically to form the next block. The block is then built and submitted to the store, which ensures it gets signed by the validator before it is committed. At this point all transactions and batches in the block are marked in the mempool as committed.
|
> **Contributor** commented on lines +18 to +20: I believe the block producer -> store flow is a bit inverted here. Specifically:
## Mempool data pruning

The mempool keeps the `N` most recent blocks locally, to allow incoming transactions a grace period so we can verify their state against the store and the local state deltas in the mempool. Without this overlap, we would constantly be racing transaction checks against the store with newly committed blocks.

After each new block, the `N+1`th oldest block and its batches and transactions are pruned from the mempool state.
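The pruning rule above amounts to a bounded sliding window over recent blocks. A minimal sketch, with `N` and the bookkeeping structure chosen for illustration:

```python
from collections import deque

N = 3  # illustrative window size; the real node's N is a configuration detail

def commit_block(recent: deque, block: dict) -> None:
    """Record a committed block; anything older than the N most recent is pruned."""
    recent.append(block)
    while len(recent) > N:
        # Drops the (N+1)th oldest block together with its txs/batches.
        recent.popleft()

recent: deque = deque()
for height in range(1, 6):
    commit_block(recent, {"height": height, "txs": [f"tx{height}"]})
# After five blocks, only the three most recent remain in local state.
```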
## Transaction lifecycle

@@ -42,5 +49,5 @@ above lifecycle (which effectively shows the happy path). This can occur if:

- The transaction expires before being included in a block.
- Any parent transaction is dropped (which will revert the state, invalidating child transactions).
- It causes proving or any part of block/batch creation to fail repeatedly. This is a fail-safe against unforeseen bugs, removing problematic (but potentially valid) transactions from the mempool to prevent outages.
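The drop conditions above can be sketched as a disjunction; any one of them removes the transaction from the mempool. Field names and the failure limit are illustrative, and dropping a transaction would in turn invalidate its descendants in the DAG.

```python
def should_drop(tx: dict, chain_height: int, dropped: set[str],
                failure_count: int, failure_limit: int = 3) -> bool:
    """A transaction is dropped if any of the listed conditions applies."""
    return (
        tx["expiration_block"] <= chain_height       # expired before inclusion
        or any(p in dropped for p in tx["parents"])  # a parent tx was dropped
        or failure_count >= failure_limit            # repeated proving/build failures
    )

tx = {"expiration_block": 150, "parents": ["tx_a"]}
healthy = should_drop(tx, chain_height=100, dropped=set(), failure_count=0)
```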
> **Contributor** commented: I think this is missing a couple of links: