feat(l1): process blocks in batches when syncing and importing #2174

Open · wants to merge 60 commits into base: main
Conversation

MarcosNicolau
Contributor

@MarcosNicolau MarcosNicolau commented Mar 7, 2025

Motivation
Accelerate syncing!

Description
This PR introduces block batching during full sync:

  1. Instead of computing and storing the state root for each block individually, we now maintain a single state trie for the entire batch and commit it only at the end. This results in one state trie per n blocks instead of one per block, which also reduces storage. A simplified sketch of this flow follows the list below.
  2. The new full sync process:
    • Request 1024 headers
    • Request 1024 block bodies and collect them
    • Once all blocks are received, process them in batches using a single state trie, which is attached to the last block.
  3. Blocks are now stored in a single transaction.
  4. State root, receipts root, and requests root validation is only required for the last block in the batch.
  5. The new add_blocks_in_batch function takes a flag, should_commit_intermediate_tries. When set to true, it stores the trie for each block; this is only needed to make the hive tests pass. Currently this is handled by checking whether the block is within the STATE_TRIES_TO_KEEP range. In a real syncing scenario, my intuition is that it would be better to wait until we are fully synced, then start storing the state of new blocks and pruning once we reach STATE_TRIES_TO_KEEP.
  6. Throughput when syncing is now measured per batch.
  7. A new command was added to import blocks in batches.
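
A rough sketch of that flow is shown below. This is illustrative only: apart from the names add_blocks_in_batch, execute_block_from_state, apply_account_updates_to_trie, and should_commit_intermediate_tries, which this PR introduces, all helpers, types, and signatures here are placeholders rather than the actual code.

```rust
// Simplified sketch of the batched import flow described above.
fn add_blocks_in_batch(
    blocks: &[Block],
    should_commit_intermediate_tries: bool,
) -> Result<(), ChainError> {
    // One state trie is kept open for the whole batch instead of one per block.
    let mut state_trie = open_state_trie_for_parent_of(&blocks[0])?; // placeholder helper
    let mut vm = Evm::new(); // placeholder constructor

    for (i, block) in blocks.iter().enumerate() {
        // Execute without clearing the VM state so the cached state carries
        // over to the next block in the batch.
        let (receipts, account_updates) = execute_block_from_state(&mut vm, block)?;
        apply_account_updates_to_trie(&account_updates, &mut state_trie)?;

        // Intermediate tries (one per block) are only committed when required,
        // e.g. to satisfy the hive tests.
        if should_commit_intermediate_tries {
            commit_trie(&state_trie)?; // placeholder helper
        }

        // State, receipts, and requests roots are validated only for the last
        // block in the batch.
        if i == blocks.len() - 1 {
            validate_roots(block, state_trie.hash()?, &receipts)?; // placeholder helper
        }
    }

    // All blocks, receipts, and the final trie are stored in a single transaction.
    commit_batch(blocks, state_trie) // placeholder helper
}
```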

Considerations:

  1. Optimize account updates: instead of inserting updates into the state trie after each block execution, batch them at the end, merging repeated accounts to reduce insertions and improve performance (see #2216, Optimize account updates in add_blocks_in_batch; a hypothetical merge helper is sketched after this list). Closes #2216.
  2. Improve transaction handling: Avoid committing storage tries to the database separately. Instead, create a single transaction for storing receipts, storage tries, and blocks. This would require additional abstractions for transaction management (see Write batch of blocks in a single transaction in add_blocks_in_batch #2217).
  3. This isn't working for the LEVM backend yet: we need it to cache the execution state and persist it between blocks, since nothing is stored until the end of the batch (see Make add_blocks_in_batch work for LEVM #2218).
  4. In ci(core): benchmark for batch block import #2210, a new CI job is added to run a benchmark comparing the main and head branches using import-in-batch.
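
As a hypothetical illustration of consideration 1, the merge step could look roughly like the following. Address and AccountUpdate are simplified stand-ins for the ethrex types, and a real merge would combine per-field changes (e.g. accumulate storage slot writes) rather than keep only the latest update per account.

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real ethrex types, for illustration only.
type Address = [u8; 20];

struct AccountUpdate {
    address: Address,
    // balance / nonce / code / storage changes elided
}

/// Collect the account updates produced by every block in the batch and keep a
/// single entry per account, so that each account is inserted into the state
/// trie only once at the end of the batch instead of once per block.
fn merge_account_updates(per_block_updates: Vec<Vec<AccountUpdate>>) -> Vec<AccountUpdate> {
    let mut merged: HashMap<Address, AccountUpdate> = HashMap::new();
    for updates in per_block_updates {
        for update in updates {
            // Later blocks overwrite earlier entries for the same account; a
            // real implementation would merge fields instead of replacing them.
            merged.insert(update.address, update);
        }
    }
    merged.into_values().collect()
}
```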

Closes None

@MarcosNicolau MarcosNicolau requested a review from a team as a code owner March 7, 2025 12:13

github-actions bot commented Mar 7, 2025

Lines of code report

Total lines added: 477
Total lines removed: 5
Total lines changed: 482

Detailed view
+----------------------------------------------+-------+------+
| File                                         | Lines | Diff |
+----------------------------------------------+-------+------+
| ethrex/cmd/ethrex/cli.rs                     | 199   | +3   |
+----------------------------------------------+-------+------+
| ethrex/cmd/ethrex/subcommands/import.rs      | 61    | -4   |
+----------------------------------------------+-------+------+
| ethrex/crates/blockchain/blockchain.rs       | 540   | +154 |
+----------------------------------------------+-------+------+
| ethrex/crates/common/trie/trie.rs            | 813   | +5   |
+----------------------------------------------+-------+------+
| ethrex/crates/networking/p2p/peer_handler.rs | 515   | -1   |
+----------------------------------------------+-------+------+
| ethrex/crates/networking/p2p/sync.rs         | 563   | +82  |
+----------------------------------------------+-------+------+
| ethrex/crates/storage/api.rs                 | 192   | +7   |
+----------------------------------------------+-------+------+
| ethrex/crates/storage/store.rs               | 1159  | +48  |
+----------------------------------------------+-------+------+
| ethrex/crates/storage/store_db/in_memory.rs  | 506   | +38  |
+----------------------------------------------+-------+------+
| ethrex/crates/storage/store_db/libmdbx.rs    | 1145  | +55  |
+----------------------------------------------+-------+------+
| ethrex/crates/storage/store_db/redb.rs       | 936   | +74  |
+----------------------------------------------+-------+------+
| ethrex/crates/vm/backends/mod.rs             | 318   | +11  |
+----------------------------------------------+-------+------+

// todo only execute transactions
// batch account updates to merge the repeated accounts
self.storage
    .apply_account_updates_to_trie(&account_updates, &mut state_trie)?;
Collaborator
Ahh, I understand now. I was hoping we could just call get_state_transitions (https://github.com/lambdaclass/lambda_ethereum_rust/blob/0acc5e28b861f88c30cebb6cbfe0230970df25ed/crates/vm/backends/revm.rs#L96) only once.

We would need to add an execute_blocks inside the vm.
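
For illustration, the idea is roughly an API like the one below; the signature and types are guesses sketching the suggestion, not the existing revm backend code:

```rust
// Hypothetical `execute_blocks`: execute the whole batch against one EvmState
// and call get_state_transitions a single time at the end, instead of once per
// block. All names here are placeholders.
fn execute_blocks(
    state: &mut EvmState,
    blocks: &[Block],
) -> Result<Vec<AccountUpdate>, EvmError> {
    for block in blocks {
        execute_block_without_clearing_state(block, state)?;
    }
    // Single pass over the accumulated cached state.
    Ok(get_state_transitions(state))
}
```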

Collaborator
we can discuss this later.

Contributor Author
I've tried this approach but it won't work without making larger modifications to the vm backend.

@MarcosNicolau MarcosNicolau marked this pull request as draft March 11, 2025 13:23
Comment on lines +443 to +444
*current_head = last_block.hash();
*search_head = last_block.hash();
Contributor Author
Note: mutating the heads is a temporary fix so that we don't have to refactor the syncing implementation too much. When refactoring this module and separating snap sync from full sync, we should be able to make this more elegant.

}

/// Executes a block from a given vm instance and does not clear its state
fn execute_block_from_state(
Collaborator
Do we need this? Can't we just call execute_block (after first moving out the find_parent_header part)?

Contributor Author

@MarcosNicolau MarcosNicolau Mar 18, 2025
Yes, because this one takes a vm as a parameter and does not clear its state after block execution, since it calls vm.execute_block_without_clearing_state instead of vm.execute_block().
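
Roughly the difference (the signatures here are assumptions based on this thread, not the exact PR code):

```rust
// Unlike execute_block, which clears the VM state after running a block, this
// variant reuses a caller-provided VM and leaves its cached state intact so the
// next block in the batch can build on top of it.
fn execute_block_from_state(
    vm: &mut Evm,
    block: &Block,
) -> Result<BlockExecutionResult, ChainError> {
    vm.execute_block_without_clearing_state(block)
}
```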

@MarcosNicolau MarcosNicolau changed the title feat(l1): process blocks in batches when syncing and importing feat(l1): process blocks in batches when syncing and importing + store blocks in a single tx Mar 18, 2025
@MarcosNicolau MarcosNicolau changed the title feat(l1): process blocks in batches when syncing and importing + store blocks in a single tx feat(l1): process blocks in batches when syncing and importing Mar 19, 2025
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Optimize account updates in add_blocks_in_batch
4 participants