Conversation

@maxkozlovsky
Contributor

Since submit_eth_call_to_pool() is called from a Rust thread, blocking on fiber promises is not appropriate and can lead to performance issues. Move the recover_authorities() logic inside the lambda executed in the fiber pool.

Since an eth_call txn has a single authority, the parallelization logic inside recover_authorities() only introduces overhead without benefit. Replace the parallelization with a single call to recover_authority().
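
A minimal sketch of the shape of this change, assuming hypothetical names (`eth_call_pool`, `Transaction`, `Address`, `Callback`, `execute_eth_call`) rather than the actual monad API:

```cpp
#include <optional>
#include <utility>

// Hypothetical sketch, not the committed implementation. Before this
// change, recover_authorities() ran on the calling Rust thread and
// blocked on fiber promises; now the recovery work is captured into the
// lambda and runs inside the fiber pool.
void submit_eth_call_to_pool(Transaction txn, Callback on_done)
{
    eth_call_pool.submit([txn = std::move(txn), on_done = std::move(on_done)] {
        // An eth_call txn carries at most one authority, so a single
        // direct recover_authority() call replaces the parallel
        // recover_authorities() machinery.
        std::optional<Address> authority;
        if (!txn.authorization_list.empty()) {
            authority = recover_authority(txn.authorization_list.front());
        }
        on_done(execute_eth_call(txn, authority));
    });
}
```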

Copilot AI review requested due to automatic review settings October 24, 2025 21:29

Copilot AI left a comment

Pull Request Overview

This PR refactors the eth_call execution flow to improve performance by moving authority recovery logic from the synchronous path into the asynchronous fiber pool execution. The key motivation is to avoid blocking fiber promises from Rust threads, which can cause performance degradation.

Key changes:

  • Authority recovery logic moved inside the lambda submitted to the fiber pool
  • Replaced the parallel recover_authorities() with a direct recover_authority() call, since eth_call transactions have a single authority
  • Removed unused execute_block.hpp include

Chen-Yifan previously approved these changes Oct 24, 2025
zander-xyz previously approved these changes Nov 18, 2025
Since submit_eth_call_to_pool() is called from a Rust thread, blocking on
fiber promises is not appropriate and can lead to performance issues.
Move the recover_authorities() logic inside the lambda executed in the fiber pool.

Since an eth_call txn has a single authority, the parallelization logic inside
recover_authorities() only introduces overhead without benefit. Replace the
parallelization with a single call to recover_authority().

Modify recover_senders_and_authorities() to recover senders and
authorities directly using recover_sender() and recover_authority()
instead of using the pool-based monad::recover_senders() and
monad::recover_authorities().

This avoids a potential deadlock if all fibers in the pool are occupied,
since the function is called from within a fiber pool lambda context.
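
Roughly, the deadlock-avoidance change amounts to the following sketch; the `Transaction`/`Address` types and the `authorization_list` field are illustrative assumptions, not the committed code:

```cpp
#include <vector>

// Hypothetical sketch. recover_senders_and_authorities() already runs
// inside a fiber-pool lambda, so dispatching recovery back to the same
// pool via monad::recover_senders()/monad::recover_authorities() can
// deadlock when every fiber in the pool is occupied. Recovering inline
// avoids the pool round-trip entirely.
void recover_senders_and_authorities(std::vector<Transaction> &txns)
{
    for (auto &txn : txns) {
        txn.sender = recover_sender(txn); // direct recovery, no pool dispatch
        for (auto &auth : txn.authorization_list) {
            auth.authority = recover_authority(auth);
        }
    }
}
```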

Refactor fiber pools to enable thread sharing between fiber groups.

Introduce FiberThreadPool and FiberGroup abstractions that allow multiple
fiber groups to share OS threads while maintaining separate task queues and
fiber limits.

Architecture:
  PriorityPool = FiberThreadPool + FiberGroup (isolated, dedicated threads)
  Shared setup = FiberThreadPool + multiple FiberGroups (shared threads)

Thread 0 runs a bootstrap fiber that handles fiber creation requests to
ensure fibers are created on threads with proper scheduler configuration.

Use trace_tx_exec_pool to parallelize trace transaction execution and
avoid a deadlock caused by reusing the same pool.
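
As a rough illustration of the layering described above; the constructor and submit() signatures here are assumptions, not the committed FiberThreadPool/FiberGroup interface:

```cpp
// Hypothetical sketch of the shared-thread arrangement.
// One FiberThreadPool owns the OS threads; each FiberGroup keeps its own
// task queue and fiber limit while scheduling onto the shared threads.
void configure_execution_pools()
{
    // Isolated setup (the old PriorityPool shape): one group with its own
    // dedicated threads.
    static FiberThreadPool dedicated_threads{/*num_threads=*/4};
    static FiberGroup priority_pool{dedicated_threads, /*max_fibers=*/64};

    // Shared setup: several groups multiplex over the same OS threads.
    static FiberThreadPool shared_threads{/*num_threads=*/8};
    static FiberGroup eth_call_group{shared_threads, /*max_fibers=*/128};
    static FiberGroup trace_tx_exec_pool{shared_threads, /*max_fibers=*/32};

    // Trace transaction execution runs on its own group so it never waits
    // on fibers held by eth_call work in the same group (the deadlock
    // noted in the commit message).
    eth_call_group.submit([] { /* run an eth_call inside a fiber */ });
    trace_tx_exec_pool.submit([] { /* execute a traced transaction */ });
}
```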
jhunsaker merged commit e87c786 into main Nov 18, 2025
8 checks passed
jhunsaker deleted the max/moverecover branch November 18, 2025 17:22