Note
The shipping/deploying process and the Prover
itself are under development.
The prover consists of two main components: the `prover_client`, which handles the incoming proving data sent by the L2 proposer's `prover_server` component, and the zkVM, a RISC-V emulator that executes the code specified in `crates/l2/prover/zkvm/interface/guest/src`.

One level above the zkVM (or guest) code sits a directory called `interface`, which indicates that the zkVM is accessed through the `interface` crate.

In summary, the `prover_client` manages the inputs from the `prover_server` and then "calls" the zkVM to perform the proving process and generate the `groth16` ZK proof.
The Prover Server monitors requests for new jobs from the Prover Client, which are sent whenever the prover is available. Upon receiving a new job, the Prover generates the proof, after which the Prover Client sends the proof back to the Prover Server.
```mermaid
sequenceDiagram
    participant zkVM
    participant ProverClient
    participant ProverServer
    ProverClient->>+ProverServer: ProofData::Request
    ProverServer-->>-ProverClient: ProofData::Response(block_number, ProverInputs)
    ProverClient->>+zkVM: Prove(ProverInputs)
    zkVM-->>-ProverClient: Creates zkProof
    ProverClient->>+ProverServer: ProofData::Submit(block_number, zkProof)
    ProverServer-->>-ProverClient: ProofData::SubmitAck(block_number)
```
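The `ProofData` messages in the diagram carry the whole protocol. As a rough Rust illustration of their shape (placeholder types, variant names taken from the diagram; the actual definitions in the ethrex codebase differ in detail):

```rust
// Sketch only: variant names mirror the sequence diagram above, not the
// actual ethrex types; `ProverInputs` and `ZkProof` are placeholders.
#![allow(dead_code)]

struct ProverInputs; // data needed to re-execute the block inside the zkVM
struct ZkProof;      // proof produced by the chosen zkVM backend

enum ProofData {
    Request,                                              // client asks for a job
    Response { block_number: u64, inputs: ProverInputs }, // server hands one out
    Submit { block_number: u64, proof: ZkProof },         // client returns the proof
    SubmitAck { block_number: u64 },                      // server acknowledges receipt
}

fn main() {}
```

The client drives the loop: request a job, prove it with the zkVM, submit the proof, and wait for the acknowledgment.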
Dependencies:

- RISC0

  ```sh
  curl -L https://risczero.com/install | bash
  rzup install cargo-risczero 1.2.0
  ```

- SP1

  ```sh
  curl -L https://sp1up.succinct.xyz | bash
  sp1up --version 4.1.0
  ```

- Pico

  ```sh
  cargo +nightly install --git https://github.com/brevis-network/pico pico-cli
  rustup install nightly-2024-11-27
  rustup component add rust-src --toolchain nightly-2024-11-27
  ```

- SOLC (the Solidity compiler, required by the contract deployment steps below)
After installing the toolchains, a quick test can be performed to check that everything is installed correctly.

To quickly test the zkVM execution, run:

```sh
cd crates/l2/prover
```

Then run any of the targets:

```sh
make perf-pico
make perf-risc0
make perf-sp1
```
To run the blockchain (`proposer`) and the prover in conjunction, start the `prover_client` with the following command:

```sh
make init-prover T=(pico,risc0,sp1,exec) G=true
```

Select the `exec` backend whenever generating proofs is not desired, such as in a CI environment.
Note
The following setup is used for development purposes.
1. `cd crates/l2`
2. `make rm-db-l2 && make down`
   - Removes any old database, if present, stored on your computer. The absolute path of libmdbx is defined by `data_dir`.
3. `cp configs/sequencer_config_example.toml configs/sequencer_config.toml` → check if you want to change any config.
4. `cp configs/prover_client_config_example.toml configs/prover_client_config.toml` → check if you want to change any config.
5. `make init`
   - Make sure you have the `solc` compiler installed on your system.
   - Inits the L1 in a docker container on port `8545`.
   - Deploys the needed contracts for the L2 on the L1.
   - Starts the L2 locally on port `1729`.
6. In a new terminal → `make init-prover T=(sp1,risc0,pico,exec)`.
After this initialization, the prover should be running in `dev_mode` → no real proofs are generated.
Steps for Ubuntu 22.04 with an Nvidia A4000:

1. Install `docker` → using the Ubuntu apt repository.
   - Add the user you are using to the `docker` group → command: `sudo usermod -aG docker $USER` (needs a reboot; do it after the CUDA installation). Run `id -nG` after the reboot to check that the user is in the group.
2. Install Rust.
3. Install RISC0.
4. Install CUDA for Ubuntu.
   - Install the CUDA Toolkit Installer first, then the `nvidia-open` drivers.
5. Reboot.
6. Run the following commands:

```sh
sudo apt-get install libssl-dev pkg-config libclang-dev clang
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```
To quickly test the zkVM proving process using a GPU, run:

```sh
cd crates/l2/prover
```

Then run any of the targets:

```sh
make perf-pico-gpu
make perf-risc0-gpu
make perf-sp1-gpu
```
Two servers are required: one for the `prover` and another for the `proposer`. If you run both components on the same machine, the `prover` may consume all available resources, leading to potential stuttering or performance issues for the `proposer`/`node`.
- The number 1 denotes a machine with a GPU for the `prover_client`.
- The number 2 denotes a machine for the `sequencer`/L2 node itself.
Machine 1 (`prover_client`/zkVM) → the prover with the GPU; make sure to have all the required dependencies described at the beginning of the GPU Mode section.

1. `cd ethrex/crates/l2`
2. `cp configs/prover_client_config_example.toml configs/prover_client_config.toml` and change the `prover_server_endpoint` to machine 2's IP, making sure the port matches the one defined on machine 2.

The important variables are:

```toml
[prover_client]
prover_server_endpoint = "<ip-address>:3900"
```
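As a sketch of how this table could map onto a typed config (assuming a serde-based loader with the `serde` and `toml` crates; the actual field set and loading code in ethrex may differ):

```rust
// Sketch, not the actual ethrex loader. Assumes the `serde` crate (with
// the "derive" feature) and the `toml` crate as dependencies; the field
// mirrors the TOML table above.
use serde::Deserialize;

#[derive(Deserialize)]
struct ProverClientConfig {
    prover_server_endpoint: String, // e.g. "192.168.0.100:3900"
}

#[derive(Deserialize)]
struct Config {
    prover_client: ProverClientConfig,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("configs/prover_client_config.toml")?;
    let cfg: Config = toml::from_str(&raw)?;
    println!("prover_server_endpoint = {}", cfg.prover_client.prover_server_endpoint);
    Ok(())
}
```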
Finally, to start the `prover_client`/zkVM, run:

```sh
make init-prover T=(sp1,risc0,pico,exec) G=true
```
Machine 2 (`prover_server`/`proposer`) → this server just needs Rust installed.

1. `cd ethrex/crates/l2`
2. `cp configs/sequencer_config_example.toml configs/sequencer_config.toml` and change the addresses and the following fields:
   - `[prover_server]` → `listen_ip = "0.0.0.0"` → used to handle TCP communication with other servers from any network interface.
     - The `COMMITTER` and `PROVER_SERVER_VERIFIER` must be different accounts; the `DEPLOYER_ADDRESS` as well as the `L1_WATCHER` may be the same account used by the `COMMITTER`.
   - `[deployer]` → `salt_is_zero = false` → set to `false` to randomize the salt.
   - `sp1_deploy_verifier = true` overwrites `sp1_contract_verifier`. Check if the contract is deployed on your preferred network, or set it to `true` to deploy it.
   - `risc0_contract_verifier` → check if the contract is present on your preferred network.
   - `sp1_contract_verifier` → it can be deployed; check if the contract is present on your preferred network.
   - `pico_contract_verifier` → it can be deployed; check if the contract is present on your preferred network.
   - Set the `[eth]` `rpc_url` to any L1 endpoint.
Note
Make sure the accounts are funded; if you want to perform a quick test, 0.2 ether on each account should be enough.
Finally, to start the `proposer`/L2 node, run:

```sh
make rm-db-l2 && make down
make deploy-l1 && make init-l2
```
Configuration is done through environment variables. The easiest way to configure the ProverClient is by creating a `prover_client_config.toml` file and setting the variables there. Then, at startup, it reads the file and sets the variables.

The following environment variables are available to configure the prover; consider looking at the provided `prover_client_config_example.toml`.
The following environment variables are used by the ProverClient:

- `CONFIGS_PATH`: The path where the `PROVER_CLIENT_CONFIG_FILE` is located.
- `PROVER_CLIENT_CONFIG_FILE`: The `.toml` file that contains the config for the `prover_client`.
- `PROVER_ENV_FILE`: The name of the `.env` file that has the parsed `.toml` configuration.
- `PROVER_CLIENT_PROVER_SERVER_ENDPOINT`: The Prover Server's endpoint, used to connect the Client to the Server.
The following environment variables are used by the ProverServer:

- `PROVER_SERVER_LISTEN_IP`: IP used to start the Server.
- `PROVER_SERVER_LISTEN_PORT`: Port used to start the Server.
- `PROVER_SERVER_VERIFIER_ADDRESS`: The address of the account that sends the zkProofs on-chain and interacts with the `OnChainProposer` `verify()` function.
- `PROVER_SERVER_VERIFIER_PRIVATE_KEY`: The private key of the account that sends the zkProofs on-chain and interacts with the `OnChainProposer` `verify()` function.
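As a minimal sketch of how these variables might be read at startup (plain `std::env` lookups for illustration; the defaults shown are assumptions, and the real parsing lives in the ethrex configuration code):

```rust
// Sketch only: reads the variables described above with std::env.
use std::env;

fn main() {
    // Client side: where to reach the prover_server.
    let endpoint = env::var("PROVER_CLIENT_PROVER_SERVER_ENDPOINT")
        .unwrap_or_else(|_| "localhost:3900".to_string()); // port mirrors the TOML example above

    // Server side: interface and port to listen on for prover_client connections.
    let listen_ip = env::var("PROVER_SERVER_LISTEN_IP").unwrap_or_else(|_| "0.0.0.0".to_string());
    let listen_port = env::var("PROVER_SERVER_LISTEN_PORT").unwrap_or_else(|_| "3900".to_string());

    println!("client → {endpoint}; server listens on {listen_ip}:{listen_port}");
}
```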
Note
The `PROVER_SERVER_VERIFIER` account must differ from the `COMMITTER_L1` account.