Compute and provide proof at client #24

Open · wants to merge 6 commits into master
13 changes: 11 additions & 2 deletions Cargo.lock

Some generated files are not rendered by default.

6 changes: 5 additions & 1 deletion Cargo.toml
@@ -63,13 +63,17 @@ reqwest = "0.12.12"
async-trait = "0.1.85"
linked_list_allocator = "0.10.5"
bytes = "1.9.0"
num = "0.4"

# General
sha2 = { version = "0.10.8", default-features = false }
c-kzg = { version = "2.0.0", default-features = false }
anyhow = { version = "1.0.95", default-features = false }
thiserror = { version = "2.0.9", default-features = false }
rust-kzg-bn254 = { version = "0.2.1", default-features = false }
rust-kzg-bn254 = { git = "https://github.com/Layr-Labs/rust-kzg-bn254", rev = "4ad14ea4ce9473e13ed6437140fcbbff3a8ccce1", default-features = false }
Collaborator:

We should talk with anup about publishing new versions of the rust-kzg-bn254 lib so we don't need to rely on git dependencies like this (they tend to cause more dependency problems in my experience)

Collaborator (Author):

Need to have all important issues resolved.


ark-bn254 = "0.5.0"
ark-ff = { version = "0.5.0", features = ["parallel"] }

# Tracing
tracing-loki = "0.2.5"
5 changes: 4 additions & 1 deletion README.md
@@ -2,6 +2,9 @@

Hokulea is a library that provides the AltDA providers for a derivation pipeline built with [kona](https://github.com/anton-rs/kona), enabling it to understand EigenDA blobs, following the [kona book](https://anton-rs.github.io/kona/sdk/pipeline/providers.html#implementing-a-custom-data-availability-provider) recommendation (also see this [comment](https://github.com/anton-rs/kona/pull/862#issuecomment-2515038089)).

### Download SRS points
The Hokulea host currently computes a challenge proof that validates the correctness of the EigenDA blob against the provided KZG commitment. This computation requires the host to have access to sufficient KZG SRS points.

### Running against devnet

First start the devnet:
@@ -17,4 +20,4 @@ cd bin/client
just run-client-native-against-devnet
```

![](./hokulea.jpeg)
![](./hokulea.jpeg)
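
As context for the "Download SRS points" section above, here is a minimal sketch of the SRS-backed KZG setup the client performs. The file paths and size parameters are taken from the `KZG::setup` calls added later in this PR; the wrapper function is illustrative, and the values should be treated as this PR's defaults rather than a stable API.

```rust
use rust_kzg_bn254::kzg::KZG;

// A minimal sketch, assuming the SRS files referenced by this PR
// (resources/g1.32mb.point and resources/g2.point.powerOf2) exist
// locally. The size parameters mirror the KZG::setup calls in bin/client.
fn setup_kzg() -> Result<KZG, String> {
    KZG::setup(
        "resources/g1.32mb.point",
        "",
        "resources/g2.point.powerOf2",
        268435456,
        1024,
    )
    .map_err(|e| format!("cannot setup kzg: {e}"))
}
```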
10 changes: 10 additions & 0 deletions bin/client/Cargo.toml
@@ -5,6 +5,8 @@ edition = "2021"

[dependencies]
alloy-consensus.workspace = true
alloy-primitives.workspace = true
alloy-rlp.workspace = true

kona-client.workspace = true
kona-preimage.workspace = true
@@ -13,5 +15,13 @@ kona-driver.workspace = true
kona-executor.workspace = true

hokulea-proof.workspace = true
hokulea-eigenda.workspace = true

tracing.workspace = true
async-trait.workspace = true
rust-kzg-bn254.workspace = true
num.workspace = true

ark-bn254.workspace = true

ark-ff.workspace = true
136 changes: 136 additions & 0 deletions bin/client/src/cached_eigenda_provider.rs
@@ -0,0 +1,136 @@
use alloy_primitives::Bytes;
use alloy_rlp::Decodable;
use async_trait::async_trait;
use kona_preimage::errors::PreimageOracleError;
use kona_preimage::CommsClient;

use hokulea_eigenda::BlobInfo;
use hokulea_eigenda::EigenDABlobProvider;
use hokulea_proof::eigenda_provider::OracleEigenDAProvider;
use kona_proof::errors::OracleProviderError;

use crate::witness::EigenDABlobWitness;

use num::BigUint;
use rust_kzg_bn254::blob::Blob;
use rust_kzg_bn254::kzg::KZG;

/// CachedOracleEigenDAProvider wraps OracleEigenDAProvider. It is intended to capture all
/// EigenDA blobs fetched during the derivation pipeline so that it can compute and cache
/// the KZG witnesses, which can then be verified inside a zkVM by checking the point
/// opening at the random Fiat-Shamir evaluation index.
#[derive(Debug, Clone)]
pub struct CachedOracleEigenDAProvider<T: CommsClient> {
/// The preimage oracle client.
oracle: OracleEigenDAProvider<T>,
/// kzg proof witness
witness: EigenDABlobWitness,
Comment on lines +26 to +27
Collaborator:
Add more details here explaining what this witness is doing, how it's used, etc.
Better to take 5 minutes now to save you 30 minutes in a few weeks once you've forgotten about this architecture.

}

impl<T: CommsClient> CachedOracleEigenDAProvider<T> {
/// Constructs a new oracle-backed EigenDA provider.
pub fn new(oracle: OracleEigenDAProvider<T>, witness: EigenDABlobWitness) -> Self {
Self { oracle, witness }
}
}

#[async_trait]
impl<T: CommsClient + Sync + Send> EigenDABlobProvider for CachedOracleEigenDAProvider<T> {
type Error = OracleProviderError;

async fn get_blob(&mut self, cert: &Bytes) -> Result<Bytes, Self::Error> {
let blob = self.oracle.get_blob(cert).await?;
let cert_blob_info = match BlobInfo::decode(&mut &cert[4..]) {
Ok(c) => c,
Err(_) => {
return Err(OracleProviderError::Preimage(PreimageOracleError::Other(
"does not contain header".into(),
)))
}
};

let output = self.compute_witness(&blob)?;
// make sure the locally computed commitment matches the commitment returned by the provider
if output[..32] != cert_blob_info.blob_header.commitment.x[..]
|| output[32..64] != cert_blob_info.blob_header.commitment.y[..]
{
return Err(OracleProviderError::Preimage(PreimageOracleError::Other(
"proxy commitment is different from computed commitment proxy".into(),
)));
};

let commitment = Bytes::copy_from_slice(&output[..64]);

let kzg_proof = Bytes::copy_from_slice(&output[64..128]);

// push data into witness
self.witness.write(blob.clone(), commitment, kzg_proof);

Ok(blob)
}
}

// Adapted from the nitro code: https://github.com/Layr-Labs/nitro/blob/14f09745b74321f91d1f702c3e7bb5eb7d0e49ce/arbitrator/prover/src/kzgbn254.rs#L141
// Could be refactored in the future so that both host and client can compute the proof.
impl<T: CommsClient + Sync + Send> CachedOracleEigenDAProvider<T> {
/// Returns the serialized commitment and proof as a byte array so that the host can reuse the code.
fn compute_witness(&mut self, blob: &[u8]) -> Result<Vec<u8>, OracleProviderError> {
// TODO: remove the need for G2 access.
// Add command-line options to specify the g1 and g2 paths.
// In the future, it might make sense to let the proxy return this
// value instead of computing it locally.
let mut kzg = KZG::setup(
"resources/g1.32mb.point",
"",
"resources/g2.point.powerOf2",
268435456,
1024,
)
.map_err(|_| {
OracleProviderError::Preimage(PreimageOracleError::Other(
"does not contain header".into(),
))
})?;

let input = Blob::new(blob);
let input_poly = input.to_polynomial_eval_form();

kzg.data_setup_custom(1, input.len().try_into().unwrap())
.unwrap();

let mut commitment_and_proof: Vec<u8> = Vec::new();

let commitment = kzg.commit_eval_form(&input_poly).map_err(|_| {
OracleProviderError::Preimage(PreimageOracleError::Other("kzg.commit_eval_form".into()))
})?;

// TODO: the library should return the bytes, or provide a conversion helper,
// for both the proof and the commitment.
let commitment_x_bigint: BigUint = commitment.x.into();
let commitment_y_bigint: BigUint = commitment.y.into();

self.append_left_padded_biguint_be(&mut commitment_and_proof, &commitment_x_bigint);
self.append_left_padded_biguint_be(&mut commitment_and_proof, &commitment_y_bigint);

let proof = kzg.compute_blob_proof(&input, &commitment).map_err(|_| {
OracleProviderError::Preimage(PreimageOracleError::Other(
"kzg.compute_blob_kzg_proof {}".into(),
))
})?;
let proof_x_bigint: BigUint = proof.x.into();
let proof_y_bigint: BigUint = proof.y.into();

self.append_left_padded_biguint_be(&mut commitment_and_proof, &proof_x_bigint);
self.append_left_padded_biguint_be(&mut commitment_and_proof, &proof_y_bigint);

Ok(commitment_and_proof)
}

/// Appends `biguint` to `vec` as exactly 32 big-endian bytes, left-padded with zeros.
pub fn append_left_padded_biguint_be(&self, vec: &mut Vec<u8>, biguint: &BigUint) {
let bytes = biguint.to_bytes_be();
let padding = 32 - bytes.len();
vec.extend(std::iter::repeat(0).take(padding));
vec.extend_from_slice(&bytes);
}
}
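
For reference, a hedged sketch (not part of this PR) of how the 128-byte `compute_witness` output maps back onto curve points, mirroring the `Fq::from_be_bytes_mod_order` parsing used in `witness.rs` below:

```rust
use ark_bn254::{Fq, G1Affine};
use ark_ff::PrimeField;

// Layout of the compute_witness output: bytes 0..64 hold the commitment
// (x, y) and bytes 64..128 hold the proof (x, y); every coordinate is
// left-padded to 32 big-endian bytes.
fn decode_commitment_and_proof(out: &[u8; 128]) -> (G1Affine, G1Affine) {
    let point = |b: &[u8]| {
        let x = Fq::from_be_bytes_mod_order(&b[..32]);
        let y = Fq::from_be_bytes_mod_order(&b[32..64]);
        G1Affine::new(x, y)
    };
    (point(&out[..64]), point(&out[64..128]))
}
```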
3 changes: 3 additions & 0 deletions bin/client/src/lib.rs
Original file line number Diff line number Diff line change
@@ -22,6 +22,9 @@ use tracing::{error, info};

use hokulea_proof::eigenda_provider::OracleEigenDAProvider;

pub mod cached_eigenda_provider;
pub mod witness;

#[inline]
pub async fn run<P, H>(oracle_client: P, hint_client: H) -> Result<(), FaultProofProgramError>
where
81 changes: 81 additions & 0 deletions bin/client/src/witness.rs
@@ -0,0 +1,81 @@
use alloc::vec::Vec;
use alloy_primitives::Bytes;
use ark_bn254::{Fq, G1Affine};
use ark_ff::PrimeField;
use rust_kzg_bn254::blob::Blob;
use rust_kzg_bn254::kzg::KZG;
use tracing::info;

#[derive(Debug, Clone, Default)]
pub struct EigenDABlobWitness {
pub eigenda_blobs: Vec<Bytes>,
pub commitments: Vec<Bytes>,
pub proofs: Vec<Bytes>,
}

impl EigenDABlobWitness {
pub fn new() -> Self {
EigenDABlobWitness {
eigenda_blobs: Vec::new(),
commitments: Vec::new(),
proofs: Vec::new(),
}
}

pub fn write(&mut self, blob: Bytes, commitment: Bytes, proof: Bytes) {
self.eigenda_blobs.push(blob);
self.commitments.push(commitment);
self.proofs.push(proof);
info!("added a blob");
}

pub fn verify(&self) -> bool {
// TODO: we should not need this many g1 and g2 points for KZG verification;
// improve the kzg library instead
let kzg = match KZG::setup(
"resources/g1.32mb.point",
"",
"resources/g2.point.powerOf2",
268435456,
1024,
) {
Ok(k) => k,
Err(e) => panic!("cannot setup kzg {}", e),
};

info!("lib_blobs len {:?}", self.eigenda_blobs.len());

// transform to rust-kzg-bn254 input types
// TODO: the library should do the parsing and return a result
let lib_blobs: Vec<Blob> = self.eigenda_blobs.iter().map(|b| Blob::new(b)).collect();
let lib_commitments: Vec<G1Affine> = self
.commitments
.iter()
.map(|c| {
let x = Fq::from_be_bytes_mod_order(&c[..32]);
let y = Fq::from_be_bytes_mod_order(&c[32..64]);
G1Affine::new(x, y)
})
.collect();
let lib_proofs: Vec<G1Affine> = self
.proofs
.iter()
.map(|p| {
let x = Fq::from_be_bytes_mod_order(&p[..32]);
let y = Fq::from_be_bytes_mod_order(&p[32..64]);

G1Affine::new(x, y)
})
.collect();
let pairing_result = kzg
.verify_blob_kzg_proof_batch(&lib_blobs, &lib_commitments, &lib_proofs)
.unwrap();

//info!("lib_blobs {:?}", lib_blobs);
//info!("lib_commitments {:?}", lib_commitments);
//info!("lib_proofs {:?}", lib_proofs);
//info!("pairing_result {:?}", pairing_result);

pairing_result
}
}
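
A hedged usage sketch of the witness type (the function and variable names here are illustrative, not part of this PR; the commitment and proof must follow the 64-byte x ‖ y big-endian layout produced by `compute_witness`):

```rust
use alloy_primitives::Bytes;
use crate::witness::EigenDABlobWitness;

// Illustrative only: accumulate (blob, commitment, proof) triples during
// derivation, then batch-verify all KZG openings at once.
fn verify_collected(triples: Vec<(Bytes, Bytes, Bytes)>) -> bool {
    let mut witness = EigenDABlobWitness::new();
    for (blob, commitment, proof) in triples {
        witness.write(blob, commitment, proof);
    }
    // Runs verify_blob_kzg_proof_batch over everything written so far.
    witness.verify()
}
```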
21 changes: 10 additions & 11 deletions bin/host/src/eigenda_fetcher/mod.rs
@@ -156,15 +156,15 @@
let cert_blob_info = BlobInfo::decode(&mut &item_slice[4..]).unwrap();

// Proxy should return a cert whose data_length is measured in symbols (i.e. 32 bytes each)
let blob_length = cert_blob_info.blob_header.data_length as u64;
warn!("blob length: {:?}", blob_length);
let data_length = cert_blob_info.blob_header.data_length as u64;
warn!("data length: {:?}", data_length);

let eigenda_blob = EigenDABlobData::encode(rollup_data.as_ref());

if eigenda_blob.blob.len() != blob_length as usize * BYTES_PER_FIELD_ELEMENT {
if eigenda_blob.blob.len() != data_length as usize * BYTES_PER_FIELD_ELEMENT {
return Err(
anyhow!("data size from cert does not equal the reconstructed data size: codec_rollup_data_len {} blob size {}",
eigenda_blob.blob.len(), blob_length as usize * BYTES_PER_FIELD_ELEMENT));
eigenda_blob.blob.len(), data_length as usize * BYTES_PER_FIELD_ELEMENT));
}

// Write all the field elements to the key-value store.
@@ -176,9 +176,9 @@
blob_key[..32].copy_from_slice(cert_blob_info.blob_header.commitment.x.as_ref());
blob_key[32..64].copy_from_slice(cert_blob_info.blob_header.commitment.y.as_ref());

trace!("cert_blob_info blob_length {:?}", blob_length);
trace!("cert_blob_info data_length {:?}", data_length);

for i in 0..blob_length {
for i in 0..data_length {
blob_key[88..].copy_from_slice(i.to_be_bytes().as_ref());
let blob_key_hash = keccak256(blob_key.as_ref());

@@ -192,20 +192,19 @@
)?;
}

// TODO proof is at the random point, but we need to figure out where to generate
//
// TODO currently the proof is only computed on the client side if cached_eigenda_provider
// is used. We can add this back if the host needs to get the proof.
// Write the KZG Proof as the last element, needed for ZK
//blob_key[88..].copy_from_slice((blob_length).to_be_bytes().as_ref());
//blob_key[88..].copy_from_slice((data_length).to_be_bytes().as_ref());
//let blob_key_hash = keccak256(blob_key.as_ref());

//kv_write_lock.set(
// PreimageKey::new(*blob_key_hash, PreimageKeyType::Keccak256).into(),
// blob_key.into(),
//)?;
// proof to be done
//kv_write_lock.set(
// PreimageKey::new(*blob_key_hash, PreimageKeyType::GlobalGeneric).into(),
// [1, 2, 3].to_vec(),
// output[64..].to_vec(),
//)?;
} else {
panic!("Invalid hint type: {hint_type}. FetcherWithEigenDASupport.prefetch only supports EigenDACommitment hints.");
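
For orientation, a hedged sketch of the per-field-element preimage key visible in the hunk above. Only bytes 0..64 (commitment x ‖ y) and 88..96 (big-endian element index) are set within this excerpt; whatever fills bytes 64..88 lies outside the diff, so the 96-byte size and zero padding here are assumptions.

```rust
use alloy_primitives::keccak256;

// Assumption-laden sketch: bytes 64..88 of the real key are set outside
// this hunk and are left as zeros here.
fn field_element_key(commitment_x: &[u8; 32], commitment_y: &[u8; 32], i: u64) -> [u8; 32] {
    let mut blob_key = [0u8; 96];
    blob_key[..32].copy_from_slice(commitment_x);
    blob_key[32..64].copy_from_slice(commitment_y);
    blob_key[88..].copy_from_slice(&i.to_be_bytes());
    // The key-value store is indexed by the keccak256 of this key.
    *keccak256(blob_key)
}
```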
3 changes: 0 additions & 3 deletions crates/eigenda/src/traits.rs
Expand Up @@ -13,7 +13,4 @@ pub trait EigenDABlobProvider {

/// Fetches a blob.
async fn get_blob(&mut self, cert: &Bytes) -> Result<Bytes, Self::Error>;

/// Fetches an element from a blob.
async fn get_element(&mut self, cert: &Bytes, element: &Bytes) -> Result<Bytes, Self::Error>;
}
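
With `get_element` removed, the trait is down to a single method. A minimal implementation sketch under assumptions: the in-memory map, error type, and names are illustrative and not part of this PR, and the trait is assumed to be declared with `#[async_trait]`.

```rust
use std::collections::HashMap;

use alloy_primitives::Bytes;
use async_trait::async_trait;

// Illustrative stand-in for a real blob source (e.g. the oracle-backed
// provider in hokulea-proof).
struct StaticBlobProvider {
    blobs: HashMap<Bytes, Bytes>, // cert -> blob
}

#[async_trait]
impl EigenDABlobProvider for StaticBlobProvider {
    type Error = anyhow::Error;

    async fn get_blob(&mut self, cert: &Bytes) -> Result<Bytes, Self::Error> {
        self.blobs
            .get(cert)
            .cloned()
            .ok_or_else(|| anyhow::anyhow!("unknown cert"))
    }
}
```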