Compute and provide proof at client #24

Open · wants to merge 6 commits into base: master (showing changes from 4 commits)
15 changes: 13 additions & 2 deletions Cargo.lock


6 changes: 5 additions & 1 deletion Cargo.toml
@@ -63,13 +63,17 @@ reqwest = "0.12.12"
async-trait = "0.1.85"
linked_list_allocator = "0.10.5"
bytes = "1.9.0"
num = "0.4"

# General
sha2 = { version = "0.10.8", default-features = false }
c-kzg = { version = "2.0.0", default-features = false }
anyhow = { version = "1.0.95", default-features = false }
thiserror = { version = "2.0.9", default-features = false }
rust-kzg-bn254 = { version = "0.2.1", default-features = false }
rust-kzg-bn254 = { git = "https://github.com/Layr-Labs/rust-kzg-bn254", rev = "4ad14ea4ce9473e13ed6437140fcbbff3a8ccce1", default-features = false }
Collaborator: We should talk with anup about publishing new versions of the rust-kzg-bn254 lib so we don't need to rely on git dependencies like this (they tend to cause more dependency problems in my experience).

Collaborator (author): Need to have all important issues resolved.
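The reviewer's point about dropping the git pin can be sketched concretely. A hypothetical Cargo.toml entry once a new release is published (the version number below is illustrative, not a real release):

```toml
# Hypothetical: replaces the git-pinned dependency once a new
# rust-kzg-bn254 version is published to crates.io.
rust-kzg-bn254 = { version = "0.3.0", default-features = false }
```

A plain version requirement resolves through the registry, which avoids the lockfile and transitive-resolution problems git dependencies tend to cause.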


ark-bn254 = "0.5.0"
ark-ff = { version = "0.5.0", features = ["parallel"] }

# Tracing
tracing-loki = "0.2.5"
5 changes: 4 additions & 1 deletion README.md
@@ -2,6 +2,9 @@

Hokulea is a library to provide the altda providers for a derivation pipeline built with [kona](https://github.com/anton-rs/kona) to understand eigenDA blobs, following the [kona book](https://anton-rs.github.io/kona/sdk/pipeline/providers.html#implementing-a-custom-data-availability-provider) recommendation (also see this [comment](https://github.com/anton-rs/kona/pull/862#issuecomment-2515038089)).

### Download SRS points
Hokulea host currently computes a challenge proof that validates the correctness of the eigenda blob against the provided kzg commitment. Such computation requires the host to have access to sufficient KZG SRS points.

### Running against devnet

First start the devnet:
@@ -17,4 +20,4 @@ cd bin/client
just run-client-native-against-devnet
```

![](./hokulea.jpeg)
![](./hokulea.jpeg)
11 changes: 11 additions & 0 deletions bin/client/Cargo.toml
@@ -5,6 +5,8 @@ edition = "2021"

[dependencies]
alloy-consensus.workspace = true
alloy-primitives.workspace = true
alloy-rlp.workspace = true

kona-client.workspace = true
kona-preimage.workspace = true
@@ -13,5 +15,14 @@ kona-driver.workspace = true
kona-executor.workspace = true

hokulea-proof.workspace = true
hokulea-eigenda.workspace = true

tracing.workspace = true
async-trait.workspace = true
rust-kzg-bn254.workspace = true
num.workspace = true
anyhow.workspace = true

ark-bn254.workspace = true

ark-ff.workspace = true
136 changes: 136 additions & 0 deletions bin/client/src/cached_eigenda_provider.rs
@@ -0,0 +1,136 @@
use kona_preimage::CommsClient;
use kona_preimage::errors::PreimageOracleError;
use alloy_primitives::Bytes;
use async_trait::async_trait;
use alloy_rlp::Decodable;

use kona_proof::errors::OracleProviderError;
use hokulea_proof::eigenda_provider::OracleEigenDAProvider;
use hokulea_eigenda::EigenDABlobProvider;
use hokulea_eigenda::BlobInfo;

use crate::witness::EigenDABlobWitness;

use rust_kzg_bn254::kzg::KZG;
use rust_kzg_bn254::blob::Blob;
use num::BigUint;


#[derive(Debug, Clone)]
pub struct CachedOracleEigenDAProvider<T: CommsClient> {
/// The preimage oracle client.
oracle: OracleEigenDAProvider<T>,
/// kzg proof witness
witness: EigenDABlobWitness,
Collaborator (on lines +26 to +27): Add more details here explaining what this witness is doing, how it's used, etc. Better to take 5 minutes now to save you 30 minutes in a few weeks once you've forgotten about this architecture.
}

impl<T: CommsClient> CachedOracleEigenDAProvider<T> {
/// Constructs a new oracle-backed EigenDA provider.
pub fn new(oracle: OracleEigenDAProvider<T>, witness: EigenDABlobWitness) -> Self {
Self { oracle, witness }
}
}

#[async_trait]
impl <T: CommsClient + Sync + Send> EigenDABlobProvider for CachedOracleEigenDAProvider<T> {
type Error = OracleProviderError;

async fn get_blob(&mut self, cert: &Bytes) -> Result<Bytes, Self::Error> {
match self.oracle.get_blob(cert).await {
Ok(b) => {
Collaborator: What is `b`? Use a more explicit name.
Collaborator (author): blob

let item_slice = cert.as_ref();
let cert_blob_info = match BlobInfo::decode(&mut &item_slice[4..]) {
Collaborator (suggested change):
-    let item_slice = cert.as_ref();
-    let cert_blob_info = match BlobInfo::decode(&mut &item_slice[4..]) {
+    let cert_blob_info = match BlobInfo::decode(&mut &cert[4..]) {

Collaborator: auto-dereferencing is your friend

Ok(c) => c,
Err(_) => return Err(OracleProviderError::Preimage(
PreimageOracleError::Other("does not contain header".into(),
))),
};

let output = self.compute_and_save_witness(&b)?;
Collaborator: Don't use generic names like "output". Better to use self-documenting names; it makes the code much more easily readable.

// make sure locally computed proof equals to returned proof from the provider
if output[..32] != cert_blob_info.blob_header.commitment.x[..] ||
output[32..64] != cert_blob_info.blob_header.commitment.y[..]{
return Err(OracleProviderError::Preimage(
PreimageOracleError::Other("proxy commitment is different from computed commitment proxy".into())));
};

let commitment = Bytes::copy_from_slice(
&[cert_blob_info.blob_header.commitment.x, cert_blob_info.blob_header.commitment.y]
.concat());

let kzg_proof = Bytes::copy_from_slice(&output[64..128]);
Collaborator: Can't you use &output[..64] for the commitment, given you've already verified it above?
Collaborator (author): yes


// push data into witness
self.witness.write(b.clone().into(), commitment, kzg_proof.into());

Ok(b.into())
Collaborator (suggested change):
-    Ok(b.into())
+    Ok(b)

},
Err(e) => Err(e),
}
Collaborator: Use the ? operator instead.

}
}
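The reviewer's suggestions to use `?` with `map_err` instead of explicit `match` arms can be sketched in isolation. This is a standalone example with a stand-in error type, not the actual `OracleProviderError`:

```rust
// Stand-in error type; the real code wraps errors in OracleProviderError.
#[derive(Debug)]
struct DecodeError(String);

// A fallible inner step, analogous to BlobInfo::decode.
fn decode_inner(bytes: &[u8]) -> Result<u64, String> {
    if bytes.len() < 8 {
        return Err("too short".to_string());
    }
    Ok(u64::from_be_bytes(bytes[..8].try_into().unwrap()))
}

// Instead of `match decode_inner(..) { Ok(v) => v, Err(_) => return Err(..) }`,
// convert the error once with map_err and propagate with `?`.
fn decode(bytes: &[u8]) -> Result<u64, DecodeError> {
    let value = decode_inner(bytes).map_err(DecodeError)?;
    Ok(value)
}

fn main() {
    assert_eq!(decode(&[0, 0, 0, 0, 0, 0, 0, 7]).unwrap(), 7);
    assert!(decode(&[1, 2]).is_err());
    println!("ok");
}
```

The same shape applies to each `match ... { Ok(x) => x, Err(_) => return Err(...) }` block in this file.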


// nitro code https://github.com/Layr-Labs/nitro/blob/14f09745b74321f91d1f702c3e7bb5eb7d0e49ce/arbitrator/prover/src/kzgbn254.rs#L30
Collaborator: This is pointing to a load_directory_with_prefix. Is that really what you wanted to point to?
Collaborator (author): updated

impl<T: CommsClient + Sync + Send> CachedOracleEigenDAProvider<T> {
fn compute_and_save_witness(&mut self, blob: &[u8]) -> Result<Vec<u8>, OracleProviderError> {
Collaborator: You are not saving the witness in this function. Change the name.
Collaborator (author): haha

// TODO remove the need for G2 access
// Add command line to specify where are g1 and g2 path
// In the future, it might make sense to let the proxy to return such
// value, instead of local computation
let mut kzg = match KZG::setup(
"resources/g1.32mb.point",
"",
"resources/g2.point.powerOf2",
Collaborator: Does this work? I thought the client doesn't have access to the fs? What if it's running inside a zkVM?
Collaborator (author): This code only runs natively to fetch all the witnesses you would need. For verification, the kzg library should just provide a helper function.

268435456,
1024,
) {
Ok(k) => k,
Err(_) => return Err(OracleProviderError::Preimage(
PreimageOracleError::Other("does not contain header".into(),
))),
};
Collaborator: Changing these everywhere to use something like

    .map_err(|_| OracleProviderError::Preimage(
        PreimageOracleError::Other("does not contain header".into())
    ))?

will be cleaner.

let input = Blob::new(blob);
let input_poly = input.to_polynomial_eval_form();

kzg.data_setup_custom(1, input.len().try_into().unwrap()).unwrap();

let mut output = vec![0u8; 0];

let commitment = match kzg.commit_eval_form(&input_poly) {
Ok(c) => c,
Err(_) => return Err(OracleProviderError::Preimage(PreimageOracleError::Other(
"kzg.commit_eval_form".into()))),
};

// TODO the library should have returned the bytes, or provide a helper
// for conversion. For both proof and commitment
let commitment_x_bigint: BigUint = commitment.x.into();
let commitment_y_bigint: BigUint = commitment.y.into();

self.append_left_padded_biguint_be(&mut output, &commitment_x_bigint);
self.append_left_padded_biguint_be(&mut output, &commitment_y_bigint);

let proof = match kzg.compute_blob_proof(&input, &commitment) {
Ok(p) => p,
Err(_) => return Err(OracleProviderError::Preimage(PreimageOracleError::Other(
"kzg.compute_blob_proof".into()))),
};
let proof_x_bigint: BigUint = proof.x.into();
let proof_y_bigint: BigUint = proof.y.into();

self.append_left_padded_biguint_be(&mut output, &proof_x_bigint);
self.append_left_padded_biguint_be(&mut output, &proof_y_bigint);

Ok(output)
}

pub fn append_left_padded_biguint_be(&self, vec: &mut Vec<u8>, biguint: &BigUint) {
let bytes = biguint.to_bytes_be();
let padding = 32usize.saturating_sub(bytes.len());
vec.extend(std::iter::repeat(0).take(padding));
vec.extend_from_slice(&bytes);
}

}
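The 32-byte left padding performed by append_left_padded_biguint_be can be illustrated with plain byte slices. This standalone sketch avoids the num crate, so it is not the exact code above, just the same layout rule:

```rust
/// Left-pad a big-endian byte string to 32 bytes, the fixed width
/// used here for bn254 field-element encodings.
fn left_pad_32_be(bytes: &[u8]) -> Vec<u8> {
    assert!(bytes.len() <= 32, "encoding must fit in 32 bytes");
    let mut out = vec![0u8; 32 - bytes.len()];
    out.extend_from_slice(bytes);
    out
}

fn main() {
    let padded = left_pad_32_be(&[0xde, 0xad, 0xbe, 0xef]);
    assert_eq!(padded.len(), 32);
    assert_eq!(&padded[..28], &[0u8; 28][..]); // leading zeros
    assert_eq!(&padded[28..], &[0xde, 0xad, 0xbe, 0xef]); // value at the tail
    println!("ok");
}
```

Concatenating four such 32-byte strings (commitment x, y, then proof x, y) produces the 128-byte output buffer the caller slices above.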
3 changes: 3 additions & 0 deletions bin/client/src/lib.rs
@@ -22,6 +22,9 @@ use tracing::{error, info};

use hokulea_proof::eigenda_provider::OracleEigenDAProvider;

pub mod cached_eigenda_provider;
pub mod witness;

#[inline]
pub async fn run<P, H>(oracle_client: P, hint_client: H) -> Result<(), FaultProofProgramError>
where
73 changes: 73 additions & 0 deletions bin/client/src/witness.rs
@@ -0,0 +1,73 @@
use alloy_primitives::Bytes;
use alloc::vec::Vec;
use rust_kzg_bn254::kzg::KZG;
use rust_kzg_bn254::blob::Blob;
use ark_bn254::{G1Affine, Fq};
use ark_ff::PrimeField;
use tracing::info;

#[derive(Debug, Clone)]
pub struct EigenDABlobWitness {
pub eigenda_blobs: Vec<Bytes>,
pub commitments: Vec<Bytes>,
pub proofs: Vec<Bytes>,
}

impl EigenDABlobWitness {
pub fn new() -> Self {
EigenDABlobWitness {
eigenda_blobs: Vec::new(),
commitments: Vec::new(),
proofs: Vec::new(),
Collaborator: Do we have any idea ahead of time as to how many blobs we will be reading? Would be useful to preallocate some capacity here.
Collaborator (author): No, a derivation pipeline could be arbitrarily long, and it is stack dependent.

}
}

pub fn write(&mut self, blob: Bytes, commitment: Bytes, proof: Bytes) {
self.eigenda_blobs.push(blob);
self.commitments.push(commitment);
self.proofs.push(proof);
info!("added a blob");
}

pub fn verify(&self) -> bool {
// TODO we should have to specify the details to get a kzg to perform a verification
let kzg = match KZG::setup(
"resources/g1.32mb.point",
"",
Collaborator: What does this comment mean?
Collaborator (author): We should not need so many g1 and g2 points for kzg verification.

"resources/g2.point.powerOf2",
268435456,
1024,
) {
Ok(k) => k,
Err(e) => panic!("cannot setup kzg {}", e),
};


info!("lib_blobs len {:?}", self.eigenda_blobs.len());

// transform to rust-kzg-bn254 inputs types
// TODO should make library do the parsing the return result
let lib_blobs: Vec<Blob> = self.eigenda_blobs.iter().map(|b| Blob::new(b)).collect();
let lib_commitments: Vec<G1Affine> = self.commitments.iter().map(|c| {
let x = Fq::from_be_bytes_mod_order(&c[..32]);
let y = Fq::from_be_bytes_mod_order(&c[32..64]);
G1Affine::new(x, y)
}).collect();
let lib_proofs: Vec<G1Affine> = self.proofs.iter().map(|p| {
let x = Fq::from_be_bytes_mod_order(&p[..32]);
let y = Fq::from_be_bytes_mod_order(&p[32..64]);

G1Affine::new(x, y)
Collaborator: Does G1Affine not have a from function that we could use directly?
Collaborator (author): Not that I can find. It is also used by the kzg-bn254 library.

}).collect();
let pairing_result = kzg
.verify_blob_kzg_proof_batch(&lib_blobs, &lib_commitments, &lib_proofs)
.unwrap();

//info!("lib_blobs {:?}", lib_blobs);
//info!("lib_commitments {:?}", lib_commitments);
//info!("lib_proofs {:?}", lib_proofs);
//info!("pairing_result {:?}", pairing_result);

return pairing_result
}
}
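The verify routine reconstructs each G1 point from a 64-byte big-endian (x, y) encoding. A minimal sketch of that split, using plain slices instead of ark-bn254 field types:

```rust
/// Split a 64-byte uncompressed G1 encoding into its 32-byte
/// big-endian x and y coordinate encodings.
fn split_g1_bytes(encoded: &[u8]) -> ([u8; 32], [u8; 32]) {
    assert_eq!(encoded.len(), 64, "expected 64-byte (x, y) encoding");
    let mut x = [0u8; 32];
    let mut y = [0u8; 32];
    x.copy_from_slice(&encoded[..32]);
    y.copy_from_slice(&encoded[32..]);
    (x, y)
}

fn main() {
    let mut enc = [0u8; 64];
    enc[31] = 1; // x = 1, big-endian
    enc[63] = 2; // y = 2, big-endian
    let (x, y) = split_g1_bytes(&enc);
    assert_eq!(x[31], 1);
    assert_eq!(y[31], 2);
    println!("ok");
}
```

In the real code, each 32-byte half is then reduced into a field element with Fq::from_be_bytes_mod_order before constructing G1Affine.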
1 change: 1 addition & 0 deletions bin/host/Cargo.toml
@@ -25,6 +25,7 @@ reqwest.workspace = true
async-trait.workspace = true
tokio = { workspace = true, features = ["full"] }
clap = { workspace = true, features = ["derive", "env"] }
num.workspace = true

[dev-dependencies]
proptest.workspace = true
12 changes: 7 additions & 5 deletions bin/host/src/eigenda_fetcher/mod.rs
@@ -192,20 +192,20 @@ where
)?;
}

// TODO proof is at the random point, but we need to figure out where to generate
//
// proof is at the random point
Collaborator: Do we need this comment? What does it mean? Is it talking about the length proof? Which random point is it evaluated at? Need more context here.

// Write the KZG Proof as the last element, needed for ZK
//blob_key[88..].copy_from_slice((blob_length).to_be_bytes().as_ref());
//let blob_key_hash = keccak256(blob_key.as_ref());
blob_key[88..].copy_from_slice((blob_length).to_be_bytes().as_ref());
Collaborator: Is this a KZG proof? Isn't blob_length just... a blob length? Should we be using the length_proof instead?
Collaborator (author): it is datalength



//let blob_key_hash = keccak256(blob_key.as_ref());
//kv_write_lock.set(
// PreimageKey::new(*blob_key_hash, PreimageKeyType::Keccak256).into(),
// blob_key.into(),
//)?;
// proof to be done
//kv_write_lock.set(
// PreimageKey::new(*blob_key_hash, PreimageKeyType::GlobalGeneric).into(),
// [1, 2, 3].to_vec(),
// output[64..].to_vec(),
//)?;
} else {
panic!("Invalid hint type: {hint_type}. FetcherWithEigenDASupport.prefetch only supports EigenDACommitment hints.");
@@ -219,4 +219,6 @@

Ok(())
}


}
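The blob_key[88..] write in the hunk above stores the data length as big-endian bytes at the tail of the key. A standalone sketch of that layout — the 96-byte key size and offset 88 mirror the hunk, and the u64 length type is an assumption:

```rust
// Illustrative: a 96-byte preimage key whose last 8 bytes hold the
// blob's data length, big-endian (assumes a u64 length).
fn write_length_suffix(blob_key: &mut [u8; 96], blob_length: u64) {
    blob_key[88..].copy_from_slice(&blob_length.to_be_bytes());
}

fn main() {
    let mut key = [0u8; 96];
    write_length_suffix(&mut key, 300);
    // 300 = 0x012C, so the suffix ends in [.., 0x01, 0x2C].
    assert_eq!(&key[88..], &[0, 0, 0, 0, 0, 0, 1, 44]);
    println!("ok");
}
```

The commented-out lines that follow in the hunk suggest the keccak256 of this key would later index the preimage in the key-value store; that part is still disabled in this revision.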
3 changes: 0 additions & 3 deletions crates/eigenda/src/traits.rs
@@ -13,7 +13,4 @@ pub trait EigenDABlobProvider {

/// Fetches a blob.
async fn get_blob(&mut self, cert: &Bytes) -> Result<Bytes, Self::Error>;

/// Fetches an element from a blob.
async fn get_element(&mut self, cert: &Bytes, element: &Bytes) -> Result<Bytes, Self::Error>;
}