feat: enable post-capella HistoricalSummary Header validation #1774


Open: KolbyML wants to merge 3 commits into master from capella-proofs

Conversation

Member

@KolbyML KolbyML commented Apr 22, 2025

What was wrong?

HistoricalSummary HeaderWithProof validation wasn't enabled

How was it fixed?

Enabled it. The E2HS bridge has no way to verify post-Capella HeaderWithProofs, as it doesn't have access to a provider and we don't want to give it access to one. So the assumption is that the E2HS bridge is downloading the files from a trusted source.

For this to work, the BeaconNetwork is required, or validation will fail.
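
As a rough sketch of what this looks like at the call site (based on the diff snippets quoted in the review below; the variable names and setup around the call are illustrative, not the PR's exact code):

    // Node path: historical summaries come from the header oracle (backed by the
    // beacon network), so post-Capella HeaderWithProofs can be fully validated.
    let result = header_validator
        .validate_header_with_proof(
            &hwp,
            HistoricalSummariesProvider::HeaderOracle(Arc::new(RwLock::new(header_oracle))),
        )
        .await;

    // The E2HS bridge has no such oracle; it would use HistoricalSummariesProvider::None,
    // under which post-Capella HistoricalSummary proofs cannot be verified, hence the
    // trusted-source assumption above.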

@KolbyML KolbyML requested review from ogenev and morph-dev April 22, 2025 22:18
@KolbyML KolbyML self-assigned this Apr 22, 2025
@KolbyML KolbyML force-pushed the capella-proofs branch 3 times, most recently from 4dfa0bb to 4f2dd61 on April 23, 2025 01:26
Collaborator

@morph-dev morph-dev left a comment

Few comments. Would like @ogenev to take a look as well.

/// The historical summaries are provided by the header oracle.
HeaderOracle(Arc<RwLock<HeaderOracle>>),
/// The historical summaries are provided by passed historical summaries.
HistoricalSummaries(HistoricalSummaries),
Collaborator

How exactly is this used? Only in tests or?
Can/should we make this a collection?
Either HashMap<u64, HistoricalSummaries> or Vec<(u64, HistoricalSummaries)> (the u64 would be epoch).

Member Author

@KolbyML KolbyML Apr 23, 2025

        let historical_summaries = HistoricalSummaries::from_ssz_bytes(&historical_summaries_bytes)
            .expect("cannot decode HistoricalSummaries bytes");

I don't think we should/need to make this a collection. HistoricalSummaries are indexed from 0, and they are also only about 40k bytes, super tiny in terms of the test data we will add. So I don't think we need to hyper-optimize this to reduce our test data size; it's a drop in the bucket compared to our real offenders. We also only need to store one HistoricalSummaries for all of our HistoricalSummaries tests; we would just update it over time.

Collaborator

I think I'm not familiar with how exactly historical summaries work, so correct me if I'm wrong.

The get_historical_summary function accepts epoch for a reason. So whatever historical summaries we use, we should know that they are valid only up to some slot.

After reading a bit about it, I agree that we probably don't need a collection. But I would still include the epoch.
I would personally prefer HistoricalSummaries { epoch: u64, historical_summaries: HistoricalSummaries }, but I'm not against a simple HistoricalSummaries(u64, HistoricalSummaries).

This way, get_historical_summary can return a descriptive error, which should be much better than what would happen otherwise (I guess an error while checking the merkle proof).
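
For concreteness, a sketch of the variant shape being proposed here (shape only; this is not the PR's actual code, and the doc comments are illustrative):

    /// The historical summaries are provided by the header oracle.
    HeaderOracle(Arc<RwLock<HeaderOracle>>),
    /// Historical summaries known to be valid up to (and including) `epoch`, so a
    /// stale provider can be reported with a descriptive error instead of failing
    /// later in the merkle-proof check.
    HistoricalSummaries {
        epoch: u64,
        historical_summaries: HistoricalSummaries,
    },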

Member Author

I don't understand why we would need to return the epoch. get_historical_summary() returns the HistoricalSummaries for the given epoch or a more up-to-date HistoricalSummaries. HistoricalSummaries are already proven; get_historical_summary() will error if it can't get the HistoricalSummaries of at least the given epoch.

Also, I am a little confused about what you are referring to now:

  • what HistoricalSummariesProvider::get_historical_summary() returns? I don't think it makes sense to return it here, as if we got the epoch we know it contains the expected data for that epoch
  • what HeaderOracle::get_historical_summary() returns? For our use case it doesn't matter what epoch of HistoricalSummaries the database returns, as it either returns the requested version or a more up-to-date version

header_validator
.validate_header_with_proof(
&hwp,
HistoricalSummariesProvider::HeaderOracle(Arc::new(RwLock::new(
Collaborator

nit: If I'm not mistaken, since this is pre-merge, this can be HistoricalSummariesProvider::None

.validate_header_with_proof(&header_with_proof)
.validate_header_with_proof(
&header_with_proof,
HistoricalSummariesProvider::HeaderOracle(Arc::new(RwLock::new(
Collaborator

also here

.validate_header_with_proof(&hwp)
.validate_header_with_proof(
&hwp,
HistoricalSummariesProvider::HeaderOracle(Arc::new(RwLock::new(
Collaborator

and here

.validate_header_with_proof(&future_hwp)
.validate_header_with_proof(
&future_hwp,
HistoricalSummariesProvider::HeaderOracle(Arc::new(RwLock::new(
Collaborator

and here

@@ -153,15 +201,21 @@ impl HeaderValidator {
let gen_index = EPOCH_SIZE + block_root_index;
let historical_summary_index =
Collaborator

Are you sure that this index works both pre and post Dencun? Do we have tests that cover all forks?

Member Author

Are you sure that this index works both pre and post Dencun?

The index calculation starts at CAPELLA_FORK_EPOCH, so you are basically just counting 8192-slot increments from the fork. The intervals of HistoricalSummaries remain constant from that point on.
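
A small self-contained sketch of that calculation (the constants are mainnet values; EPOCH_SIZE follows the snippet under discussion, while SLOTS_PER_EPOCH and the function names are illustrative, not the PR's code):

    const SLOTS_PER_EPOCH: u64 = 32;
    const CAPELLA_FORK_EPOCH: u64 = 194_048; // mainnet
    const EPOCH_SIZE: u64 = 8_192; // slots per HistoricalSummary period

    /// Which HistoricalSummary covers `slot`: count 8192-slot periods starting
    /// from the Capella fork slot.
    fn historical_summary_index(slot: u64) -> u64 {
        let capella_fork_slot = CAPELLA_FORK_EPOCH * SLOTS_PER_EPOCH;
        (slot - capella_fork_slot) / EPOCH_SIZE
    }

    /// Position of the beacon block root within its 8192-slot period.
    fn block_root_index(slot: u64) -> u64 {
        slot % EPOCH_SIZE
    }

    fn main() {
        let slot = 7_000_000; // an arbitrary post-Capella slot
        println!(
            "summary index: {}, block root index: {}",
            historical_summary_index(slot),
            block_root_index(slot)
        );
    }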

Do we have tests that cover all forks?

We don't have proof generation yet; I am assuming that is something you would need to add over time.

Collaborator

Sorry, I marked the wrong lines.

I think that calling Self::verify_beacon_block_proof a few lines above wouldn't work correctly. It even says that it's for pre-Capella. This is the primary reason why I want @ogenev to take a look at this, as I believe he originally implemented it.


And I'm not comfortable with us shipping code that doesn't have a single test for all forks. I would rather wait with this PR until we have those test vectors and we can do it properly.

Member Author

@KolbyML KolbyML Apr 23, 2025

I think that calling Self::verify_beacon_block_proof a few lines above wouldn't work correctly. It even says that it's for pre-Capella. This is the primary reason why I want @ogenev to take a look at this, as I believe he originally implemented it.

https://github.com/ethereum/trin/pull/1748/files#diff-06484484ad3bff928c828d2fc2013746b289cb23a51c53c42bab5a13966f87a9R105-R123

The comment is just out of date; the beacon_block_body_root_proof doesn't change between forks. As shown in the PR lines linked above, the beacon_block_body_root_proof is the same format for Bellatrix, Capella, and Deneb.

Member Author

@KolbyML KolbyML Apr 23, 2025

I am looking forward to @ogenev's review as well.

And I'm not comfortable with us shipping code that doesn't have a single test for all forks.

We already have tests for Capella (see above). I think it is a little unreasonable to expect tests for all forks at this point.

I would rather wait with this PR until we have those test vectors and we can do it properly.

I don't think it would make a difference: if there was a mistake, validation would fail, and that would be our worst case; the best case is that things just work. We have been waiting for a lot of this code for quite some time. We can always fix things in the future, so I think this fear is unwarranted in these specific circumstances.

Currently we have nothing, so there is only one place to go, and that is up. The world won't end; we will have tests for all forks before a user even touches Trin.

Member Author

Milos and I talked in DMs. He didn't see that we already have tests for Capella. I will double-check the tests and see if I can use the test data from ethereum/portal-spec-tests#45 to test Deneb; Milos doesn't mind not having tests for Pectra yet.

I will do this and ping when I have some time.

Member

I think that calling Self::verify_beacon_block_proof a few lines above wouldn't work correctly. It even says that it's for pre-Capella. This is the primary reason why I want @ogenev to take a look at this, as I believe he originally implemented it.

This should work for post-Capella (including Pectra) because the indexes don't change; it verifies that the execution block header is included in the beacon block.

However, we need to add some tests before pushing to prod.

pub async fn validate_header_with_proof(
&self,
header_with_proof: &HeaderWithProof,
historical_summaries_provider: HistoricalSummariesProvider,
Collaborator

Any reason we need to pass HistoricalSummariesProvider here, instead of in the constructor?

If I'm not mistaken, real usage will use the oracle, tests would use either a constant or none, and the bridge would use none. But none of these cases requires a different value between calls.
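
A minimal sketch of the constructor-injection shape being suggested, assuming the provider simply becomes a field on the validator (field and method names are illustrative, not the actual trin API):

    pub struct HeaderValidator {
        pub pre_merge_acc: PreMergeAccumulator,
        historical_summaries_provider: HistoricalSummariesProvider,
    }

    impl HeaderValidator {
        pub fn new(historical_summaries_provider: HistoricalSummariesProvider) -> Self {
            Self {
                pre_merge_acc: PreMergeAccumulator::default(),
                historical_summaries_provider,
            }
        }

        // validate_header_with_proof would then keep a single-argument signature and
        // consult self.historical_summaries_provider internally for post-Capella headers.
    }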

Member Author
KolbyML commented Apr 23, 2025

@morph-dev ready for another look. @ogenev do you think you will be able to review this PR before you go on leave?

@KolbyML KolbyML requested a review from morph-dev April 23, 2025 16:54
HistoricalSummaries(HistoricalSummaries),
/// We don't have historical summaries, for example the E2HS bridge doesn't use a provider and
/// hence can't get HistoricalSummaries. HistoricalSummary proof validation will fail.
#[default]
Collaborator

I don't think this should be default. Probably this type shouldn't implement Default.

The same applies to HeaderValidator. The few places that currently use the default HeaderValidator should explicitly create one with HistoricalSummariesProvider::None.

If you want an "easy" way to create a HeaderValidator without a HistoricalSummariesProvider, you can have a constructor that is documented accordingly: HeaderValidator::new_without_historical_summaries().
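
Building on the constructor sketch above, a hypothetical version of that documented escape hatch:

    impl HeaderValidator {
        /// Explicitly opt out of historical summaries. Post-Capella
        /// HistoricalSummary proof validation will always fail for a validator
        /// built this way; intended for the E2HS bridge and for tests.
        pub fn new_without_historical_summaries() -> Self {
            Self::new(HistoricalSummariesProvider::None)
        }
    }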

true => Some(Arc::new(
lookup_epoch_acc(
epoch_index,
&HeaderValidator::default().pre_merge_acc,
Collaborator

Why don't we just use PreMergeAccumulator::default()?

Member Author

HeaderValidator would need to support multiple networks in the future. PreMergeAccumulator currently only supports mainnet, but I don't see why we wouldn't add Sepolia later.

Member Author

I will make the recommended change for the time being

Member

@ogenev ogenev left a comment

Did an initial review; other than some naming comments, I don't see any major issues.

However, we should add Deneb tests before merging to make sure that everything works as expected (this validation has not been tested on Deneb but in theory should work).

@@ -128,13 +183,11 @@ impl HeaderValidator {
}

/// A method to verify the chain of proofs for post-Capella execution headers.
#[allow(dead_code)] // TODO: Remove this when used
fn verify_post_capella_header(
async fn verify_historical_summaries(
Member

I'm confused by this name; the goal here is to verify the post-Capella execution header against the historical summaries, not to verify the historical summaries themselves, right?

}
}
}

/// A method to verify the chain of proofs for post-merge/pre-Capella execution headers.
fn verify_post_merge_pre_capella_header(
fn verify_historical_roots(
Member

This name is confusing: we don't verify the historical roots, we verify the header against the historical roots.


Member
ogenev commented Apr 24, 2025

@morph-dev ready for another look. @ogenev do you think you will be able to review this PR before you go on leave?

Yeah, even if I go on leave sooner, I will find time to review it.
