
Conversation

@jimjbrettj

Motivation

Draft PR to show what I have currently for mocking inherent data and digests to support assethub forking

Solution

PR Checklist

  • Added Tests
  • Added Documentation
  • Breaking changes

@alindima

This includes a bunch of unrelated commits, making it very difficult to review. Please only add your changes

@jimjbrettj
Author

This includes a bunch of unrelated commits, making it very difficult to review. Please only add your changes

updated

    Ok(top_map)
}

fn build_forked_chainspec_from_raw_top(

@iulianbarbu Nov 13, 2025


Just to double check, since I think we might be able to simplify the setup here. Please note that I still lack a lot of context, so let me know if my suggestions break any existing decisions/assumptions.

  1. I suspect we might be using an outdated chain spec when fetching it over the network, something like https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/parachains/chain-specs/asset-hub-westend.json, which, tbh, I doubt points to the latest version of asset-hub-westend. We can instead generate it from the latest runtime code with staging-chain-spec-builder: https://crates.io/crates/staging-chain-spec-builder.

Build the runtime:

cargo build --release -p asset-hub-westend-runtime

And then:

chain-spec-builder create --raw-storage --relay-chain "westend-local" --runtime \
    target/release/wbuild/asset-hub-westend-runtime/asset_hub_westend_runtime.wasm named-preset development
  2. At the moment we build the chain spec by fetching it over the network from an existing node. Can't we use the genesis chain spec generated as in 1 instead? Is it helpful for the forking case to have a few blocks built and to use only that storage? Or maybe having genesis storage means some keys are not initialized? Either way, that would be an interesting state to fork from, but yeah, maybe it is worth working first on some state which has more keys initialized.

cc @alindima @jimjbrettj

.expect("Header lookup should succeed")
.expect("Header passed in as parent should be present in backend.");

// // NOTE: Our runtime API doesnt seem to have collect_collation_info available


I think this is because we fork from a genesis assethub, which is missing a few of the current runtime APIs. This comment: https://github.com/paritytech/foundry-polkadot/pull/399/files#r2524212716 suggests a way which should make this runtime API available.

// This assumes that we are using the aura-ext consensus hook in the parachain
// runtime.
// Note: taken from https://github.com/paritytech/polkadot-sdk/issues/7341, but unsure if needed or not
let requires_relay_progress = client


I think we don't need this. I am comparing against latest polkadot-sdk master: https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/polkadot-omni-node/lib/src/nodes/aura.rs#L400.

struct MockTimestampInherentDataProvider;

impl MockTimestampInherentDataProvider {
    fn advance_timestamp(slot_duration: u64) {

@iulianbarbu Nov 14, 2025


Advancing time should factor in a few other things.

  1. anvil-polkadot's time manager: it will account for the time set by the user via RPC, or at genesis.
  2. the checks in the runtime, which require us to set up the scene for block proposing and importing:
    i) when creating a timestamp inherent, we should give it a timestamp from the time manager, with next_timestamp.
    ii) the above point is not enough when proposing the block and importing it, since at those times this and this happen. It means that right before sending the command to seal a new block we should adjust pallet-aura's CurrentSlot storage item to be in accordance with the timestamp inherent. I think it should be done somewhere before we send a command here: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/aura/src/lib.rs#L400-L406. To do it in the mining engine, we'd need to use state injection, so we'd need access to a BackendWithOverlay shared with the ApiServer. We can pass it to the MiningEngine somewhere around here, or somewhere at the start of the node where it is initialized. (A rough sketch of the key/value pair to inject follows below.)
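
For reference, a minimal sketch (not part of the PR) of how the (key, value) pair for pallet-aura's CurrentSlot could be computed so it can be written into parachain storage (e.g. via the BackendWithOverlay mentioned above) right before sealing. The function names are hypothetical, and it assumes the pallet is instantiated as "Aura" in construct_runtime!:

use parity_scale_codec::Encode;
use sp_consensus_slots::Slot;
use sp_core::twox_128;

// Storage key of pallet-aura's `CurrentSlot` value
// (twox_128("Aura") ++ twox_128("CurrentSlot")).
fn aura_current_slot_key() -> Vec<u8> {
    let mut key = twox_128(b"Aura").to_vec();
    key.extend_from_slice(&twox_128(b"CurrentSlot"));
    key
}

// (key, SCALE-encoded slot) pair matching a given timestamp, ready to be
// injected into storage before the seal command is sent.
fn current_slot_kv(timestamp_ms: u64, slot_duration_ms: u64) -> (Vec<u8>, Vec<u8>) {
    let slot = Slot::from(timestamp_ms / slot_duration_ms);
    (aura_current_slot_key(), slot.encode())
}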

Author


For overriding the CurrentSlot storage item, do you think I could use MockValidationDataInherentDataProvider.additional_key_values, similar to how moonbeam does it here, to achieve this?


Hmm, that might do it. I was thinking of using a feature we already have in anvil, which can update the storage in a similar fashion before block proposing/import: https://github.com/paritytech/foundry-polkadot/blob/master/crates/anvil-polkadot/src/substrate_node/service/backend.rs#L168-L171. The example I linked is a method used to inject custom timestamps into the pallet-timestamp storage (especially useful when time is modified by users at genesis or via RPC while the node is running). Searching the code base for the function name should give a rough idea of how to use it, or how to add something new to the BackendWithOverlay (e.g. something to modify the current slot).

BUT, your suggestion feels more contained (as in, we imply we want the current slot modified at the same time we mock the timestamp/validation_data inherents), so I would give it a shot too and test against it.


@skunert left a comment


Left some comments. I think it's going in the right direction, but slot duration and para id need to be read from the runtime.


// let para_id = Id::new(anvil_config.get_chain_id().try_into().unwrap());

let para_id = Id::new(420420421);


These chains all have the GetParachainInfo runtime api. So you should be able to just do client.runtime_api().parachain_id().

https://github.com/polkadot-fellows/runtimes/blob/c1a537c4a2b2e55624c491703f5ebbc8c8f33888/system-parachains/asset-hubs/asset-hub-kusama/src/lib.rs#L2590
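
A hedged sketch of what that call could look like (FullClient stands in for whatever client type the node actually uses; it is not a name from this PR):

use cumulus_primitives_core::{GetParachainInfo, ParaId};
use sp_api::ProvideRuntimeApi;
use sp_blockchain::HeaderBackend;

// Read the para id from the runtime instead of hardcoding it. This returns an
// error if the WASM blob does not expose GetParachainInfo.
fn runtime_para_id(client: &FullClient) -> Result<ParaId, sp_api::ApiError> {
    let best_hash = client.info().best_hash;
    client.runtime_api().parachain_id(best_hash)
}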



Maybe one problem you will face is that this is not available because our internal substrate runtime does not implement it. In that case we need to introduce a fake_runtime_api here, like in omni-node. This is done for example here. The point of this is that the client does not know which runtime APIs are available in the WASM blob, so the fake runtime API tells it which ones are available.

If the WASM blob in the end does not contain it, it will error.
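
For illustration, this is roughly how the relevant part of such a fake runtime API would look (an excerpt only; a real fake_runtime_api implements the full list of APIs the node expects, like omni-node's does, and Runtime, Block and ParaId are assumed to be the crate's own type aliases):

sp_api::impl_runtime_apis! {
    // The body is never executed natively; the impl only advertises to the
    // client that the WASM blob provides this API.
    impl cumulus_primitives_core::GetParachainInfo<Block> for Runtime {
        fn parachain_id() -> ParaId {
            unimplemented!()
        }
    }
}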


@iulianbarbu Nov 14, 2025


These chains all have the GetParachainInfo runtime api. So you should be able to just do

They should have it when forking from a state closer to present times. If we use asset-hub-westend closer to genesis, this will probably fail, right? We still need to fork a present-day asset-hub-westend, which has most APIs implemented, to be able to actually test this (although older forked state is also relevant for correct handling). Curious if this makes sense to you: #399 (comment). Other than this, a FakeRuntimeApi definitely makes sense.



You are right, but I cannot see any use case for forking very old asset-hub state. For now it seems like a good compromise to make.

Regarding the code you linked, Alin told me we are going to use the lazy loading backend and that the chain-spec stuff is temporary anyway.

Author


Yes indeed, I am just waiting until a PR has been opened with lazy loading forking; then I will base my work off of that.

// }
// };

let slot_duration = sc_consensus_aura::SlotDuration::from_millis(6000);


You should absolutely fetch this from the runtime via client.runtime_api().slot_duration(). Alternatively, you can directly pass the client to the AuraConsensusDataProvider below.
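
A minimal sketch of the first option, assuming `client` implements ProvideRuntimeApi for a runtime exposing AuraApi:

// Query the slot duration from the runtime's AuraApi at the best block,
// instead of hardcoding 6000 ms.
let slot_duration = sc_consensus_aura::slot_duration(&*client)?;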


@iulianbarbu left a comment


Thanks, looks like there is some progress in terms of mocking inherents.

// Mine
/// Consensus data provider for Aura. This allows manual-seal driven nodes to author valid
/// AURA blocks. It will inspect incoming [`InherentData`] and look for included timestamps. Based
/// on these timestamps, the [`AuraConsensusDataProvider`] will emit fitting digest items.
pub struct AuraConsensusDataProvider<B, P> {


}

impl cumulus_primitives_core::GetParachainInfo<Block> for Runtime {
    fn parachain_id() -> ParaId {

@iulianbarbu Nov 18, 2025


Was the code not compiling without adding this implementation here? That might be because substrate-runtime fills the place of the FakeRuntimeApi suggested by Sebastian earlier. We'll probably need to make the substrate-runtime closer to, say, asset-hub-westend and make it look like a parachain runtime, but it would be great to have only the required changes in this PR (for ease of review). I think it would be best to separate the non-required substrate-runtime changes from this PR, but we'll probably follow up in a context separate from the forking feature.

I think we'll soon need to update feature/forking to the latest foundry-polkadot/master and then rebase your branch on top of it, for ease of review. At the same time, can you let us know how you set up the environment for testing this? Having the exact same setup as yours would be useful for debugging/testing things while reviewing.

Author


Yeah, it appears not to compile without this addition.

This makes sense to me, and I can bring it up in my meeting today to figure out my next steps. This PR is just a temporary one to get feedback, and since we are close to having a PR opened with our lazy loading forking I am hoping to build this on top of that to make the diffs much cleaner.


@iulianbarbu Nov 18, 2025


and since we are close to having a PR opened with our lazy loading forking I am hoping to build this on top of that to make the diffs much cleaner

Do you mean this PR: #392, which is already opened? Or did you mean that you are waiting for it to be merged? That might take a while, and I thought we would try to hack our way into forking in this PR, from an AURA-related perspective plus parachain inherents mocking, without the lazy loading.

Later edit: the above can still be done without lazy loading, and with a clean feature branch (updated to latest master). I would say we should assume the forking contributions can take a while to be tested and merged, but they should be based on the latest foundry-polkadot master. We can sync about this in the meeting today.

Author


Okay, perfect, yeah, I was planning to discuss this in the call, so that sounds good.

So Diego currently has a PR in our fork with lazy loading forking support, which is working and passing tests (here), so I think it will probably be opened upstream soon. I was thinking to base my work off of it for two reasons: 1) it removes the temp forking solution I use here, so the diff is cleaner; 2) it uses the latest runtime version to test against.

But that being said I am happy to handle this however you all prefer!
