diff --git a/docs.json b/docs.json
index 659ff78845..5bfbc2674a 100644
--- a/docs.json
+++ b/docs.json
@@ -898,6 +898,7 @@
{
"group": "AIR Development",
"pages": [
+ "learn/S-two-book/air-development/index",
{
"group": "First Breath of AIR",
"pages": [
@@ -929,27 +930,18 @@
]
},
{
- "group": "Stwo: Under the Hood",
+ "group": "S-two: Under the Hood",
"pages": [
"learn/S-two-book/how-it-works/index",
- {
- "group": "AIR",
- "pages": [
- "learn/S-two-book/how-it-works/air/index",
- "learn/S-two-book/how-it-works/air/overview",
- "learn/S-two-book/how-it-works/air/components",
- "learn/S-two-book/how-it-works/air/prover_components"
- ]
- },
"learn/S-two-book/how-it-works/mersenne-prime",
"learn/S-two-book/how-it-works/circle-group",
- "learn/S-two-book/how-it-works/lookups",
{
"group": "Circle Polynomials",
"pages": [
"learn/S-two-book/how-it-works/circle-polynomials/index",
"learn/S-two-book/how-it-works/circle-polynomials/columns",
- "learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly"
+ "learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly",
+ "learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly"
]
},
{
@@ -957,9 +949,27 @@
"pages": [
"learn/S-two-book/how-it-works/circle-fft/index",
"learn/S-two-book/how-it-works/circle-fft/algorithm",
- "learn/S-two-book/how-it-works/circle-fft/basis",
+ "learn/S-two-book/how-it-works/circle-fft/twiddles",
"learn/S-two-book/how-it-works/circle-fft/interpolation",
- "learn/S-two-book/how-it-works/circle-fft/twiddles"
+ "learn/S-two-book/how-it-works/circle-fft/basis"
+ ]
+ },
+ {
+ "group": "Vector Commitment Scheme",
+ "pages": [
+ "learn/S-two-book/how-it-works/vcs/index",
+ "learn/S-two-book/how-it-works/vcs/hash_functions",
+ "learn/S-two-book/how-it-works/vcs/merkle_prover",
+ "learn/S-two-book/how-it-works/vcs/merkle_verifier"
+ ]
+ },
+ {
+ "group": "AIR to Composition Polynomial",
+ "pages": [
+ "learn/S-two-book/how-it-works/air/index",
+ "learn/S-two-book/how-it-works/air/overview",
+ "learn/S-two-book/how-it-works/air/components",
+ "learn/S-two-book/how-it-works/air/prover_components"
]
},
{
@@ -972,7 +982,7 @@
]
},
{
- "group": "PCS",
+ "group": "Polynomial Commitment Scheme",
"pages": [
"learn/S-two-book/how-it-works/pcs/index",
"learn/S-two-book/how-it-works/pcs/overview",
@@ -981,31 +991,18 @@
]
},
{
- "group": "STARK Proof",
+ "group": "Proof Generation and Verification",
"pages": [
"learn/S-two-book/how-it-works/stark_proof/index",
"learn/S-two-book/how-it-works/stark_proof/prove",
"learn/S-two-book/how-it-works/stark_proof/verify"
]
},
- {
- "group": "VCS",
- "pages": [
- "learn/S-two-book/how-it-works/vcs/index",
- "learn/S-two-book/how-it-works/vcs/hash_functions",
- "learn/S-two-book/how-it-works/vcs/merkle_prover",
- "learn/S-two-book/how-it-works/vcs/merkle_verifier"
- ]
- }
+ "learn/S-two-book/how-it-works/lookups"
]
},
- {
- "group": "References",
- "pages": [
- "learn/S-two-book/benchmarks/index",
- "learn/S-two-book/awesome-stwo/index"
- ]
- }
+ "learn/S-two-book/benchmarks/index",
+ "learn/S-two-book/awesome-stwo/index"
]
},
{
diff --git a/learn/S-two-book/air-development/additional-examples/index.mdx b/learn/S-two-book/air-development/additional-examples/index.mdx
index 0b660c3488..dba4f80749 100644
--- a/learn/S-two-book/air-development/additional-examples/index.mdx
+++ b/learn/S-two-book/air-development/additional-examples/index.mdx
@@ -9,7 +9,7 @@ Here, we introduce some additional AIRs that may help in designing more complex
A selector is a column of 0s and 1s that selectively enables or disables a constraint. One example of a selector is the `IsFirst` column which has a value of 1 only on the first row. This can be used when constraints are defined over both the current and previous rows but we need to make an exception for the first row.
-For example, as seen in [Figure 1](#fig-selectors), when we want to track the cumulative sum of a column, i.e. $b_2 = a_1 + a_2$, the previous row of the first row points to the last row, creating an incorrect constraint $a_1 = b_4 + b_1$. Thus, we need to disable the constraint for the first row and enable a separate constraint $a_1 = b_1$. This can be achieved by using a selector column that has a value of 1 on the first row and 0 on the other rows and multiplying the constraint by the selector column:
+For example, as seen in [Figure 1](#figure-1), when we want to track the cumulative sum of a column, i.e. $b_2 = a_1 + a_2$, the previous row of the first row points to the last row, creating an incorrect constraint $a_1 = b_4 + b_1$. Thus, we need to disable the constraint for the first row and enable a separate constraint $a_1 = b_1$. This can be achieved by using a selector column that has a value of 1 on the first row and 0 on the other rows and multiplying the constraint by the selector column:
$$
(1 - \text{IsFirst(X)}) \cdot (A(\omega \cdot X) - B(X) - B(\omega \cdot X)) + \text{IsFirst(X)} \cdot (A(X) - B(X)) = 0
@@ -17,42 +17,46 @@ $$
where $X$ denotes the row preceding $\omega \cdot X$ in the multiplicative subgroup of the finite field.
+

-
*Figure 1: IsFirst selector*
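The gated constraint above can be sanity-checked with a toy calculation. The sketch below is plain Rust over the M31 field (illustrative arithmetic only, not S-two's constraint API); rows are 0-indexed, and the `IsFirst` branch replaces the wrap-around constraint on the first row.

```rust
// Minimal sketch of
// (1 - IsFirst)·(A(ωX) − B(X) − B(ωX)) + IsFirst·(A(X) − B(X))
// evaluated on every row of a 4-row trace.
const P: u64 = (1 << 31) - 1; // M31 prime

fn sub(x: u64, y: u64) -> u64 {
    (x + P - y) % P
}

/// Evaluates the gated constraint on each row. Without the selector,
/// row 0 would wrap around to the last row and fail.
fn gated_constraint(a: &[u64], b: &[u64]) -> Vec<u64> {
    (0..a.len())
        .map(|i| {
            if i == 0 {
                sub(a[0], b[0]) // IsFirst = 1: enforce a_1 = b_1
            } else {
                sub(a[i], (b[i - 1] + b[i]) % P) // IsFirst = 0
            }
        })
        .collect()
}
```

With `b = [3, 7, 2, 9]` and `a` derived as `a[0] = b[0]`, `a[i] = b[i-1] + b[i]`, every row evaluates to zero.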
+
## IsZero
Checking that a certain field element is zero is a common use case when writing AIRs. To do this efficiently, we can use the property of finite fields that a non-zero field element always has a multiplicative inverse.
-For example, in [Figure 2](#fig-is-zero), we want to check whether a field element in $\mathbb{F}_5$ is zero. We create a new column that contains the multiplicative inverse of each field element $a_i$. We then use the multiplication of the two columns and check whether the result is 0 or 1. Note that if the existing column has a zero element, we can insert any value in the new column since the multiplication will always be zero.
+For example, in [Figure 2](#figure-2), we want to check whether a field element in $\mathbb{F}_5$ is zero. We create a new column that contains the multiplicative inverse of each field element $a_i$. We then use the multiplication of the two columns and check whether the result is 0 or 1. Note that if the existing column has a zero element, we can insert any value in the new column since the multiplication will always be zero.
This way, we can create a constraint that uses the `IsZero` condition as part of the constraint, e.g. $(1 - (A(X) \cdot Inv(X))) \cdot (\text{constraint\_1}) + (A(X) \cdot Inv(X)) \cdot (\text{constraint\_2}) = 0$, which checks $\text{constraint\_1}$ if $A(X)$ is 0 and $\text{constraint\_2}$ if $A(X)$ is not 0.
+

-
*Figure 2: IsZero over a prime field 5*
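The inverse trick can be sketched directly over $\mathbb{F}_5$ (plain Rust, not S-two's API). For nonzero $a$ the witness column holds $a^{-1}$ so that $a \cdot a^{-1} = 1$; for $a = 0$ any filler value works, since the product is zero regardless.

```rust
// IsZero gadget over F_5 using only field operations.
const P: u64 = 5;

/// Multiplicative inverse via Fermat's little theorem: a^(p-2) mod p.
/// Returns 0 for a = 0, which serves as the filler value.
fn inv(a: u64) -> u64 {
    let mut r = 1;
    for _ in 0..P - 2 {
        r = r * a % P;
    }
    r
}

/// The field expression 1 − a·inv(a): evaluates to 1 iff a == 0.
fn is_zero(a: u64) -> u64 {
    (1 + P - a * inv(a) % P) % P
}
```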
+
## Public Inputs
When writing AIRs, we may want to expose some values in the trace to the verifier to check in the open. For example, when running an AIR for a Cairo program, we may want to check that the program that was executed is the correct one.
-In S-two, we can achieve this by adding the public input portion of the trace as a LogUp column as negative multiplicity. As shown in [Figure 3](#fig-public-inputs), the public inputs $a_1, a_2$ are added as LogUp values with negative multiplicity $\frac{-1}{X - a_1}$ and $\frac{-1}{X - a_2}$. The public inputs are given to the verifier as part of the proof and the verifier can directly compute the LogUp values with positive multiplicity $\frac{1}{X - a_1}$ and $\frac{1}{X - a_2}$ and add it to the LogUp sum and check that the total sum is 0.
+In S-two, we can achieve this by adding the public input portion of the trace as a LogUp column with negative multiplicity. As shown in [Figure 3](#figure-3), the public inputs $a_1, a_2$ are added as LogUp values with negative multiplicity $\frac{-1}{X - a_1}$ and $\frac{-1}{X - a_2}$. The public inputs are given to the verifier as part of the proof, so the verifier can directly compute the corresponding LogUp values with positive multiplicity $\frac{1}{X - a_1}$ and $\frac{1}{X - a_2}$, add them to the LogUp sum, and check that the total sum is 0.
+

-
*Figure 3: Public inputs*
+
-One important thing to note is that the public inputs must be added to the Fiat-Shamir channel before drawing random elements for the interaction trace. We refer the reader to this [example implementation](../../stwo-examples/examples/public_input.rs) for reference.
+One important thing to note is that the public inputs must be added to the Fiat-Shamir channel before drawing random elements for the interaction trace. We refer the reader to this [example implementation](https://github.com/starknet-io/starknet-docs/blob/main/learn/S-two-book/stwo-examples/examples/public_input.rs) for reference.
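The cancellation between the prover's and verifier's contributions can be checked with a toy calculation. The sketch below is plain Rust over the M31 field (illustrative only, not S-two's API; the challenge `x` stands in for the Fiat-Shamir element): the prover contributes $\frac{-1}{X - a_i}$ per public input and the verifier recomputes $\frac{1}{X - a_i}$, so the total must be 0.

```rust
// Toy check that public-input LogUp terms balance to zero.
const P: u64 = (1 << 31) - 1; // M31 prime

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    r
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2)
}

/// Prover side: −1/(x − a_i); verifier side: +1/(x − a_i).
fn public_input_balance(x: u64, public_inputs: &[u64]) -> u64 {
    let mut total = 0u64;
    for &a in public_inputs {
        let term = inv((x + P - a) % P);
        total = (total + P - term) % P; // prover: negative multiplicity
        total = (total + term) % P;     // verifier: positive multiplicity
    }
    total
}
```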
## XOR
We can also handle XOR operations as part of the AIR. First, as we did in the [Components](../components/index) section, we create a computing component and a scheduling component. Then, we connect the two components using lookups: the computing component sets the LogUp value as a negative multiplicity and the scheduling component sets the same value as a positive multiplicity.
-For example, [Figure 4](#fig-xor) shows the XOR operation for 4-bit integers. To accommodate the entire combination of inputs, the size of the trace for the computing component is $2^8=256$ rows. Note that the size of the trace for the scheduling component does not have to be the same as the computing component.
+For example, [Figure 4](#figure-4) shows the XOR operation for 4-bit integers. To accommodate the entire combination of inputs, the size of the trace for the computing component is $2^8=256$ rows. Note that the size of the trace for the scheduling component does not have to be the same as the computing component.
+

-
*Figure 4: XOR operations for 4-bit integers*
+
Note that for larger integers, we may need to decompose them into smaller limbs to avoid creating large tables. Also note that the M31 field does not fully support XOR over 31-bit integers, since the value $2^{31} - 1$ cannot be represented; in any case, a full 31-bit table would be infeasible, as it would require on the order of $2^{31} \times 2^{31}$ entries.
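The limb-decomposition idea can be sketched in plain Rust (illustrative only, not S-two's API): a 4-bit XOR table has $2^8 = 256$ rows, and an 8-bit XOR is assembled from two 4-bit lookups instead of one $2^{16}$-row table.

```rust
/// 4-bit XOR table: row index encodes both operands, i = (x << 4) | y,
/// and the entry stores x ^ y.
fn xor4_table() -> Vec<u8> {
    (0u16..256).map(|i| ((i >> 4) ^ (i & 0xf)) as u8).collect()
}

/// 8-bit XOR via two lookups into the 4-bit table: one per limb.
fn xor8_via_limbs(a: u8, b: u8, table: &[u8]) -> u8 {
    let lo = table[(((a & 0xf) as usize) << 4) | (b & 0xf) as usize];
    let hi = table[(((a >> 4) as usize) << 4) | (b >> 4) as usize];
    (hi << 4) | lo
}
```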
diff --git a/learn/S-two-book/air-development/components/index.mdx b/learn/S-two-book/air-development/components/index.mdx
index 3e5f8b349e..60d3ddbd33 100644
--- a/learn/S-two-book/air-development/components/index.mdx
+++ b/learn/S-two-book/air-development/components/index.mdx
@@ -13,19 +13,21 @@ One of the most common use cases of components is to separate frequently used fu
To illustrate how to use components, we will create two components where the main component calls a hash function component. For simplicity, instead of an actual hash function, the second component will compute $x^5 + 1$ from an input $x$. This component will have, in total, three columns: [input, intermediate, output], which will correspond to the values $[x, x^3, x^5 + 1]$. Our main component, on the other hand, will have two columns, [input, output], which corresponds to the values $[x, x^5 + 1]$.
-We'll refer to the main component as the **scheduling component** and the hash function component as the **computing component**, since the main component is essentially _scheduling_ the hash function component to run its function with a given input and the hash function component _computes_ on the provided input. As can be seen in [Figure 1](#fig-component-example), the input and output of each component are connected by lookups.
+We'll refer to the main component as the **scheduling component** and the hash function component as the **computing component**, since the main component is essentially _scheduling_ the hash function component to run its function with a given input and the hash function component _computes_ on the provided input. As can be seen in [Figure 1](#figure-1), the input and output of each component are connected by lookups.
+

-
*Figure 1: Scheduling and Computing components*
+
### Design
+

-
*Figure 2: Traces of each component*
+
-When we implement this in S-two, the traces of each component will look like [Figure 2](#fig-component-trace) above. Each component has its own original and LogUp traces, and the inputs and outputs of each component are connected by lookups. Since the scheduling component sets the LogUp value as a positive multiplicity and the computing component sets the same value as a negative multiplicity, the verifier can simply check that the sum of the two LogUp columns is zero. Note that we combine the input and output randomly as
+When we implement this in S-two, the traces of each component will look like [Figure 2](#figure-2) above. Each component has its own original and LogUp traces, and the inputs and outputs of each component are connected by lookups. Since the scheduling component sets the LogUp value as a positive multiplicity and the computing component sets the same value as a negative multiplicity, the verifier can simply check that the sum of the two LogUp columns is zero. Note that we combine the input and output randomly as
$$
\frac{1}{Z - x \cdot \alpha^0 - (x^5+1) \cdot \alpha^1}
@@ -57,7 +59,7 @@ would be valid.
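The random combination of input and output can be checked end to end with a toy calculation. The sketch below is plain Rust over the M31 field with hypothetical challenge values `z` and `alpha` (not S-two's API): each pair contributes the term $\frac{1}{Z - x \cdot \alpha^0 - (x^5+1) \cdot \alpha^1}$, added by the scheduling component and subtracted by the computing component, so the claimed sum is zero exactly when both components saw the same inputs.

```rust
// Toy claimed-sum check for the scheduling/computing lookup.
const P: u64 = (1 << 31) - 1; // M31 prime

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    r
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2)
}

/// The LogUp term for one (input, output) pair under challenges z, alpha.
fn lookup_term(x: u64, z: u64, alpha: u64) -> u64 {
    let out = (pow(x, 5) + 1) % P;             // x^5 + 1
    let lin = (x + out * alpha % P) % P;       // x·α^0 + (x^5+1)·α^1
    inv((z + P - lin) % P)
}

/// Claimed sum over both components: + for scheduling, − for computing.
fn claimed_sum(scheduled: &[u64], computed: &[u64], z: u64, alpha: u64) -> u64 {
    let mut s = 0u64;
    for &x in scheduled {
        s = (s + lookup_term(x, z, alpha)) % P;
    }
    for &x in computed {
        s = (s + P - lookup_term(x, z, alpha)) % P;
    }
    s
}
```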
Let's move on to the implementation.
-```rust,ignore
+```rust
fn main() {
// --snip--
@@ -121,7 +123,7 @@ The code above for proving the components should look pretty familiar by now. Si
Let's take a closer look at how the LogUp columns are generated.
-```rust,ignore
+```rust
fn gen_scheduling_logup_trace(
log_size: u32,
    scheduling_col_1: &CircleEvaluation<SimdBackend, M31, BitReversedOrder>,
@@ -163,7 +165,7 @@ As you can see, the LogUp values of the input and output columns of both the sch
Next, let's check how the constraints are created.
-```rust,ignore
+```rust
impl FrameworkEval for SchedulingEval {
// --snip--
@@ -215,7 +217,7 @@ impl FrameworkEval for ComputingEval {
As you can see, we define the LogUp constraints for each component, and we also add two constraints that make sure the computations $x^3$ and $x^5 + 1$ are correct.
-```rust,ignore
+```rust
fn main() {
// --snip--
diff --git a/learn/S-two-book/air-development/dynamic-lookups/index.mdx b/learn/S-two-book/air-development/dynamic-lookups/index.mdx
index 07faec5e83..32827ce347 100644
--- a/learn/S-two-book/air-development/dynamic-lookups/index.mdx
+++ b/learn/S-two-book/air-development/dynamic-lookups/index.mdx
@@ -11,13 +11,14 @@ A permutation check simply checks that two sets of values have the same elements
If you went through the previous section, you should have a good intuition for how to implement this. First, create two original trace columns that each contain a random permutation of the same set of values. Then, create a LogUp column where the first original trace column is added as a fraction with multiplicity $1$ and the second original trace column is added as a fraction with multiplicity $-1$. Then, check that the `claimed_sum`, or the sum of the fractions in the two LogUp columns, is $0$.
+

-
*Figure 1: Permutation check*
+
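The multiset-equality idea behind the permutation check can be sketched with a toy calculation (plain Rust over the M31 field, hypothetical challenge `x`; not S-two's API): each original value contributes $\frac{1}{X - v}$, each permuted value contributes $\frac{-1}{X - v}$, and the sum is 0 exactly when the two columns hold the same multiset.

```rust
// Toy permutation check as a LogUp sum.
const P: u64 = (1 << 31) - 1; // M31 prime

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    r
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2)
}

/// Sum of 1/(x − v) over `original` minus the same sum over `permuted`.
fn permutation_sum(original: &[u64], permuted: &[u64], x: u64) -> u64 {
    let mut s = 0u64;
    for &v in original {
        s = (s + inv((x + P - v) % P)) % P;
    }
    for &v in permuted {
        s = (s + P - inv((x + P - v) % P)) % P;
    }
    s
}
```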
Let's move on to the implementation.
-```rust,ignore
+```rust
fn gen_trace(log_size: u32) -> Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> {
    let mut rng = rand::thread_rng();
    let values = (0..(1 << log_size)).map(|i| i).collect::<Vec<_>>();
@@ -88,7 +89,7 @@ Looking at the code above, we can see that it looks very similar to the implemen
Then, we create a LogUp column that contains the values $\frac{1}{original} - \frac{1}{permuted}$.
-```rust,ignore
+```rust
struct TestEval {
log_size: u32,
    lookup_elements: LookupElements<1>,
diff --git a/learn/S-two-book/air-development/index.mdx b/learn/S-two-book/air-development/index.mdx
index 3b1ce0e2ea..093ee36695 100644
--- a/learn/S-two-book/air-development/index.mdx
+++ b/learn/S-two-book/air-development/index.mdx
@@ -1,10 +1,10 @@
---
-title: "AIR Development"
+title: "Introduction"
---
> This section is intended for developers who want to create custom proofs using S-two (proofs of custom VMs, ML inference, etc.). It assumes that the reader is familiar with Rust and has some background knowledge of cryptography (e.g. finite fields). It also assumes that the reader is familiar with the concept of proof systems and knows what they want to create proofs for, but it does not assume any prior experience with creating them.
-
+
All the code that appears throughout this section is available [here](../stwo-examples/index).
-
+
diff --git a/learn/S-two-book/air-development/local-row-constraints/index.mdx b/learn/S-two-book/air-development/local-row-constraints/index.mdx
index 81aaa96585..9a7195f0be 100644
--- a/learn/S-two-book/air-development/local-row-constraints/index.mdx
+++ b/learn/S-two-book/air-development/local-row-constraints/index.mdx
@@ -13,7 +13,7 @@ We will implement this in three iterations, fixing a different issue in each ite
## First Try
-```rust,ignore
+```rust
fn evaluate(&self, mut eval: E) -> E {
let unsorted_col = eval.next_trace_mask();
let [sorted_col_prev_row, sorted_col_curr_row] =
@@ -44,13 +44,13 @@ We will implement this in three iterations, fixing a different issue in each ite
The logic for creating the trace and LogUp columns is basically the same as in the previous section (except that one of the columns is now sorted), so we omit them for brevity.
-Another change is in the `evaluate` function, where we call `eval.next_interaction_mask(ORIGINAL_TRACE_IDX, [-1, 0])` instead of `eval.next_trace_mask()`. The function `next_trace_mask()` is a wrapper for `next_interaction_mask(ORIGINAL_TRACE_IDX, [0])`, where the first parameter specifies which part of the trace to retrieve values from (see [this figure](../static-lookups/index.mdx#fig-range-check) for an example of the different parts of a trace). Since we want to retrieve values from the original trace, we set the value of the first parameter to `ORIGINAL_TRACE_IDX`. Next, the second parameter indicates the row offset of the value we want to retrieve. Since we want to retrieve both the previous and current row values for the sorted column, we set the value of the second parameter to `[-1, 0]`.
+Another change is in the `evaluate` function, where we call `eval.next_interaction_mask(ORIGINAL_TRACE_IDX, [-1, 0])` instead of `eval.next_trace_mask()`. The function `next_trace_mask()` is a wrapper for `next_interaction_mask(ORIGINAL_TRACE_IDX, [0])`, where the first parameter specifies which part of the trace to retrieve values from (see [this figure](../static-lookups/index#figure-1) for an example of the different parts of a trace). Since we want to retrieve values from the original trace, we set the value of the first parameter to `ORIGINAL_TRACE_IDX`. Next, the second parameter indicates the row offset of the value we want to retrieve. Since we want to retrieve both the previous and current row values for the sorted column, we set the value of the second parameter to `[-1, 0]`.
Once we have these values, we can now assert that the difference between the current and previous row is always one with the constraint: `E::F::one() - (sorted_col_curr_row.clone() - sorted_col_prev_row.clone())`.
-
-But this will fail with a `ConstraintsNotSatisfied` error, can you see why? (You can try running it yourself [here](../../stwo-examples/examples/local_row_constraints_fails_1.rs))
-
+
+But this will fail with a `ConstraintsNotSatisfied` error, can you see why? (You can try running it yourself [here](https://github.com/starknet-io/starknet-docs/blob/main/learn/S-two-book/stwo-examples/examples/local_row_constraints_fails_1.rs))
+
## Second Try
@@ -58,9 +58,9 @@ The issue was that when calling `evaluate` on the first row of our trace, the pr
This means that in our example, we are expecting the `0 - 15 = 1` constraint to hold, which is clearly not true.
-To fix this, we can use the `IsFirstColumn` preprocessed column that we created in the [Preprocessed Trace](../preprocessed-trace/index) section. So we will copy over the same code for creating the preprocessed column and modify our new constraint as follows:
+To fix this, we can use the `IsFirstColumn` preprocessed column that we created in the [Preprocessed Trace](../preprocessed-trace/index) section. So we will copy over the same code for creating the preprocessed column and modify our new constraint as follows:
-```rust,ignore
+```rust
let is_first_col = eval.get_preprocessed_column(self.is_first_id.clone());
eval.add_constraint(
@@ -71,7 +71,9 @@ To fix this, we can use the `IsFirstColumn` preprocessed column that we created
Now, we have a constraint that is disabled for the first row, which is exactly what we want.
-Still, however, this will fail with the same `ConstraintsNotSatisfied` error. (You can run it [here](../../stwo-examples/examples/local_row_constraints_fails_2.rs))
+
+Still, however, this will fail with the same `ConstraintsNotSatisfied` error. (You can run it [here](https://github.com/starknet-io/starknet-docs/blob/main/learn/S-two-book/stwo-examples/examples/local_row_constraints_fails_2.rs))
+
## Third Try
@@ -79,7 +81,7 @@ So when we were creating `CircleEvaluation` instances from our `BaseColumn` inst
Thus, every time we create a `CircleEvaluation` instance, we need to convert the order of the values in the `BaseColumn` beforehand.
-```rust,ignore
+```rust
impl IsFirstColumn {
...
impl IsFirstColumn {
@@ -112,10 +114,10 @@ fn gen_trace(log_size: u32) -> Vec
+
Things to consider when implementing constraints over multiple rows:
1. Change the order of elements in `BaseColumn` in-place via `bit_reverse_coset_to_circle_domain_order` before creating a `CircleEvaluation` instance. This is required because S-two assumes that the values are in the bit-reversed, circle domain order.
2. For the first row, the 'previous' row is the last row of the trace, so you may need to disable the constraint for the first row. This is typically done by using a preprocessed column.
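The reordering in point 1 can be illustrated with the bit-reversal half of the permutation (plain Rust; the actual `bit_reverse_coset_to_circle_domain_order` helper additionally accounts for the coset-to-circle-domain ordering, so this is only a sketch): index $i$ maps to the reversal of its $\log n$-bit representation.

```rust
/// Bit-reversal permutation of the indices 0..2^log_n (assumes 1 ≤ log_n ≤ 16).
fn bit_reverse_indices(log_n: u32) -> Vec<usize> {
    assert!((1..=16).contains(&log_n));
    (0..1usize << log_n)
        .map(|i| ((i as u32).reverse_bits() >> (32 - log_n)) as usize)
        .collect()
}

/// Reorders a column into bit-reversed order.
fn bit_reverse_column<T: Clone>(col: &[T], log_n: u32) -> Vec<T> {
    bit_reverse_indices(log_n)
        .into_iter()
        .map(|j| col[j].clone())
        .collect()
}
```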
-
+
diff --git a/learn/S-two-book/air-development/preprocessed-trace/index.mdx b/learn/S-two-book/air-development/preprocessed-trace/index.mdx
index 45775222a6..263640845a 100644
--- a/learn/S-two-book/air-development/preprocessed-trace/index.mdx
+++ b/learn/S-two-book/air-development/preprocessed-trace/index.mdx
@@ -15,19 +15,20 @@ $$
(1 - \text{selector}) \cdot \text{constraint}_1 + \text{selector} \cdot \text{constraint}_2 = 0
$$
+

-
*Figure 1: Preprocessed trace as a selector*
+
Another use case is to use the preprocessed trace to _express constant values used in constraints_. For example, when creating a hash function in an AIR, we often need round constants, which the verifier must be able to check; otherwise the resulting hash may be invalid. We can also "look up" the constant values as an optimization technique, which we will discuss in more detail in the next section.
In this section, we will explore how to implement a preprocessed trace as a selector, and we will implement the simplest form: a single `IsFirst` column, where the value is 1 for the first row and 0 for all other rows.
-
-Boilerplate code is omitted for brevity. Please refer to the [full example code](../../stwo-examples/examples/preprocessed_trace.rs) for the full implementation.
-
+
+Boilerplate code is omitted for brevity. Please refer to the [full example code](https://github.com/starknet-io/starknet-docs/blob/main/learn/S-two-book/stwo-examples/examples/preprocessed_trace.rs) for the full implementation.
+
-```rust,ignore
+```rust
struct IsFirstColumn {
pub log_size: u32,
}
@@ -54,7 +55,7 @@ impl IsFirstColumn {
First, we need to define an `IsFirstColumn` struct that will be used as a preprocessed trace. We will use the `gen_column()` function to generate a `CircleEvaluation` struct that is 1 for the first row and 0 for all other rows. The `id()` function is needed to identify this column when evaluating the constraints.
-```rust,ignore
+```rust
fn main() {
...
// Create and commit to the preprocessed trace
@@ -77,7 +78,7 @@ fn main() {
Then, in our main function, we will create and commit to the preprocessed and original traces. For those of you who are curious about why we need to commit to the trace, please refer to the [Committing to the Trace Polynomials](../writing-a-simple-air/committing-to-the-trace-polynomials) section.
-```rust,ignore
+```rust
struct TestEval {
is_first_id: PreProcessedColumnId,
log_size: u32,
@@ -116,7 +117,7 @@ Now that we have the traces, we need to create a struct that contains the logic
If you're unfamiliar with how `max_constraint_log_degree_bound(&self)` should be implemented, please refer to [this note](../writing-a-simple-air/constraints-over-trace-polynomials.mdx#max_constraint_log_degree_bound).
-```rust,ignore
+```rust
fn main() {
...
// Create a component
diff --git a/learn/S-two-book/air-development/static-lookups/index.mdx b/learn/S-two-book/air-development/static-lookups/index.mdx
index 02ce70cca8..4c4a382429 100644
--- a/learn/S-two-book/air-development/static-lookups/index.mdx
+++ b/learn/S-two-book/air-development/static-lookups/index.mdx
@@ -39,13 +39,14 @@ In S-two, these fractions (which we will hereafter refer to as _LogUp fractions_
We will now walk through the implementation of a static lookup, which is a lookup where the values that are being looked up are static, i.e. part of the preprocessed trace. Specifically, we will implement a **range-check AIR**, which checks that a certain value is within a given range. This is especially useful for frameworks like S-two that use finite fields because it allows checking for underflow and overflow.
-A range-check checks that all values in a column are within a certain range. For example, as in [Figure 1](#fig-range-check), we can check that all values in the range-checked columns are between 0 and 3. We do this by first creating a multiplicity column that counts the number of times each value in the preprocessed trace appears in the range-checked columns.
+A range-check checks that all values in a column are within a certain range. For example, as in [Figure 1](#figure-1), we can check that all values in the range-checked columns are between 0 and 3. We do this by first creating a multiplicity column that counts the number of times each value in the preprocessed trace appears in the range-checked columns.
Then, we create two LogUp columns as part of the interaction trace. The first column contains in each row a fraction with numerator equal to the multiplicity and denominator equal to the random linear combination of the value in the range column. For example, for row 1, the fraction should be $\frac{2}{X-0}$, where $X$ is a random value. The second column contains batches of fractions where the denominator of each fraction is the random linear combination of the value in the range-checked column. Note that the numerator of each fraction is always -1, i.e. we apply a negation, because we want the sum of the first column to be equal to the sum of the second column.
+

-
*Figure 1: Range-check lookup*
+
If we add all the fractions in the two columns together, we get 0. This means that the verifier will be convinced with high probability that the values in the range-checked columns are a subset of the values in the range column.
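The two-column sum can be verified with a toy calculation (plain Rust over the M31 field, hypothetical challenge `x`; not S-two's API): the range column contributes $\frac{m_v}{X - v}$ per range value $v$ with multiplicity $m_v$, and the range-checked column contributes $\frac{-1}{X - c}$ per cell, so the total is 0 exactly when the multiplicities are correct.

```rust
// Toy range-check LogUp sum.
const P: u64 = (1 << 31) - 1; // M31 prime

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    r
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2)
}

/// mult[v] is the multiplicity column; checked holds the range-checked cells.
fn range_check_sum(mult: &[u64], checked: &[u64], x: u64) -> u64 {
    let mut s = 0u64;
    // Range column side: m_v / (x − v).
    for (v, &m) in mult.iter().enumerate() {
        s = (s + m % P * inv((x + P - v as u64) % P)) % P;
    }
    // Range-checked side: −1 / (x − c) per cell.
    for &c in checked {
        s = (s + P - inv((x + P - c) % P)) % P;
    }
    s
}
```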
@@ -53,7 +54,7 @@ If we add all the fractions in the two columns together, we get 0. This means th
Now let's move on to the implementation where we create a 4-bit range-check AIR. We do this by creating a preprocessed trace column with the integers $[0, 16)$, then using a lookup to force the values in the original trace columns to lie in the values of the preprocessed column.
-```rust,ignore
+```rust
struct RangeCheckColumn {
pub log_size: u32,
}
@@ -79,7 +80,7 @@ impl RangeCheckColumn {
First, we need to create the range-check column as a preprocessed column. This should look familiar to the code from the previous section.
-```rust,ignore
+```rust
fn gen_trace(log_size: u32) -> Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> {
// Create a table with random values
let mut rng = rand::thread_rng();
@@ -113,7 +114,7 @@ fn gen_trace(log_size: u32) -> Vec

-
*Figure 2: Finalizing each LogUp column*
+
Finally, we need to call `finalize_last()` on the `LogupTraceGenerator` instance to finalize the LogUp columns, which will return the LogUp columns as well as the sum of the fractions in the LogUp columns.
-```rust,ignore
+```rust
struct TestEval {
range_check_id: PreProcessedColumnId,
log_size: u32,
@@ -243,7 +245,7 @@ The last piece of the puzzle is to create the constraints. We use the same `Test
Once we add the fractions as constraints, we call the `finalize_logup_batched` function, which indicates how we want to batch the fractions. In our case, we added 3 fractions but want to create batches where the last two fractions are batched together, so we pass in `&vec![0, 1, 1]`.
-```rust,ignore
+```rust
// Verify
assert_eq!(claimed_sum, SecureField::zero());
@@ -263,7 +265,7 @@ When we verify the proof, as promised, we check that the `claimed_sum`, which is
And that's it! We have successfully created a static lookup for a range-check.
-
+
**How many fractions can we batch together?**
This depends on how we set the `max_constraint_log_degree_bound` function, as discussed in this [note](../writing-a-simple-air/constraints-over-trace-polynomials.mdx#max_constraint_log_degree_bound). More specifically, we can batch at most as many fractions as the blowup factor.
@@ -275,8 +277,8 @@ e.g.
- `self.log_size + 3` -> 8 fractions
- `self.log_size + 4` -> 16 fractions
- ...
-
+
-
-Note that unlike what [Figure 1](#fig-range-check) shows, the size of the range column and the range-checked columns do not have to be the same. As we will learn in the [Components](../components/index) section, we can create separate components for the range-check and the range-checked columns to support such cases.
-
+
+Note that unlike what [Figure 1](#figure-1) shows, the size of the range column and the range-checked columns do not have to be the same. As we will learn in the [Components](../components/index) section, we can create separate components for the range-check and the range-checked columns to support such cases.
+
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/committing-to-the-trace-polynomials.mdx b/learn/S-two-book/air-development/writing-a-simple-air/committing-to-the-trace-polynomials.mdx
index cae410a342..0db562e3ea 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/committing-to-the-trace-polynomials.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/committing-to-the-trace-polynomials.mdx
@@ -2,16 +2,16 @@
title: "Committing to the Trace Polynomials"
---
-
+

-
*Figure 1: Prover workflow: Commitment*
+
Now that we have created the trace polynomials, we need to commit to them.
-As we can see in [Figure 1](#fig-committing-to-the-trace-polynomials-1), S-two commits to the trace polynomials by first expanding the trace polynomials (i.e. adding more evaluations) and then committing to the expanded evaluations using a Merkle tree. The rate of expansion (commonly referred to as the _blowup factor_) is a parameter of the FRI protocol and for the purposes of this tutorial, we will use the default value.
+As we can see in [Figure 1](#figure-1), S-two commits to the trace polynomials by first expanding the trace polynomials (i.e. adding more evaluations) and then committing to the expanded evaluations using a Merkle tree. The rate of expansion (commonly referred to as the _blowup factor_) is a parameter of the FRI protocol and for the purposes of this tutorial, we will use the default value.
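The expand-then-commit shape can be sketched with a toy Merkle commitment. In the sketch below (plain Rust), std's `DefaultHasher` stands in for a collision-resistant hash and the "expansion" simply repeats values; the real prover low-degree-extends the polynomial and hashes with a cryptographic function, so this is illustrative structure only, not S-two's commitment scheme.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// NOT cryptographic: stand-in hash for the sketch.
fn h(data: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// Merkle root over a power-of-two number of leaves.
fn merkle_root(leaves: &[u64]) -> u64 {
    assert!(leaves.len().is_power_of_two());
    let mut layer: Vec<u64> = leaves.iter().map(|&v| h(&[v])).collect();
    while layer.len() > 1 {
        layer = layer.chunks(2).map(h).collect();
    }
    layer[0]
}

/// Stand-in "blowup": expand n evaluations to n·2^log_blowup, then commit.
fn commit(evals: &[u64], log_blowup: u32) -> u64 {
    let expanded: Vec<u64> = evals
        .iter()
        .flat_map(|&v| std::iter::repeat(v).take(1 << log_blowup))
        .collect();
    merkle_root(&expanded)
}
```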
-```rust,ignore
+```rust
const LOG_CONSTRAINT_EVAL_BLOWUP_FACTOR: u32 = 1;
fn main() {
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/constraints-over-trace-polynomials.mdx b/learn/S-two-book/air-development/writing-a-simple-air/constraints-over-trace-polynomials.mdx
index 6fd3c1732f..b87287a62b 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/constraints-over-trace-polynomials.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/constraints-over-trace-polynomials.mdx
@@ -3,9 +3,10 @@ title: "Evaluating Constraints Over Trace Polynomials"
---
+

-
*Figure 1: Prover workflow: Constraints*
+
## Proving Spreadsheet Functions
@@ -13,15 +14,16 @@ When we want to perform computations over the cells in a spreadsheet, we don't w
We can do the same thing with our table, except in addition to autofilling cells, we can also create a constraint that the result was computed correctly. Remember that the purpose of using a proof system is that the verifier can verify a computation was executed correctly without having to execute it themselves? Well, that's exactly why we need to create a constraint.
-Now let's say we want to add a new column `C` to our spreadsheet that computes the product of the previous columns plus the first column. We can set `C1` as `A1 * B1 + A1` as in [Figure 2](#fig-constraints-over-trace-polynomials-2). The corresponding constraint is expressed as `C1 = A1 * B1 + A1`. However, we use an alternate representation `A1 * B1 + A1 - C1 = 0` because we can only enforce constraints stating that an expression should equal zero. Generalizing this constraint to the whole column, we get `col1_row1 * col2_row1 + col1_row1 - col3_row1 = 0`.
+Now let's say we want to add a new column `C` to our spreadsheet that computes the product of the previous columns plus the first column. We can set `C1` as `A1 * B1 + A1` as in [Figure 2](#figure-2). The corresponding constraint is expressed as `C1 = A1 * B1 + A1`. However, we use an alternate representation `A1 * B1 + A1 - C1 = 0` because we can only enforce constraints stating that an expression should equal zero. Generalizing this constraint to the whole column, we get `col1_row1 * col2_row1 + col1_row1 - col3_row1 = 0`.
+

-
*Figure 2: Proving spreadsheet functions as constraints*
+
## Identical Constraints Every Row
-Obviously, as can be seen in [Figure 2](#fig-constraints-over-trace-polynomials-2), our new constraint is satisfied for every row in the table. This means that we can substitute creating a constraint for each row with a single constraint over the columns, i.e. the trace polynomials.
+As can be seen in [Figure 2](#figure-2), our new constraint is satisfied for every row in the table. This means that we can replace a separate constraint for each row with a single constraint over the columns, i.e. the trace polynomials.
Thus, `col1_row1 * col2_row1 + col1_row1 - col3_row1 = 0` becomes:
@@ -29,11 +31,11 @@ $$
f_1(x,y) \cdot f_2(x,y) + f_1(x,y) - f_3(x,y) = 0
$$
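To make this concrete, here is a small sketch in plain Python. The column values are illustrative (not from the book, and over ordinary integers rather than M31), but the zero-form of the constraint is checked exactly as stated above:

```python
# Illustrative columns; column C is autofilled as A * B + A.
col1 = [1, 2, 3, 4]
col2 = [5, 6, 7, 8]
col3 = [a * b + a for a, b in zip(col1, col2)]

# The same constraint holds for every row: col1 * col2 + col1 - col3 = 0.
for a, b, c in zip(col1, col2, col3):
    assert a * b + a - c == 0
```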
-
+
The idea that all rows must have the same constraint may seem restrictive compared to, say, a spreadsheet where we can define different functions for different rows. However, we will show in later sections how to handle such use-cases.
(Spoiler alert: it involves selectors and components)
-
+
## Composition Polynomial
@@ -45,15 +47,15 @@ $$
Basically, in order to prove that the constraints are satisfied, we need to show that the composition polynomial evaluates to 0 over the original domain (i.e. the domain whose size equals the number of rows in the table).
-But first, as can be seen in the upper part of [Figure 1](#fig-constraints-over-trace-polynomials-1), we need to expand the evaluations of the trace polynomials by a factor of 2. This is because when you multiply two trace polynomials of degree `n-1` (where `n` is the number of rows) to compute the constraint polynomial, the degree of the constraint polynomial will be the sum of the degrees of the trace polynomials, which is `2n-2`. To adjust for this increase in degree, we double the number of evaluations.
+But first, as can be seen in the upper part of [Figure 1](#figure-1), we need to expand the evaluations of the trace polynomials by a factor of 2. This is because when you multiply two trace polynomials of degree `n-1` (where `n` is the number of rows) to compute the constraint polynomial, the degree of the constraint polynomial will be the sum of the degrees of the trace polynomials, which is `2n-2`. To adjust for this increase in degree, we double the number of evaluations.
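As a sanity check on the degree arithmetic, here is a short sketch (plain Python, illustrative coefficients) multiplying two degree-`n-1` polynomials and confirming the product has degree `2n-2`, which is why twice as many evaluations are needed:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

n = 16                    # number of rows
f1 = [1] * n              # degree n-1 (illustrative coefficients)
f2 = [2] * n              # degree n-1
product = poly_mul(f1, f2)
assert len(product) - 1 == 2 * n - 2   # degree 2n-2
# A degree-(2n-2) polynomial is determined by 2n-1 evaluations,
# so doubling the n evaluations (a blowup factor of 2) is enough.
```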
-Once we have the expanded evaluations, we can evaluate the composition polynomial $C(x,y)$. Since we need to do a FRI operation on the composition polynomial as well, we expand the evaluations again by a factor of 2 and commit to them as a merkle tree. This part corresponds to the bottom part of [Figure 1](#fig-constraints-over-trace-polynomials-1).
+Once we have the expanded evaluations, we can evaluate the composition polynomial $C(x,y)$. Since we need to run FRI on the composition polynomial as well, we expand the evaluations again by a factor of 2 and commit to them as a Merkle tree. This part corresponds to the bottom part of [Figure 1](#figure-1).
## Implementation
Let's see how this is implemented in the code.
-```rust,ignore
+```rust
struct TestEval {
log_size: u32,
}
@@ -108,15 +110,16 @@ First, we add a new column `col_3` that contains the result of the computation:
Then, to create a constraint over the trace polynomials, we first create a `TestEval` struct that implements the `FrameworkEval` trait. Then, we add our constraint logic in the `FrameworkEval::evaluate` function. Note that this function is called for every row in the table, so we only need to define the constraint once.
-Inside `FrameworkEval::evaluate`, we call `eval.next_trace_mask()` consecutively three times, retrieving the cell values of all three columns (see [Figure 3](#fig-constraints-over-trace-polynomials-3) below for a visual representation). Once we retrieve all three column values, we add a constraint of the form `col_1 * col_2 + col_1 - col_3`, which should equal 0. Note that `FrameworkEval::evaluate` will be called for every row in the table.
+Inside `FrameworkEval::evaluate`, we call `eval.next_trace_mask()` consecutively three times, retrieving the cell values of all three columns (see [Figure 3](#figure-3) below for a visual representation). Once we retrieve all three column values, we add a constraint of the form `col_1 * col_2 + col_1 - col_3`, which should equal 0. Note that `FrameworkEval::evaluate` will be called for every row in the table.
+

-
*Figure 3: Evaluate function*
+
We also need to implement `FrameworkEval::max_constraint_log_degree_bound(&self)` for `FrameworkEval`. As mentioned in the [Composition Polynomial section](#composition-polynomial), we need to expand the trace polynomial evaluations because the degree of our composition polynomial is higher than that of the trace polynomial. Expanding it by `LOG_CONSTRAINT_EVAL_BLOWUP_FACTOR=1` is sufficient for our example as the total degree of the highest degree term $f_1(x,y) \cdot f_2(x,y)$ is 2, so we return `self.log_size + LOG_CONSTRAINT_EVAL_BLOWUP_FACTOR`. For those who are interested in how to set this value in general, we leave a detailed note below.
-
+
**What value to set for `max_constraint_log_degree_bound(&self)`?**
`self.log_size + max(1, ceil(log2(max_degree - 1)))`, where `max_degree` is the maximum total degree of all defined constraint polynomials. For example, the `max_degree` of constraint $f_1(x,y) \cdot f_2(x,y) = 0$ is 2, while that of $f_1(x,y) \cdot f_1(x,y) \cdot f_2(x,y) \cdot f_3(x,y) = 0$ is 4.
@@ -127,13 +130,13 @@ e.g.
- degree 6 - 9: `self.log_size + 3`
- degree 10 - 17: `self.log_size + 4`
- ...
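The rule above can be sketched as a small helper (a hypothetical function for illustration, not part of the S-two API):

```python
import math

def extra_log_size(max_degree: int) -> int:
    """What to add to self.log_size for a constraint of total degree max_degree,
    mirroring max(1, ceil(log2(max_degree - 1)))."""
    return max(1, math.ceil(math.log2(max_degree - 1)))

assert extra_log_size(2) == 1                              # e.g. f1 * f2
assert all(extra_log_size(d) == 3 for d in range(6, 10))   # degree 6 - 9
assert all(extra_log_size(d) == 4 for d in range(10, 18))  # degree 10 - 17
```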
-
+
-
+
Now that we know the degree of the composition polynomial, we can see why we need to set the `log_size` of the domain to `log_num_rows + LOG_CONSTRAINT_EVAL_BLOWUP_FACTOR + config.fri_config.log_blowup_factor` when precomputing twiddles in the following code:
-
+
-```rust,ignore
+```rust
// Precompute twiddles for evaluating and interpolating the trace
let twiddles = SimdBackend::precompute_twiddles(
CanonicCoset::new(
@@ -155,7 +158,7 @@ Using the new `TestEval` struct, we can create a new `FrameworkComponent::
+
Finally, we can break down what an Algebraic Intermediate Representation (AIR) means.
*Algebraic* means that we are using polynomials to represent the constraints.
@@ -163,4 +166,4 @@ Finally, we can break down what an Algebraic Intermediate Representation (AIR) m
*Intermediate Representation* means that this is a modified representation of our statement so that it can be proven.
So AIR is just another way of saying that we are representing statements to be proven as constraints over polynomials.
-
+
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/from-spreadsheet-to-trace-polynomials.mdx b/learn/S-two-book/air-development/writing-a-simple-air/from-spreadsheet-to-trace-polynomials.mdx
index 30e81c379e..91e6e7e121 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/from-spreadsheet-to-trace-polynomials.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/from-spreadsheet-to-trace-polynomials.mdx
@@ -3,21 +3,22 @@ title: "From Spreadsheet to Trace Polynomials"
---
+

-
*Figure 1: Prover workflow: Trace polynomials*
-
+
In the previous section, we created a table (aka spreadsheet). In this section, we will convert the table into something called trace polynomials.
+

-
*Figure 2: From spreadsheet to trace polynomials*
+
In STARKs, the computation trace is represented as evaluations of a polynomial over some domain. Typically this domain is a coset of a multiplicative subgroup. But since the multiplicative subgroup of M31 is not smooth, S-two works over the circle group, which is a subgroup of the degree-2 extension of M31 (as explained in the [Mersenne Primes](../../how-it-works/mersenne-prime) and [Circle Group](../../how-it-works/circle-group) sections). Thus the domain in S-two consists of points $(x_i, y_i)$ on the circle curve. Note that when we interpolate a polynomial over the points on the circle curve, we get a bivariate trace polynomial $f_j(x, y)$.
We will explain why using a polynomial representation is useful in the next section, but for now, let's see how we can create trace polynomials for our code. Note that we are building upon the code from the previous section, so there's not much new code here.
-```rust,ignore
+```rust
fn main() {
// --snip--
@@ -31,6 +32,6 @@ fn main() {
}
```
-Here, `domain` refers to the $(x_i, y_i)$ values used to interpolate the trace polynomials. For example, $(x_1, y_1), (x_2, y_2)$ in [Figure 2](#fig-from-spreadsheet-to-trace-polynomials-2) are the domain values for our example. Note that when creating the domain, we set the `log_num_rows` to the log of the actual number of rows that are used in the table. In our example, we set it to 4 since S-two requires that we use at least 16 rows. For a background on what `CanonicCoset` and `.circle_domain()` mean, you can refer to the [Circle Group](../../how-it-works/circle-group) section.
+Here, `domain` refers to the $(x_i, y_i)$ values used to interpolate the trace polynomials. For example, $(x_1, y_1), (x_2, y_2)$ in [Figure 2](#figure-2) are the domain values for our example. Note that when creating the domain, we set the `log_num_rows` to the log of the actual number of rows used in the table. In our example, we set it to 4 since S-two requires at least 16 rows. For background on what `CanonicCoset` and `.circle_domain()` mean, you can refer to the [Circle Group](../../how-it-works/circle-group) section.
Now that we have created 2 trace polynomials for our 2 columns, let's move on to the next section where we commit to those polynomials!
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/index.mdx b/learn/S-two-book/air-development/writing-a-simple-air/index.mdx
index fa979cbad2..4c3462f6f8 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/index.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/index.mdx
@@ -1,11 +1,12 @@
---
-title: "First Breath of AIR"
+title: "Introduction"
---
+

-
*Figure 1: Proving lifecycle in S-two*
+
Welcome to the guide for writing AIRs in S-two!
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/proving-an-air.mdx b/learn/S-two-book/air-development/writing-a-simple-air/proving-an-air.mdx
index 4a86308153..3fb878a2b8 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/proving-an-air.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/proving-an-air.mdx
@@ -2,16 +2,15 @@
title: "Proving and Verifying an AIR"
---
-
+

-
*Figure 1: Prover workflow: perform FRI and PoW*
-
+
We're finally ready for the last step: prove and verify an AIR!
Since the code is relatively short, let us present it first and then go over the details.
-```rust,ignore
+```rust
fn main() {
// --snip--
@@ -33,14 +32,16 @@ fn main() {
## Prove
-As you can see, there is only a single line of code added to create the proof. The `prove` function performs the FRI and PoW operations under the hood, although, technically, the constraint-related steps in [Figure 1](#fig-proving-an-air-1) were not performed in the previous section and are only performed once `prove` is called.
+As you can see, there is only a single line of code added to create the proof. The `prove` function performs the FRI and PoW operations under the hood, although, technically, the constraint-related steps in [Figure 1](#figure-1) were not performed in the previous section and are only performed once `prove` is called.
## Verify
In order to verify our proof, we need to check that the constraints are satisfied using the commitments from the proof. In order to do that, we need to set up a `Blake2sChannel` and `CommitmentSchemeVerifier`, along with the same `PcsConfig` that we used when creating the proof. Then, we need to recreate the Fiat-Shamir channel by passing the Merkle tree commitments and the `log_num_rows` to the `CommitmentSchemeVerifier` instance by calling `commit` (remember: the order is important!). Then, we can verify the proof using the `verify` function.
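The order-sensitivity of the Fiat-Shamir channel can be illustrated with a toy sketch (plain Blake2s hashing, not S-two's actual `Blake2sChannel`): absorbing the same commitments in a different order yields a different challenge, so the verifier would draw different randomness than the prover did.

```python
import hashlib

def channel_digest(commitments):
    """Absorb commitments in order; the drawn challenge depends on that order."""
    h = hashlib.blake2s()
    for c in commitments:
        h.update(c)
    return h.digest()

a, b = b"preprocessed-root", b"original-root"
assert channel_digest([a, b]) != channel_digest([b, a])  # order matters
```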
-
-Try setting the dummy values in the table to 1 instead of 0. Does it fail? If so, can you see why?
+
+Try setting the dummy values in the table to 1 instead of 0.
+
+Does it fail? If so, can you see why?
Congratulations! We have come full circle. We now know how to create a table, convert it to trace polynomials, commit to them, create constraints over the trace polynomials, and prove and verify the constraints (i.e. an AIR). In the following sections, we will go over some more complicated AIRs to explain S-two's other features.
diff --git a/learn/S-two-book/air-development/writing-a-simple-air/writing-a-spreadsheet.mdx b/learn/S-two-book/air-development/writing-a-simple-air/writing-a-spreadsheet.mdx
index 8132095fe6..1ab87a69cc 100644
--- a/learn/S-two-book/air-development/writing-a-simple-air/writing-a-spreadsheet.mdx
+++ b/learn/S-two-book/air-development/writing-a-simple-air/writing-a-spreadsheet.mdx
@@ -3,27 +3,31 @@ title: "Writing a Spreadsheet"
---
+

-
*Figure 1: Prover workflow: Create a table*
+
-In order to write a proof, we first need to create a table of rows and columns. This is no different from writing integers to an Excel spreadsheet as we can see in [Figure 2](#fig-writing-a-spreadsheet-2).
+In order to write a proof, we first need to create a table of rows and columns. This is no different from writing integers to an Excel spreadsheet as we can see in [Figure 2](#figure-2).
+

-
*Figure 2: From spreadsheet to table*
+
But there is a slight caveat to consider when creating the table. S-two implements [SIMD operations](https://en.wikipedia.org/wiki/Single_instruction,_multiple_data) to speed up the prover on the CPU, but this requires providing the table cells in chunks of 16 rows. Simply put, this is because S-two supports 16 lanes of 32-bit integers, which means that the same instruction can be run simultaneously on 16 different data elements.
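The 16-row requirement can be sketched roughly as follows (a plain-Python illustration, not S-two's actual packing code):

```python
N_LANES = 16  # S-two packs 16 32-bit values into one SIMD word

def pad_column(values, fill=0):
    """Pad a column with dummy values so its length is a multiple of N_LANES."""
    remainder = len(values) % N_LANES
    if remainder:
        values = values + [fill] * (N_LANES - remainder)
    return values

col = pad_column([1, 7])   # our 2 real rows
assert len(col) == 16      # 14 dummy rows were appended
```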
-Alas, for our table, we will need to create 14 dummy rows to make the total number of rows equal to 16, as shown in [Figure 3](#fig-writing-a-spreadsheet-3). For the sake of simplicity, however, we will omit the dummy rows in the diagrams of the following sections.
+Alas, for our table, we will need to create 14 dummy rows to make the total number of rows equal to 16, as shown in [Figure 3](#figure-3). For the sake of simplicity, however, we will omit the dummy rows in the diagrams of the following sections.
+

-
*Figure 3: Creating a table with 16 rows*
+
+
Given all that, let's create this table using S-two.
-```rust,ignore
+```rust
use stwo::prover::{
backend::{
simd::{column::BaseColumn, m31::N_LANES},
diff --git a/learn/S-two-book/cairo-air/add-opcode/index.mdx b/learn/S-two-book/cairo-air/add-opcode/index.mdx
index aba4756efc..05d8a3fea4 100644
--- a/learn/S-two-book/cairo-air/add-opcode/index.mdx
+++ b/learn/S-two-book/cairo-air/add-opcode/index.mdx
@@ -5,13 +5,14 @@ title: "ADD Opcode Walkthrough"
To better understand how `Opcode` components work, let's walk through the implementation of the `AddOpcode` component.
+

-
*Figure 1: The `AddOpcode` columns*
+
Above is the list of all the columns used in the `AddOpcode` component. Note that the `dst_val`, `op0_val`, and `op1_val` columns are actually 28 columns each (to support 28 9-bit limbs), but we show them as single columns for brevity.
-To reiterate what an `Opcode` component does from the [Main Components](../main-components/index.mdx#opcode-component) section, it verifies the following:
+To reiterate what an `Opcode` component does from the [Main Components](../main-components/index#opcode-component) section, it verifies the following:
1. The offsets and flag values are correct using the `VerifyInstruction` component.
2. The instruction is correctly mapped to the current `Opcode` component using the flags.
@@ -19,25 +20,27 @@ To reiterate what an `Opcode` component does from the [Main Components](../main-
4. The operation for the current `Opcode` component is done correctly.
5. The state transition of the three registers (`pc`, `ap`, `fp`) is done correctly using the flags.
-
-An instruction should contain 15 bits of flags but only 5 bits are represented in the `AddOpcode` columns. Can you see why?
+
+An instruction should contain 15 bits of flags but only 5 bits are represented in the `AddOpcode` columns.
+
+Can you see why?
-
+
Check out how a Cairo instruction is pattern-matched to an `ADD` opcode [here](https://github.com/starkware-libs/stwo-cairo/blob/04f142e0376e2a0e797cdb64973449aa3e58a18f/stwo_cairo_prover/crates/adapter/src/opcodes.rs#L349).
-
+
-
+
Check out how the decomposition of the flags is verified [here](https://github.com/starkware-libs/stwo-cairo/blob/04f142e0376e2a0e797cdb64973449aa3e58a18f/stwo_cairo_prover/crates/cairo-air/src/components/subroutines/decode_instruction_bc3cd.rs#L59).
-
+
-Items 1 and 3 should be familiar, as we already covered them in the [Main Components](../main-components/index.mdx#opcode-component) section. For item 2, you can check the specs for the `ADD` opcode [here](https://github.com/starkware-libs/stwo-cairo/blob/04f142e0376e2a0e797cdb64973449aa3e58a18f/stwo_cairo_prover/crates/adapter/src/opcodes.rs#L349). For item 5, the specs for a valid state transition can be found in Section 4.5 of the [Cairo paper](https://eprint.iacr.org/2021/1063.pdf).
+Items 1 and 3 should be familiar, as we already covered them in the [Main Components](../main-components/index#opcode-component) section. For item 2, you can check the specs for the `ADD` opcode [here](https://github.com/starkware-libs/stwo-cairo/blob/04f142e0376e2a0e797cdb64973449aa3e58a18f/stwo_cairo_prover/crates/adapter/src/opcodes.rs#L349). For item 5, the specs for a valid state transition can be found in Section 4.5 of the [Cairo paper](https://eprint.iacr.org/2021/1063.pdf).
In this section, we will focus on how item 4 is implemented.
## Adding two 252-bit integers
-Assuming that the operands `op0`, `op1`, and `dst` are correctly accessed from the `Memory` table, we now check that the addition of two 252-bit integers is done correctly, i.e. `op0 + op1 = dst`. As noted in the [Felt252 to M31](../basic-building-blocks/index.mdx#felt252-to-m31) section, a 252-bit integer is stored as 28 9-bit limbs, so we need to check addition for each of the 28 limbs.
+Assuming that the operands `op0`, `op1`, and `dst` are correctly accessed from the `Memory` table, we now check that the addition of two 252-bit integers is done correctly, i.e. `op0 + op1 = dst`. As noted in the [Felt252 to M31](../basic-building-blocks/index#felt252-to-m31) section, a 252-bit integer is stored as 28 9-bit limbs, so we need to check addition for each of the 28 limbs.
We will incrementally build up to the final constraint.
@@ -45,7 +48,7 @@ We will incrementally build up to the final constraint.
To verify that the two sets of limbs are correctly added, we check limb-wise addition. Since each limb can create a carry, we also add the carry from the previous limb (except for the first limb). Thus, the constraints look like this:
-```
+```python
carry_limb_1 = (op0[0] + op1[0] - dst[0]) / 2^9
carry_limb_1 * (carry_limb_1 - 1) = 0
@@ -65,7 +68,7 @@ We also need to account for addition overflowing the 252-bit prime field `P` (i.
Now let's revisit the constraints:
-```
+```python
sub_p_bit * (sub_p_bit - 1) = 0
carry_limb_1 = (op0[0] + op1[0] - dst[0] - sub_p_bit) / 2^9
@@ -85,7 +88,7 @@ First, we verify that `sub_p_bit` is a bit. Then, we subtract `sub_p_bit` from t
A caveat of this approach is that subtracting `sub_p_bit` can introduce an underflow, i.e., `(op0[0] + op1[0] - dst[0] - sub_p_bit) / 2^9 = -1`. This means that `carry_limb_1` can be `-1` as well as `0` or `1`. Thus, we update the constraint for all carries to the following:
-```
+```python
...
carry_limb_1 * (carry_limb_1 - 1) * (carry_limb_1 + 1) = 0
@@ -97,7 +100,7 @@ carry_limb_1 * (carry_limb_1 - 1) * (carry_limb_1 + 1) = 0
To optimize the number of constraints, we can combine all the constraints for each limb into a single constraint. Naively checking that `carry_limb_1 = (op0[0] + op1[0] - dst[0] - sub_p_bit) / 2^9` would require a dedicated column for `carry_limb_1`. Instead, we keep `carry_limb_1` as an intermediate value and inline the equation when computing the next carry. For example, the second-limb carry is computed as follows:
-```
+```python
carry_limb_2 = (op0[1] + op1[1] + ((op0[0] + op1[0] - dst[0] - sub_p_bit) / 2^9) - dst[1]) / 2^9
```
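To convince ourselves that the carry recurrence works end to end, here is a hedged sketch in plain Python over ordinary integers (not the actual AIR code). It decomposes 252-bit values into 28 9-bit limbs and checks that every carry lands in `{-1, 0, 1}`. One detail the simplified constraints above gloss over: subtracting `sub_p_bit * P` touches every limb where `P` has a nonzero 9-bit digit, not just the first one.

```python
P = 2**251 + 17 * 2**192 + 1
N_LIMBS, BASE = 28, 2**9

def limbs(x):
    """Decompose x into 28 little-endian 9-bit limbs."""
    return [(x >> (9 * i)) % BASE for i in range(N_LIMBS)]

def check_add(op0, op1):
    """Verify op0 + op1 = dst (mod P) via the limb-wise carry recurrence."""
    dst = (op0 + op1) % P
    sub_p_bit = 1 if op0 + op1 >= P else 0
    o0, o1, d, p = limbs(op0), limbs(op1), limbs(dst), limbs(P)
    carry = 0
    for i in range(N_LIMBS):
        t = o0[i] + o1[i] + carry - sub_p_bit * p[i] - d[i]
        assert t % BASE == 0        # each limb equation divides cleanly by 2^9
        carry = t // BASE
        assert carry in (-1, 0, 1)  # carries may underflow to -1
    assert carry == 0               # no leftover carry past the top limb

check_add(3, 4)          # no overflow
check_add(P - 1, 5)      # wraps around the prime
check_add(P - 1, P - 1)  # exercises an underflowing carry
```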
@@ -111,7 +114,7 @@ One last note: in the implementation, we replace division by `2^9` with multipli
So the final constraint looks like this:
-```
+```python
sub_p_bit * (sub_p_bit - 1) = 0
carry_limb_1 = (op0[0] + op1[0] - dst[0] - sub_p_bit) * 2^22 // intermediate representation
diff --git a/learn/S-two-book/cairo-air/basic-building-blocks/index.mdx b/learn/S-two-book/cairo-air/basic-building-blocks/index.mdx
index f15008da14..ca44908c80 100644
--- a/learn/S-two-book/cairo-air/basic-building-blocks/index.mdx
+++ b/learn/S-two-book/cairo-air/basic-building-blocks/index.mdx
@@ -11,6 +11,6 @@ Cairo works over the prime field $P = 2^{251} + 17 \cdot 2^{192} + 1$, while S-t
## Range checks
-Range-checks are very commonly used in the Cairo AIR. They are used to ensure that the witness values are within a certain range, most commonly within a certain bit length. For example, in the [Felt252 to M31](../basic-building-blocks/index.mdx#felt252-to-m31) section, we saw that a 252-bit integer is decomposed into 28 9-bit limbs, so we need to verify that each limb is in the range $0 \leq \text{limb} < 2^{9}$.
+Range-checks are very commonly used in the Cairo AIR. They are used to ensure that the witness values are within a certain range, most commonly within a certain bit length. For example, in the previous section, we saw that a 252-bit integer is decomposed into 28 9-bit limbs, so we need to verify that each limb is in the range $0 \leq \text{limb} < 2^{9}$.
-This is done by using a preprocessed column that contains the entire range of possible values for the bit length. For example, for a 9-bit range check, the column will contain the values from 0 to $2^9 - 1$. We also have another column that contains the number of times the range-check was invoked for each valid value and we use lookups to check that each range-check is valid. For a more practical example, please refer to the [Static Lookups](../../air-development/static-lookups/index) section.
+This is done by using a preprocessed column that contains the entire range of possible values for the bit length. For example, for a 9-bit range check, the column will contain the values from 0 to $2^9 - 1$. We also have another column that contains the number of times the range-check was invoked for each valid value and we use lookups to check that each range-check is valid. For a more practical example, please refer to the [Static Lookups](../air-development/static-lookups/index) section.
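A toy sketch of this mechanism, using exact rational arithmetic to emulate a LogUp-style multiset check (in the real protocol the challenge is drawn from a secure field extension, and the sums are the per-component "claimed sums"):

```python
from fractions import Fraction

BITS = 9
table = list(range(2**BITS))              # preprocessed column: 0 .. 511
witness = [0, 511, 42, 42, 7]             # values claimed to be 9-bit
mult = [witness.count(v) for v in table]  # prover-supplied multiplicities

alpha = 104729                            # stand-in for a random challenge
lhs = sum(Fraction(1, alpha - w) for w in witness)
rhs = sum(Fraction(m, alpha - t) for t, m in zip(table, mult) if m)
assert lhs == rhs                         # the two lookup sums cancel
```

If the witness contained an out-of-range value, no choice of multiplicities over the table could make the two sums match.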
diff --git a/learn/S-two-book/cairo-air/cairo/index.mdx b/learn/S-two-book/cairo-air/cairo/index.mdx
index 3d2fc30e90..4a0d33d05d 100644
--- a/learn/S-two-book/cairo-air/cairo/index.mdx
+++ b/learn/S-two-book/cairo-air/cairo/index.mdx
@@ -3,9 +3,9 @@ title: "Overview of Cairo"
---
-
+
This is an informal overview of Cairo. For a more formal explanation, please refer to the original [Cairo paper](https://eprint.iacr.org/2021/1063.pdf).
-
+
Let's start by understanding how Cairo works. Essentially, Cairo is a Turing-complete CPU architecture specifically designed to enable efficient proofs of execution using STARKs. In particular, Cairo uses a read-only memory model instead of the more common read-write memory model and does not use any general-purpose registers.
@@ -27,17 +27,18 @@ In physical CPUs, accessing memory is expensive compared to accessing registers
Let's now see what a Cairo instruction looks like.
+

-
*Figure 1: Cairo instruction (little-endian)*
+
-As the figure above from the [Cairo paper](https://eprint.iacr.org/2021/1063.pdf) shows, an instruction is 64 bits, where the first three 16-bit integers are signed offsets to the operands `dst`, `op0`, and `op1`.
+As the figure above from the Cairo paper shows, an instruction is 64 bits, where the first three 16-bit integers are signed offsets to the operands `dst`, `op0`, and `op1`.
The next 15 bits are flags. The `dst_reg` and `op0_reg` 1-bit flags indicate whether to use the `ap` or the `fp` register as the base for the `dst` and `op0` operands. The `op1_src` flag supports a wider range of base values for the `op1` operand: `op0`, `pc`, `fp`, and `ap`. The `res_logic` flag indicates how to compute the `res` operand: `op1`, `op0 + op1`, or `op0 * op1`. The `pc_update` and `ap_update` flags show how to update the `pc` and `ap` registers after computing the operands. The `opcode` flag indicates whether this instruction belongs to a predefined opcode (e.g., `CALL`, `RET`, `ASSERT_EQ`) and also defines how the `ap` and `fp` registers should be updated.
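A rough sketch of decoding such an instruction (hypothetical helper; the offsets are stored biased by $2^{15}$ per the Cairo paper, and the field layout below follows the figure's bit order):

```python
def decode(instruction: int):
    """Split a 64-bit Cairo instruction into offsets and flags (sketch)."""
    off_dst = (instruction & 0xFFFF) - 2**15          # signed offsets are
    off_op0 = ((instruction >> 16) & 0xFFFF) - 2**15  # stored biased by 2^15
    off_op1 = ((instruction >> 32) & 0xFFFF) - 2**15
    flags = (instruction >> 48) & 0x7FFF              # the 15 flag bits
    assert instruction >> 63 == 0                     # last bit fixed to 0
    return off_dst, off_op0, off_op1, flags

# An instruction with all three offsets equal to 0: each field holds the bias.
word = 2**15 | (2**15 << 16) | (2**15 << 32)
assert decode(word) == (0, 0, 0, 0)
```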
-
-For a more detailed explanation of the flags, please refer to Section 4.5 of the [Cairo paper](https://eprint.iacr.org/2021/1063.pdf).
-
+
+For a more detailed explanation of the flags, please refer to Section 4.5 of the Cairo paper.
+
Finally, the last bit is fixed to 0, but as we will see in the next section, this design has been modified in the current version of Cairo to support opcode extensions.
@@ -47,9 +48,10 @@ In Cairo, an **opcode** refers to what the instruction should do. Cairo defines
Since the 64-bit instruction structure is not flexible enough to support this extended set of opcodes, Cairo extends the instruction size to 72 bits and uses the last 9 bits as the **opcode extension** value.
+

-
*Figure 2: New instruction format with opcode extension*
+
As of [this commit](https://github.com/starkware-libs/stwo-cairo/blob/b712c77887f8f8ce0d39a8a9741221c89846836e/stwo_cairo_prover/crates/adapter/src/decode.rs#L4), the following opcode extension values are supported:
@@ -58,6 +60,6 @@ As of [this commit](https://github.com/starkware-libs/stwo-cairo/blob/b712c77887
- `2`: BlakeFinalize
- `3`: QM31Operation
-
-Even if an instruction does not belong to any predefined set of opcodes, it is considered a valid opcode as long as it adheres to the state-transition function defined in Section 4.5 of the [Cairo paper](https://eprint.iacr.org/2021/1063.pdf). In S-two Cairo, this is referred to as a _generic opcode_.
-
+
+Even if an instruction does not belong to any predefined set of opcodes, it is considered a valid opcode as long as it adheres to the state-transition function defined in Section 4.5 of the Cairo paper. In S-two Cairo, this is referred to as a _generic opcode_.
+
diff --git a/learn/S-two-book/cairo-air/index.mdx b/learn/S-two-book/cairo-air/index.mdx
index 338fe361e4..430cf97cc9 100644
--- a/learn/S-two-book/cairo-air/index.mdx
+++ b/learn/S-two-book/cairo-air/index.mdx
@@ -1,6 +1,5 @@
---
-title: "Cairo AIR"
+title: "Introduction"
---
-
The following sections cover how Cairo is expressed as an AIR and proved using S-two. The explanation is based on [this commit](https://github.com/starkware-libs/stwo-cairo/tree/b712c77887f8f8ce0d39a8a9741221c89846836e) of the S-two-Cairo repository.
diff --git a/learn/S-two-book/cairo-air/main-components/index.mdx b/learn/S-two-book/cairo-air/main-components/index.mdx
index cdd9d0865b..8bd51ed48b 100644
--- a/learn/S-two-book/cairo-air/main-components/index.mdx
+++ b/learn/S-two-book/cairo-air/main-components/index.mdx
@@ -5,17 +5,18 @@ title: "Main Components"
Now that we have a basic understanding of Cairo and the building blocks that are used to build the Cairo AIR, let's take a look at the main components of the Cairo AIR.
-
+
For readers who are unfamiliar with the concepts of components and lookups, we suggest going over the [Components](../../air-development/components/index) section of the book.
-
+
## Fetch, Decode, Execute
Cairo follows the common CPU architecture of fetching, decoding, and executing an instruction in a single CPU step. Below is a high-level diagram of what a single CPU step looks like in Cairo.
+

-
*Figure 1: A single CPU step in Cairo*
+
Since we need to prove the correctness of all CPU steps, the Cairo AIR writes the results of fetching, decoding, and executing an instruction at every CPU step into a trace and proves that the constraints over the trace are satisfied—i.e., consistent with the semantics of Cairo.
@@ -39,7 +40,7 @@ The first constraint is implemented by using a preprocessed column that contains
The second constraint is guaranteed because the `address` value is always unique.
-
+
A short explainer on how the `id` value is computed:
The `id` value is a 31-bit integer that is incremented by 1 from 0 whenever there is a unique memory access. For example, if the addresses `[5, 1523, 142]` were accessed in that order, the ids for those addresses will be `(5, 0)`, `(1523, 1)`, and `(142, 2)`.
@@ -47,15 +48,16 @@ The `id` value is a 31-bit integer that is incremented by 1 from 0 whenever ther
Since an `id` value needs to include information as to whether the corresponding `value` is `Small` or `Big`, we use the MSB as a flag (0 for `Small` and 1 for `Big`). Thus, an `id` value that corresponds to a `Small` value can occupy the space `[0, 2^30)`, while an `id` value that corresponds to a `Big` value can occupy the space `[2^30, 2^31)`.
In reality, the size of each table will be `[0, NUM_SMALL_VALUES_ACCESSED)` and `[2^30, 2^30 + NUM_BIG_VALUES_ACCESSED)`, where `NUM_SMALL_VALUES_ACCESSED` and `NUM_BIG_VALUES_ACCESSED` are values that are provided by the prover. To make sure that the `id` values are created correctly, we can use the preprocessed column as the `id` values.
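The MSB tagging can be sketched with a pair of hypothetical helpers (not the actual implementation):

```python
FLAG_SHIFT = 30  # MSB of a 31-bit id

def make_id(counter: int, big: bool) -> int:
    """Tag an incrementing counter with the Small/Big flag in the MSB."""
    assert counter < 2**30
    return counter | (1 << FLAG_SHIFT) if big else counter

def is_big(id_value: int) -> bool:
    return id_value >> FLAG_SHIFT == 1

assert make_id(0, False) == 0        # first Small id
assert make_id(0, True) == 2**30     # first Big id
assert is_big(make_id(5, True)) and not is_big(make_id(5, False))
```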
-
+
## 2. VerifyInstruction Component
-The `VerifyInstruction` component is responsible for accessing the instruction from the `Memory` component and decomposing the retrieved value. As mentioned in the [Felt252 to M31](../basic-building-blocks/index.mdx#felt252-to-m31) section, a 252-bit integer is stored as 28 9-bit limbs, so we need to decompose the limbs and concatenate them to get the values we need. For example, as in [Figure 2](#fig-limb-decomposition) in order to get the 3 16-bit offset values, we need to decompose the first 6 limbs into `[9, [7, 2], [9], [5, 4], [9], [3, 6]]` and concatenate them as the following: `[[9, 7], [2, 9, 5], [4, 9, 3]]`. Then, the remaining 6-bit value and the next limb will correspond to the 15-bit flags, and the next (8th) limb will be the opcode extension value. The other 20 limbs should all be zeros. At the end, we will have decomposed the instruction value into 3 16-bit offsets, 2 chunks of flags, and a 9-bit opcode extension.
+The `VerifyInstruction` component is responsible for accessing the instruction from the `Memory` component and decomposing the retrieved value. As mentioned in the [Felt252 to M31](../basic-building-blocks/index#felt252-to-m31) section, a 252-bit integer is stored as 28 9-bit limbs, so we need to decompose the limbs and concatenate them to get the values we need. For example, as in [Figure 2](#figure-2), in order to get the 3 16-bit offset values, we need to decompose the first 6 limbs into `[9, [7, 2], [9], [5, 4], [9], [3, 6]]` and concatenate them as follows: `[[9, 7], [2, 9, 5], [4, 9, 3]]`. Then, the remaining 6-bit value and the next limb will correspond to the 15-bit flags, and the next (8th) limb will be the opcode extension value. The other 20 limbs should all be zeros. At the end, we will have decomposed the instruction value into 3 16-bit offsets, 2 chunks of flags, and a 9-bit opcode extension.
+

-
*Figure 2: Limb decomposition*
+
Note that the decomposition will be constrained by range checking that each integer is within its corresponding range.
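To make the split concrete, here is a minimal, self-contained sketch of the decomposition. The function name and bit layout are illustrative assumptions derived from the split described above (little-endian 9-bit limbs), not the stwo-cairo implementation:

```rust
// Illustrative sketch: decompose the first 8 little-endian 9-bit limbs of an
// instruction into 3 x 16-bit offsets, 15 bits of flags, and a 9-bit opcode
// extension, following the split [9, [7, 2], [9], [5, 4], [9], [3, 6]].
// Names are hypothetical, not the stwo-cairo API.
fn decompose_instruction(limbs: [u32; 8]) -> (u32, u32, u32, u32, u32) {
    let offset0 = limbs[0] | ((limbs[1] & 0x7F) << 9); // 9 + 7 bits
    let offset1 = (limbs[1] >> 7) | (limbs[2] << 2) | ((limbs[3] & 0x1F) << 11); // 2 + 9 + 5 bits
    let offset2 = (limbs[3] >> 5) | (limbs[4] << 4) | ((limbs[5] & 0x07) << 13); // 4 + 9 + 3 bits
    let flags = (limbs[5] >> 3) | (limbs[6] << 6); // 6 + 9 = 15 bits
    let opcode_extension = limbs[7]; // 9 bits
    (offset0, offset1, offset2, flags, opcode_extension)
}

fn main() {
    // Arbitrary 9-bit limbs; limbs 8..27 of the instruction must all be zero.
    let limbs = [0x1AB, 0x0F3, 0x1FF, 0x012, 0x155, 0x0E7, 0x1C3, 0x099];
    let (o0, o1, o2, flags, ext) = decompose_instruction(limbs);
    // Reassembling the parts must reproduce the low 72 bits of the instruction.
    let from_limbs: u128 = limbs.iter().rev().fold(0, |acc, &l| (acc << 9) | l as u128);
    let from_parts: u128 = o0 as u128
        | (o1 as u128) << 16
        | (o2 as u128) << 32
        | (flags as u128) << 48
        | (ext as u128) << 63;
    assert_eq!(from_limbs, from_parts);
    println!("offsets = ({o0}, {o1}, {o2}), flags = {flags:#b}, ext = {ext}");
}
```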
@@ -85,9 +87,10 @@ On the left-hand side, we can see lookups that are computed directly by the veri
Each component has a **claimed sum** value that corresponds to the sum of all the lookups in the component. These claimed sums are provided by the prover as part of the proof, and the verifier adds them together along with the lookups on the left-hand side. If the total sum is zero, the verifier is convinced that the proof is valid.
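The cancellation behind this check can be illustrated with a toy sketch over $\mathsf{M31}$ (purely illustrative, not the stwo API): one component emits each looked-up value as $+1/(\alpha - v)$ and another consumes it as $-1/(\alpha - v)$, so their claimed sums cancel to zero.

```rust
// Toy logUp cancellation over M31 (p = 2^31 - 1). Component 0 "yields" each
// value and component 1 "uses" it; the claimed sums must add up to zero.
const P: u64 = (1 << 31) - 1;

fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2) // Fermat inverse
}

fn main() {
    let alpha = 987_654_321u64; // stand-in for the random challenge
    let values = [5u64, 17, 42, 99];
    let claimed_sum_0: u64 = values
        .iter()
        .fold(0, |s, &v| (s + inv((alpha + P - v) % P)) % P);
    let claimed_sum_1: u64 = values
        .iter()
        .fold(0, |s, &v| (s + P - inv((alpha + P - v) % P)) % P);
    assert_eq!((claimed_sum_0 + claimed_sum_1) % P, 0);
    println!("claimed sums cancel to zero: interaction is consistent");
}
```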
+

-
*Figure 3: Lookups between the main components*
+
### Memory Lookups
diff --git a/learn/S-two-book/how-it-works/air/components.mdx b/learn/S-two-book/how-it-works/air/components.mdx
index 1a2af5013b..4cbe3c54fe 100644
--- a/learn/S-two-book/how-it-works/air/components.mdx
+++ b/learn/S-two-book/how-it-works/air/components.mdx
@@ -3,7 +3,7 @@ title: "Components"
---
-This section examines how the components are implemented and covers some important functions defined on them, without diving into AIR-specific details since they are already covered in the [earlier sections](../../air-development/index).
+This section examines how the components are implemented and covers some important functions defined on them, without diving into AIR-specific details since they are already covered in the [earlier sections](/learn/S-two-book/air-development/index).
The `Components` struct is a collection of components, implemented as follows:
```rust
@@ -13,7 +13,7 @@ pub struct Components<'a> {
}
```
-Here, `components` is a collection of objects that implement the `Component` trait, and `n_preprocessed_columns` is the total number of preprocessed columns used across all components (refer to [Preprocessed Trace](../../air-development/preprocessed-trace/index)).
+Here, `components` is a collection of objects that implement the `Component` trait, and `n_preprocessed_columns` is the total number of preprocessed columns used across all components (refer to [Preprocessed Trace](/learn/S-two-book/air-development/preprocessed-trace/index)).
The `Component` trait represents a trace table along with a set of constraints. It implements the following functions:
@@ -59,7 +59,7 @@ The `composition_log_degree_bound` function determines the log of the degree of
}
```
-For each component, this calls the `max_constraint_log_degree_bound()` function, which returns the log of the highest polynomial degree among all constraints in that component. It then takes the maximum value across all components. For the [example AIR containing two components](./overview.mdx#air-to-composition-polynomial), this function returns $\max({\log{(\deg{(p_0)})}, \log{(\deg{(p_1)})}})$.
+For each component, this calls the `max_constraint_log_degree_bound()` function, which returns the log of the highest polynomial degree among all constraints in that component. It then takes the maximum value across all components. For the [example AIR containing two components](/learn/S-two-book/how-it-works/air/overview#air-to-composition-polynomial), this function returns $\max({\log{(\deg{(p_0)})}, \log{(\deg{(p_1)})}})$.
## Mask Points
The `mask_points` function determines all evaluation points needed to verify constraints at a given point
@@ -123,7 +123,7 @@ The inputs to this function are as follows:
- `mask_values`: The evaluations of the polynomials at the mask points that were previously determined by the `mask_points` function. These provide the constraint polynomial values needed to compute the composition polynomial at the input `point`.
- `random_coeff`: An element from the `SecureField` (i.e. $\mathsf{QM31}$). In the example, this is represented as $\gamma$, which is used to compose all constraints into a single composition polynomial.
-The function body operates as follows. First, an `evaluation_accumulator` is instantiated. Then, for each component, the evaluation of the component-level quotient is added to the `evaluation_accumulator`. For the [example AIR containing two components](./overview.mdx#air-to-composition-polynomial), this adds the evaluations of component-level quotients $q_0$ and $q_1$ at the input `point` to the `evaluation_accumulator`. Finally, the `finalize()` function is called on the `evaluation_accumulator`, which outputs the random linear combination evaluated at the input `point`.
+The function body operates as follows. First, an `evaluation_accumulator` is instantiated. Then, for each component, the evaluation of the component-level quotient is added to the `evaluation_accumulator`. For the [example AIR containing two components](/learn/S-two-book/how-it-works/air/overview#air-to-composition-polynomial), this adds the evaluations of component-level quotients $q_0$ and $q_1$ at the input `point` to the `evaluation_accumulator`. Finally, the `finalize()` function is called on the `evaluation_accumulator`, which outputs the random linear combination evaluated at the input `point`.
$$
q = q_0 + \gamma^{c_0} \cdot q_1
$$
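As a sanity check, the combination step can be sketched in a few lines, working in the base field $\mathsf{M31}$ for brevity (S-two actually samples $\gamma$ from $\mathsf{QM31}$); the function names are illustrative, not the stwo API:

```rust
// Minimal sketch: combine two component-level quotient evaluations at a point
// into q = q0 + gamma^{c0} * q1 over M31 (p = 2^31 - 1), where c0 is the
// number of constraints in component 0. Not the stwo API.
const P: u64 = (1 << 31) - 1;

fn mul(a: u64, b: u64) -> u64 {
    a % P * (b % P) % P
}

fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, base);
        }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

fn compose(q0: u64, q1: u64, gamma: u64, c0: u64) -> u64 {
    (q0 + mul(pow(gamma, c0), q1)) % P
}

fn main() {
    let (q0, q1, gamma, c0) = (123_456u64, 789_012, 5, 3);
    // gamma^3 = 125, so q = q0 + 125 * q1 mod p.
    assert_eq!(compose(q0, q1, gamma, c0), (q0 + 125 * q1) % P);
    println!("q = {}", compose(q0, q1, gamma, c0));
}
```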
diff --git a/learn/S-two-book/how-it-works/air/index.mdx b/learn/S-two-book/how-it-works/air/index.mdx
index 231f4ad2ff..1c15c16e2a 100644
--- a/learn/S-two-book/how-it-works/air/index.mdx
+++ b/learn/S-two-book/how-it-works/air/index.mdx
@@ -1,5 +1,5 @@
---
-title: "AIR to Composition Polynomial"
+title: "Introduction"
---
@@ -7,10 +7,10 @@ title: "AIR to Composition Polynomial"
This section is organized as follows:
-- **[Technical Overview](./overview)**: Provides a high-level explanation of how multiple components are combined into a composition polynomial.
+- **[Technical Overview](/learn/S-two-book/how-it-works/air/overview)**: Provides a high-level explanation of how multiple components are combined into a composition polynomial.
-- **[Components](./components)**: Examines the implementation of the `Components` struct and key functions like `mask_points` and `eval_composition_polynomial_at_point`.
+- **[Components](/learn/S-two-book/how-it-works/air/components)**: Examines the implementation of the `Components` struct and key functions like `mask_points` and `eval_composition_polynomial_at_point`.
-- **[Prover Components](./prover_components)**: Details the `ComponentProvers` struct and its specialized functions for computing composition polynomials during the proving process, including the main `compute_composition_polynomial` function.
+- **[Prover Components](/learn/S-two-book/how-it-works/air/prover_components)**: Details the `ComponentProvers` struct and its specialized functions for computing composition polynomials during the proving process, including the main `compute_composition_polynomial` function.
diff --git a/learn/S-two-book/how-it-works/air/overview.mdx b/learn/S-two-book/how-it-works/air/overview.mdx
index b2bd86d1f8..d0ba93fec7 100644
--- a/learn/S-two-book/how-it-works/air/overview.mdx
+++ b/learn/S-two-book/how-it-works/air/overview.mdx
@@ -3,11 +3,11 @@ title: "Technical Overview"
---
-As shown in an earlier section (refer to [Components](../../air-development/components/index)), S-two represents the trace using multiple tables where each table is referred to as a _component_. An AIR in S-two is a collection of multiple components. Components can interact with one another, and the consistency of these interactions is verified using [_logUp_](https://eprint.iacr.org/2022/1530.pdf).
+As shown in an earlier section (refer to [Components](/learn/S-two-book/air-development/components/index)), S-two represents the trace using multiple tables where each table is referred to as a _component_. An AIR in S-two is a collection of multiple components. Components can interact with one another, and the consistency of these interactions is verified using [_logUp_](https://eprint.iacr.org/2022/1530.pdf).
-For example, in the [hash function example](../../air-development/components/index.mdx#hash-function-example), the scheduling component and computing component interact with each other, where both the components lookup the input and output pair. The consistency of this interaction is then verified by adding logUp constraints to each component.
+For example, in the [hash function example](/learn/S-two-book/air-development/components/index#hash-function-example), the scheduling component and computing component interact with each other, where both the components lookup the input and output pair. The consistency of this interaction is then verified by adding logUp constraints to each component.
-Each component consists of a trace table of a specific height along with a set of constraints. These constraints include computation constraints as well as lookup constraints (refer to [Lookups](../lookups)).
+Each component consists of a trace table of a specific height along with a set of constraints. These constraints include computation constraints as well as lookup constraints (refer to [Lookups](/learn/S-two-book/how-it-works/lookups)).
This section provides an overview of how these components are converted into a composition polynomial. This composition polynomial is then used by a polynomial commitment scheme to commit and generate evaluation proofs.
@@ -15,9 +15,9 @@ This section provides an overview of how these components are converted into a c
This section explains how a composition polynomial is computed using an example.
-Consider an AIR composed of two components with trace tables: $\mathscr{T}_0$ and $\mathscr{T}_1$. The component table $\mathscr{T}_0$ is defined over the trace domain $D_{n_0}$, which is a [canonic coset](../circle-group.mdx#canonic-coset) of size $N_0 = 2^{n_0}$. Similarly, component table $\mathscr{T}_1$ is defined over the trace domain $D_{n_1}$ of size $N_1 = 2^{n_1}$.
+Consider an AIR composed of two components with trace tables: $\mathscr{T}_0$ and $\mathscr{T}_1$. The component table $\mathscr{T}_0$ is defined over the trace domain $D_{n_0}$, which is a [canonic coset](/learn/S-two-book/how-it-works/circle-group#canonic-coset) of size $N_0 = 2^{n_0}$. Similarly, component table $\mathscr{T}_1$ is defined over the trace domain $D_{n_1}$ of size $N_1 = 2^{n_1}$.
-The prover interpolates the trace polynomials for each component and then evaluates the trace polynomials over an evaluation domain that is a blowup factor $B = 2^\beta$ times larger than the trace domain. Thus, the evaluation domain for component table $\mathscr{T}_0$ is $D_{n_0 + \beta}$ of size $2^{n_0 + \beta}$. Similarly, the evaluation domain for component table $\mathscr{T}_1$ is $D_{n_1 + \beta}$ of size $2^{n_1 + \beta}$. Both interpolation and evaluation use [circle FFT](../circle-fft/index). The prover then commits to the evaluations of trace polynomials from all components using a single [Merkle tree](../vcs/index).
+The prover interpolates the trace polynomials for each component and then evaluates the trace polynomials over an evaluation domain that is a blowup factor $B = 2^\beta$ times larger than the trace domain. Thus, the evaluation domain for component table $\mathscr{T}_0$ is $D_{n_0 + \beta}$ of size $2^{n_0 + \beta}$. Similarly, the evaluation domain for component table $\mathscr{T}_1$ is $D_{n_1 + \beta}$ of size $2^{n_1 + \beta}$. Both interpolation and evaluation use [circle FFT](/learn/S-two-book/how-it-works/circle-fft/index). The prover then commits to the evaluations of trace polynomials from all components using a single [Merkle tree](/learn/S-two-book/how-it-works/vcs/index).
Both component tables define computation constraints and lookup constraints for their specific component. The component constraints are proven table-wise, each with its own quotient over its own domain, and these quotients are then combined into a single cross-domain composition polynomial. Suppose the component tables $\mathscr{T}_0$ and $\mathscr{T}_1$ have a total of $c_0$ and $c_1$ constraints, respectively.
diff --git a/learn/S-two-book/how-it-works/air/prover_components.mdx b/learn/S-two-book/how-it-works/air/prover_components.mdx
index 4ec48ce40a..cccf1110fb 100644
--- a/learn/S-two-book/how-it-works/air/prover_components.mdx
+++ b/learn/S-two-book/how-it-works/air/prover_components.mdx
@@ -58,7 +58,7 @@ The main function defined on the `ComponentProvers` struct to compute the compos
}
```
-Let us examine the above function for our [example AIR containing two components](./overview.mdx#air-to-composition-polynomial). It takes the following three inputs:
+Let us examine the above function for our [example AIR containing two components](/learn/S-two-book/how-it-works/air/overview#air-to-composition-polynomial). It takes the following three inputs:
- `&self`: This is the `ComponentProvers` on which the function is called.
- `random_coeff`: This is an element from the `SecureField` (i.e. $\mathsf{QM31}$). In our example, this is represented as $\gamma$.
- `trace`: The `Trace` struct which contains all the polynomials that make up the entire trace including all the components. For efficiency, it stores each polynomial in both coefficients and evaluations form.
@@ -77,6 +77,6 @@ For each component, we call `evaluate_constraint_quotients_on_domain`, which com
After adding all component quotient evaluations to the accumulator, we call the `finalize()` function, which:
1. Combines the accumulated evaluations at different domain sizes to compute the evaluations of the quotient composition polynomial $q$ over the domain $D_{n + \beta}$ where $n = \max{(n_0, n_1)}$.
-2. Interpolates $q$ over $D_{n + \beta}$ using [circle FFT](../circle-fft/index) to convert it into coefficient representation.
+2. Interpolates $q$ over $D_{n + \beta}$ using [circle FFT](/learn/S-two-book/how-it-works/circle-fft/index) to convert it into coefficient representation.
-Note that the output is a [`SecureCirclePoly`](../circle-polynomials/secure-evals-and-poly.mdx#secure-circle-polynomials) since the evaluations of $q$ are in the secure field $\mathsf{QM31}$ (as $\gamma$ is randomly sampled from $\mathsf{QM31}$).
\ No newline at end of file
+Note that the output is a [`SecureCirclePoly`](/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly#secure-circle-polynomials) since the evaluations of $q$ are in the secure field $\mathsf{QM31}$ (as $\gamma$ is randomly sampled from $\mathsf{QM31}$).
\ No newline at end of file
diff --git a/learn/S-two-book/how-it-works/circle-fft/algorithm.mdx b/learn/S-two-book/how-it-works/circle-fft/algorithm.mdx
index f7d751eca7..68bf0d1bde 100644
--- a/learn/S-two-book/how-it-works/circle-fft/algorithm.mdx
+++ b/learn/S-two-book/how-it-works/circle-fft/algorithm.mdx
@@ -9,33 +9,39 @@ Circle FFT follows a divide-and-conquer strategy, as in the classical Cooley–T
## Sequence of Domains for Circle FFT
-In Circle FFT, we use a 2-to-1 map to halve the domain size at each recursive layer. The domain used here is the [circle domain](../circle-group.mdx#circle-domain) $D_n$ of size $|D_n| = 2^n$.
+In Circle FFT, we use a 2-to-1 map to halve the domain size at each recursive layer. The domain used here is the [circle domain](/learn/S-two-book/how-it-works/circle-group#circle-domain) $D_n$ of size $|D_n| = 2^n$.
$$D_n = q + \langle g_{n-1} \rangle \cup -q + \langle g_{n-1} \rangle$$
This section describes two specific 2-to-1 maps that are central to the Circle FFT construction:
1. **Projection map $\pi_x$**: The projection map $\pi_x$ projects the points $(x,y)$ and $(x,-y)$ to the same $x$-coordinate, i.e.
+
$$\pi_x: D_n \rightarrow S_n, \quad \pi_x((x, y)) = x$$
+
where $S_n$ is the set containing all the $x$-coordinates from $D_n$. This can be interpreted as saying that two points are considered equivalent if they differ only in the sign of their $y$-coordinate. Since $\pi_x$ maps two points from $D_n$ to a single element in $S_n$, the size of $S_n$ is half the size of $D_n$.
2. **Squaring map $\pi$**: The squaring map $\pi$ is a 2-to-1 map defined by:
+
$$\pi: S_n \rightarrow S_{n-1}, \quad \pi(x) = 2x^2 - 1$$
+
This is obtained using the doubling map and the equality $y^2 = 1 - x^2$ to compute the $x$-coordinate:
$$\pi(x, y) = (x, y) + (x, y) = (x^2 - y^2, 2xy) = (2x^2-1, 2xy)$$
In Circle FFT, we use both the projection map $\pi_x$ and the squaring map $\pi$ to construct the sequence of domains. The sequence of domains for Circle FFT is shown as follows:
+
$$D_n \xrightarrow{\pi_x} S_n \xrightarrow{\pi} S_{n-1} \xrightarrow{\pi} \cdots \xrightarrow{\pi} S_1$$
Now that we know the sequence of domains, let us dive into the Circle FFT algorithm.
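The two maps are simple enough to sketch directly over the toy field $\mathbb{F}_{31}$ used in the worked example below (illustrative code, not the stwo API):

```rust
// Sketch over F_31: pi_x drops the y-coordinate, and the squaring map
// pi(x) = 2x^2 - 1 halves the x-coordinate domain at each step.
const P: u64 = 31;

fn pi_x(p: (u64, u64)) -> u64 {
    p.0
}

fn pi(x: u64) -> u64 {
    (2 * x * x + P - 1) % P
}

fn main() {
    // S_3 = pi_x(D_3) for half of the size-8 circle domain D_3 of the example.
    let s3: Vec<u64> = [(7u64, 18u64), (13, 7), (24, 13), (18, 24)]
        .into_iter()
        .map(pi_x)
        .collect();
    assert_eq!(s3, vec![7, 13, 24, 18]);
    // The squaring map is 2-to-1: the four points of S_3 map onto S_2 = {4, 27}.
    let s2: Vec<u64> = s3.iter().map(|&x| pi(x)).collect();
    assert_eq!(s2, vec![4, 27, 4, 27]);
    println!("S_2 = {:?}", s2);
}
```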
## Circle FFT
-The Circle FFT interpolates a bivariate polynomial $p(x,y)$ from the [polynomial space](../circle-polynomials/evals-and-poly.mdx#polynomials-over-the-circle) $L_N(F)$, given the evaluations over a circle domain $D_n$ of size $N=2^n$. The algorithm is a three step process described as follows.
+The Circle FFT interpolates a bivariate polynomial $p(x,y)$ from the [polynomial space](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly#polynomials-over-the-circle) $L_N(F)$, given its evaluations over a circle domain $D_n$ of size $N=2^n$. The algorithm is a three-step process, described as follows.
#### Step 1: Decompose $p(x, y)$ into sub-polynomials
In the first step, we decompose the bivariate polynomial $p(x, y)$ over $D_n$ using the projection map $\pi_x$ into two univariate polynomials $p_0(x)$ and $p_1(x)$ as follows:
+
$$p(x, y) = p_0(\pi_x(x, y)) + y \cdot p_1(\pi_x(x, y)) = p_0(x) + y \cdot p_1(x)$$
Now to continue with the FFT algorithm, we want to compute the evaluations of $p_0(x)$ and $p_1(x)$ over the smaller domain $S_n = \pi_x(D_n)$, given the evaluations of $p(x, y)$ over $D_n$. This is done using the following equations:
@@ -52,20 +58,50 @@ To compute the evaluations of $p_1(x)$ over the domain $S_n$, we subtract the ev
> **Example**: To make all the calculations easier, we will work through a concrete example over the Mersenne prime $p = 2^5 - 1 = 31$. Thus all calculations are over $\mathbb{F}_{31}$, i.e., modulo $31$. We are given the evaluations $\vec{v} = [13, 16, 9, 30, 29, 27, 13, 21]$ of $p(x, y)$ over the circle domain
+>
> $$D_3 = [(7, 18), (13, 7), (24, 13), (18, 24), (7, 13), (13, 24), (24, 18), (18, 7)]$$
+>
> and we want to compute the coefficients of $p(x, y)$ i.e. interpolate $p(x, y)$ given its evaluations $\vec{v}$ over $D_3$.
>
-> **Step 1**: . First, decompose the polynomial $p(x, y)$ using the map $\pi_x(x, y) = x$:
+> **Step 1**: First, decompose the polynomial $p(x, y)$ using the map $\pi_x(x, y) = x$:
>
> $$p(x, y) = p_0(\pi_x(x, y)) + y \cdot p_1(\pi_x(x, y)) = p_0(x) + y \cdot p_1(x)$$
>
> Given the evaluations $\vec{v}$ of $p(x, y)$ over the circle domain $D_3$ we aim to compute the evaluations of $p_0$ and $p_1$ over the smaller domain $S_3$:
+>
> $$S_3 = \pi_x(D_3) = [7, 13, 24, 18]$$
+>
> For $(x, y)=(7, 18)$ and $(x, -y) = (7, 13)$ in $D_3$, substituting them into the following equations give:
+>
> $$p_0(x) = \frac{p(x, y) + p(x, -y)}{2} = \frac{p(7, 18) + p(7, 13)}{2} = \frac{13 + 29}{2} = 21$$
+>
> $$p_1(x) = \frac{p(x, y) - p(x, -y)}{2 \cdot y} = \frac{p(7, 18) - p(7, 13)}{2 \cdot 18} = \frac{13 - 29}{2 \cdot 18} = 3$$
>
> Repeating this process for other pairs $(x, y)$ and $(x, -y)$ in $D_3$, we obtain:
+>
> $$\vec{v_0} = [21, 6, 11, 10], \quad \vec{v_1} = [3, 28, 7, 6]$$
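The computation in Step 1 can be reproduced with a short sketch over $\mathbb{F}_{31}$ (illustrative, not the stwo implementation):

```rust
// Step 1 over F_31: split the evaluations of p(x, y) on D_3 into evaluations
// of p0 and p1 on S_3 via
//   p0(x) = (p(x, y) + p(x, -y)) / 2  and  p1(x) = (p(x, y) - p(x, -y)) / (2y).
const P: u64 = 31;

fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 {
    pow(a, P - 2) // Fermat inverse in F_31
}

fn main() {
    // D_3 is ordered so that indices i and i + 4 hold the pair (x, y), (x, -y).
    let d3 = [(7u64, 18u64), (13, 7), (24, 13), (18, 24), (7, 13), (13, 24), (24, 18), (18, 7)];
    let v = [13u64, 16, 9, 30, 29, 27, 13, 21];
    let half = inv(2);
    let mut v0 = Vec::new();
    let mut v1 = Vec::new();
    for i in 0..4 {
        let y = d3[i].1;
        v0.push((v[i] + v[i + 4]) % P * half % P);
        v1.push((v[i] + P - v[i + 4]) % P * inv(2 * y % P) % P);
    }
    assert_eq!(v0, vec![21, 6, 11, 10]);
    assert_eq!(v1, vec![3, 28, 7, 6]);
    println!("v0 = {:?}, v1 = {:?}", v0, v1);
}
```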
#### Step 2: Recursively apply FFT to sub-polynomials
@@ -93,15 +129,33 @@ Similar to _circle twiddles_, to compute the evaluations of $p_{01}(x)$ over $S_
> **Step 2**: Given the evaluations of $p_0$ and $p_1$ over $S_3 = [7, 13, 24, 18]$:
+>
> $$\vec{v_0} = [21, 6, 11, 10], \quad \vec{v_1} = [3, 28, 7, 6]$$
+>
> we recursively apply the FFT algorithm to compute the coefficients of $p_0$ and $p_1$.
>
> At each layer, we decompose the polynomial using the polynomial map $\pi(x)$. For example, the decomposition of $p_0(x)$ is:
>
> $$p_0(x) = p_{00}(\pi(x)) + x \cdot p_{01}(\pi(x))$$
>
> Omitting the intermediate steps of the recursive calls, we eventually obtain the coefficients of $p_0(x)$ and $p_1(x)$ as follows:
+>
> $$p_0(x) = 12 + 26 \cdot x + \pi(x) + 28 \cdot x \cdot \pi(x)$$
+>
> $$p_1(x) = 11 + 26 \cdot x + 14 \cdot \pi(x) + 20 \cdot x \cdot \pi(x)$$
#### Step 3: Combine the coefficients
@@ -110,14 +164,37 @@ Finally, we combine the coefficients of $p_0(x)$ and $p_1(x)$ to compute the coe
$$p(x, y) = p_0(x) + y \cdot p_1(x)$$
> **Step 3**: Given the coefficients of $p_0(x)$ and $p_1(x)$:
+>
> $$p_0(x) = 12 + 26 \cdot x + \pi(x) + 28 \cdot x \cdot \pi(x)$$
+>
> $$p_1(x) = 11 + 26 \cdot x + 14 \cdot \pi(x) + 20 \cdot x \cdot \pi(x)$$
+>
> we reconstruct the original polynomial $p(x, y)$ using the decomposition:
+>
> $$p(x, y) = p_0(x) + y \cdot p_1(x)$$
>
> Substituting the expressions for $p_0$ and $p_1$, we get:
+>
> $$p(x, y) = 12 + 26 \cdot x + \pi(x) + 28 \cdot x \cdot \pi(x) + y \cdot (11 + 26 \cdot x + 14 \cdot \pi(x) + 20 \cdot x \cdot \pi(x))$$
+>
> $$p(x, y) = 12 + 26 \cdot x + \pi(x) + 28 \cdot x \cdot \pi(x) + \quad \quad \quad \quad \quad \quad$$
+>
> $$\quad \quad \quad \quad \quad \quad 11 \cdot y + 26 \cdot x \cdot y + 14 \cdot y \cdot \pi(x) + 20 \cdot x \cdot y \cdot \pi(x)$$
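As a closing check on the worked example, evaluating the recovered polynomial at every point of $D_3$ reproduces the original evaluations $\vec{v}$ (illustrative sketch, not the stwo implementation):

```rust
// Evaluate the recovered p(x, y) = p0(x) + y * p1(x) over F_31, with
// pi(x) = 2x^2 - 1, at all 8 points of D_3 and compare against v.
const P: u64 = 31;

fn pi(x: u64) -> u64 {
    (2 * x * x + P - 1) % P
}

fn eval(x: u64, y: u64) -> u64 {
    let p0 = (12 + 26 * x + pi(x) + 28 * x % P * pi(x)) % P;
    let p1 = (11 + 26 * x + 14 * pi(x) + 20 * x % P * pi(x)) % P;
    (p0 + y * p1) % P
}

fn main() {
    let d3 = [(7u64, 18u64), (13, 7), (24, 13), (18, 24), (7, 13), (13, 24), (24, 18), (18, 7)];
    let v = [13u64, 16, 9, 30, 29, 27, 13, 21];
    for (i, &(x, y)) in d3.iter().enumerate() {
        assert_eq!(eval(x, y), v[i]);
    }
    println!("p(x, y) matches v on all 8 points of D_3");
}
```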
This completes an overview of the interpolation algorithm using Circle FFT. In the next section, we will see how the twiddle values are computed and stored for Circle FFT.
diff --git a/learn/S-two-book/how-it-works/circle-fft/basis.mdx b/learn/S-two-book/how-it-works/circle-fft/basis.mdx
index 9c07a9df1e..4d53a54695 100644
--- a/learn/S-two-book/how-it-works/circle-fft/basis.mdx
+++ b/learn/S-two-book/how-it-works/circle-fft/basis.mdx
@@ -7,7 +7,7 @@ The circle FFT algorithm outputs coefficients $c_j$ with respect to some basis $
$$p(x, y) = \sum_{j = 0}^{2^n - 1} c_j \cdot b_j^{(n)}(x, y)$$
-In the concrete example (covered in the [Algorithm section](./algorithm)) of Circle FFT over twin-coset $D_3$, we saw that the algorithm computed the coefficient with respect to the following basis:
+In the concrete example (covered in the [Algorithm section](/learn/S-two-book/how-it-works/circle-fft/algorithm)) of Circle FFT over twin-coset $D_3$, we saw that the algorithm computed the coefficient with respect to the following basis:
$$b^{(3)}_j(x, y) = [1, x, \pi(x), x \cdot \pi(x), y, y \cdot x, y \cdot \pi(x), y \cdot x \cdot \pi(x)]$$
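The bit pattern of the index $j$ determines which factors appear in $b^{(n)}_j$. The following sketch regenerates the basis symbolically, assuming the bit-to-factor mapping implied by the list above (the lowest bit selects $x$, the middle bits select $\pi^k(x)$, and the top bit selects $y$); it is illustrative, not a stwo API:

```rust
// Hypothetical generator for the circle FFT basis as strings: for
// j = j0 + 2*j1 + ... + 2^{n-1}*j_{n-1}, the element is
// y^{j_{n-1}} * x^{j0} * pi(x)^{j1} * ... * pi^{n-2}(x)^{j_{n-2}}.
fn basis_strings(n: u32) -> Vec<String> {
    (0..(1u32 << n))
        .map(|j| {
            let mut factors: Vec<String> = Vec::new();
            if (j >> (n - 1)) & 1 == 1 {
                factors.push("y".into());
            }
            if j & 1 == 1 {
                factors.push("x".into());
            }
            for k in 1..(n - 1) {
                if (j >> k) & 1 == 1 {
                    factors.push(if k == 1 { "pi(x)".into() } else { format!("pi^{k}(x)") });
                }
            }
            if factors.is_empty() { "1".into() } else { factors.join("*") }
        })
        .collect()
}

fn main() {
    // Matches the n = 3 basis listed above.
    assert_eq!(
        basis_strings(3),
        ["1", "x", "pi(x)", "x*pi(x)", "y", "y*x", "y*pi(x)", "y*x*pi(x)"]
    );
    println!("{:?}", basis_strings(3));
}
```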
@@ -21,7 +21,7 @@ $$j = j_0 + j_1 \cdot 2 + \cdots + j_{n-1} \cdot 2^{n-1}$$
# Dimension Gap
-We will now explore the space of polynomials interpolated by the Circle FFT algorithm. Let the space spanned by the basis polynomials in $b^{(n)}(x, y)$ be $L'_N(F)$. The basis $b^{(n)}(x, y)$ has a total of $N=2^n$ elements and thus the dimension of the space $L'_N(F)$ is $N$. However, the space of bivariate polynomials over the circle curve is $L_N(F)$, which has dimension $N+1$ (as covered in the [section on polynomials over the circle](../circle-polynomials/evals-and-poly.mdx#polynomials-over-the-circle)).
+We will now explore the space of polynomials interpolated by the Circle FFT algorithm. Let the space spanned by the basis polynomials in $b^{(n)}(x, y)$ be $L'_N(F)$. The basis $b^{(n)}(x, y)$ has a total of $N=2^n$ elements and thus the dimension of the space $L'_N(F)$ is $N$. However, the space of bivariate polynomials over the circle curve is $L_N(F)$, which has dimension $N+1$ (as covered in the [section on polynomials over the circle](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly#polynomials-over-the-circle)).
We can identify the missing highest total degree element in $L'_N(F)$ by examining the basis. The highest total degree element in basis $b^{(n)}(x, y)$ is:
$$y \cdot x \cdot \pi(x) \cdot \pi^2(x) \cdot \pi^3(x) \cdots \pi^{n-2}(x)$$
diff --git a/learn/S-two-book/how-it-works/circle-fft/index.mdx b/learn/S-two-book/how-it-works/circle-fft/index.mdx
index b7892eb1a9..4bd9ee35af 100644
--- a/learn/S-two-book/how-it-works/circle-fft/index.mdx
+++ b/learn/S-two-book/how-it-works/circle-fft/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Circle FFT"
+title: "Introduction"
---
@@ -7,7 +7,7 @@ title: "Circle FFT"
This section is organized as follows:
-- **[Algorithm](./algorithm)**: Overview of the Circle FFT algorithm with concrete examples showing the three-step interpolation process.
-- **[Twiddles](./twiddles)**: Precomputation and storage of twiddle values required for efficient FFT operations.
-- **[Interpolate](./interpolation)**: Detailed implementation walkthrough of the interpolation function with code breakdown.
-- **[Basis and Dimension Gap](./basis)**: FFT basis for Circle FFT and analysis of the dimension gap in circle polynomial spaces.
\ No newline at end of file
+- **[Algorithm](/learn/S-two-book/how-it-works/circle-fft/algorithm)**: Overview of the Circle FFT algorithm with concrete examples showing the three-step interpolation process.
+- **[Twiddles](/learn/S-two-book/how-it-works/circle-fft/twiddles)**: Precomputation and storage of twiddle values required for efficient FFT operations.
+- **[Interpolate](/learn/S-two-book/how-it-works/circle-fft/interpolation)**: Detailed implementation walkthrough of the interpolation function with code breakdown.
+- **[Basis and Dimension Gap](/learn/S-two-book/how-it-works/circle-fft/basis)**: FFT basis for Circle FFT and analysis of the dimension gap in circle polynomial spaces.
diff --git a/learn/S-two-book/how-it-works/circle-fft/interpolation.mdx b/learn/S-two-book/how-it-works/circle-fft/interpolation.mdx
index 394ea6ad2d..d3b9fd7a74 100644
--- a/learn/S-two-book/how-it-works/circle-fft/interpolation.mdx
+++ b/learn/S-two-book/how-it-works/circle-fft/interpolation.mdx
@@ -71,7 +71,7 @@ The function includes hardcoded implementations for circle domains of small size
- The `circle_twiddles` are the twiddles required at the first layer, where all points are projected to the $x$-axis. These correspond to the $y$-coordinates of the points in the half coset.
- The `line_twiddles` are twiddles required to compute FFT at the subsequent recursive layers where the squaring map $\pi$ is used as the 2-to-1 map.
- > For the example of the [half coset](./twiddles.mdx#fig-half-coset), there are three FFT layers: one layer with the projection map $\pi_x$ and two recursive layers with the squaring map $\pi$.
+ > For the example of the [half coset](/learn/S-two-book/how-it-works/circle-fft/twiddles#fig-half-coset), there are three FFT layers: one layer with the projection map $\pi_x$ and two recursive layers with the squaring map $\pi$.
>
> The `line_twiddles` for the two recursive layers are computed using the inverse twiddles $[a^{-1}, b^{-1}, \pi(a)^{-1}, 1]$. For the first recursive layer, the twiddles are the inverse of the $x$-coordinates: $[a^{-1}, b^{-1}]$. For the second recursive layer, the twiddles are the inverse of the square of the $x$-coordinates: $[\pi(a)^{-1}]$. Thus for this example, the `line_twiddles` are $[[a^{-1}, b^{-1}], [\pi(a)^{-1}]]$.
diff --git a/learn/S-two-book/how-it-works/circle-fft/twiddles.mdx b/learn/S-two-book/how-it-works/circle-fft/twiddles.mdx
index 52d063269b..485d87886d 100644
--- a/learn/S-two-book/how-it-works/circle-fft/twiddles.mdx
+++ b/learn/S-two-book/how-it-works/circle-fft/twiddles.mdx
@@ -27,7 +27,7 @@ For `CpuBackend`, `B::Twiddles` is a vector of `BaseField` elements. Here, `root
Let us now see how the twiddle tree is computed, using an example. Consider the following half coset $q + \langle g_2 \rangle$ of a circle domain $D_3$.
- 
+ 
*Figure 1: Half Coset of size 4*
diff --git a/learn/S-two-book/how-it-works/circle-fri/fri_prover.mdx b/learn/S-two-book/how-it-works/circle-fri/fri_prover.mdx
index 88df4ff41c..8aac1a19fe 100644
--- a/learn/S-two-book/how-it-works/circle-fri/fri_prover.mdx
+++ b/learn/S-two-book/how-it-works/circle-fri/fri_prover.mdx
@@ -30,7 +30,7 @@ We calculate the security bits of our protocol as follows:
self.log_blowup_factor * self.n_queries as u32
}
```
-This is as we discussed in the [Security Analysis section](./overview.mdx#security-analysis).
+This is as we discussed in the [Security Analysis section](/learn/S-two-book/how-it-works/circle-fri/overview#security-analysis).
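A back-of-the-envelope sketch of this estimate (the parameter values below are illustrative, not stwo defaults):

```rust
// Each query contributes log_blowup_factor bits of security, so the total is
// their product, mirroring the snippet above.
fn security_bits(log_blowup_factor: u32, n_queries: usize) -> u32 {
    log_blowup_factor * n_queries as u32
}

fn main() {
    // e.g. a blowup of 2^4 with 24 queries targets 96 bits of security.
    assert_eq!(security_bits(4, 24), 96);
    println!("{} bits", security_bits(4, 24));
}
```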
## Proving
@@ -44,10 +44,10 @@ pub struct FriProver<'a, B: FriOps + MerkleOps, MC: MerkleChannel> {
}
```
-Here, `FriOps` is a trait which implements functionality for the commit phase of FRI, such as folding the evaluations, and `MerkleOps` is the trait used in the [Merkle commitment scheme](../vcs/hash_functions.mdx#merkleops-trait). The generic `B` refers to a specific backend, for example either `CpuBackend` or `SimdBackend`, which implements the `FriOps` and `MerkleOps` traits.
+Here, `FriOps` is a trait which implements functionality for the commit phase of FRI, such as folding the evaluations, and `MerkleOps` is the trait used in the [Merkle commitment scheme](/learn/S-two-book/how-it-works/vcs/hash_functions#merkleops-trait). The generic `B` refers to a specific backend, for example either `CpuBackend` or `SimdBackend`, which implements the `FriOps` and `MerkleOps` traits.
-We described [FRI](./overview.mdx#protocol) as an interactive protocol between the prover and the verifier. To make the protocol non-interactive, we use the [Fiat-Shamir transform](https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic), where both the prover and verifier use a channel to hash the transcript and generate random challenges. These functionalities are defined by the `MerkleChannel` trait. In the non-interactive protocol, oracles to functions are replaced by Merkle commitments to their evaluations, and queries to the oracle by the verifier are replaced by Merkle decommitments, which the prover appends to the channel.
+We described [FRI](/learn/S-two-book/how-it-works/circle-fri/overview#protocol) as an interactive protocol between the prover and the verifier. To make the protocol non-interactive, we use the [Fiat-Shamir transform](https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic), where both the prover and verifier use a channel to hash the transcript and generate random challenges. These functionalities are defined by the `MerkleChannel` trait. In the non-interactive protocol, oracles to functions are replaced by Merkle commitments to their evaluations, and queries to the oracle by the verifier are replaced by Merkle decommitments, which the prover appends to the channel.
The `FRIProver` struct is composed of several layers. Each layer contains a Merkle tree that commits to the evaluations of a polynomial for that layer. The main components are:
@@ -56,31 +56,36 @@ The `FRIProver` struct is composed of several layers. Each layer contains a Merk
**• `first_layer`**: The first layer of the FRI protocol, containing the commitment to the initial set of columns.
- ```rust,no_run,noplayground
```rust
-// Source: https://raw.githubusercontent.com/starkware-libs/stwo/0790eba46b8af5697083d84fb75bd34b08a0b31f/crates/stwo/src/prover/fri.rs#L278-L281
-// (External snippet not embedded)
-```
+ struct FriFirstLayerProver<'a, B: FriOps + MerkleOps<H>, H: MerkleHasher> {
+     columns: &'a [SecureEvaluation<B, BitReversedOrder>],
+     merkle_tree: MerkleProver<B, H>,
+ }
```
For example, the `columns` are the array of evaluations $[h_{0}, h_{1}, h_{2}]$, and `merkle_tree` commits to $h_{0} \in F^{H_0}$, $h_{1} \in F^{H_1}$, and $h_{2} \in F^{H_2}$ using a single Merkle tree.
**• `inner_layers`**: The inner layers of FRI, each representing a folding round and its corresponding Merkle commitment.
- ```rust,no_run,noplayground
```rust
-// Source: https://raw.githubusercontent.com/starkware-libs/stwo/0790eba46b8af5697083d84fb75bd34b08a0b31f/crates/stwo/src/prover/fri.rs#L363-L366
-// (External snippet not embedded)
-```
+ struct FriInnerLayerProver<B: FriOps + MerkleOps<H>, H: MerkleHasher> {
+     evaluation: LineEvaluation<B>,
+     merkle_tree: MerkleProver<B, H>,
+ }
```
In our example, there are two FRI inner layers: the first contains evaluations of $g_0$ over the "line" domain $F^{I_0}$ with a Merkle commitment to $g_0$, and the second contains evaluations of $g_1$ over $F^{I_1}$ with its Merkle commitment.
**• `last_layer_poly`**: The last layer polynomial, which the prover sends in clear to the verifier.
- ```rust,no_run,noplayground
-// Source: https://raw.githubusercontent.com/starkware-libs/stwo/0790eba46b8af5697083d84fb75bd34b08a0b31f/crates/stwo/src/core/poly/line.rs#L108-L116
-// (External snippet not embedded)
-```
+ ```rust
+ pub struct LinePoly {
+     /// Coefficients of the polynomial in [`line_ifft`] algorithm's basis.
+     ///
+     /// The coefficients are stored in bit-reversed order.
+     #[allow(rustdoc::private_intra_doc_links)]
+     coeffs: Vec<SecureField>,
+     /// The number of coefficients stored as `log2(len(coeffs))`.
+     log_size: u32,
+ }
+ ```
For our example, this is the polynomial $g_2$ in coefficient representation.
@@ -122,10 +127,10 @@ It takes the following inputs:
- `channel`: The Merkle channel used for the Fiat-Shamir transform to generate random challenges and maintain the transcript.
- `config`: The `FriConfig` containing protocol parameters.
- `columns`: The array of evaluations of the functions. For our example, this will contain $[h_0, h_1, h_2]$ over their respective domains $[H_0, H_1, H_2]$.
-- `twiddles`: The [precomputed twiddle](../circle-fft/twiddles.mdx#twiddle-tree) values needed for folding.
+- `twiddles`: The [precomputed twiddle](/learn/S-two-book/how-it-works/circle-fft/twiddles#twiddle-tree) values needed for folding.
-The commitment phase consists of the following steps, corresponding to the protocol rounds described in the [overview](./overview.mdx#protocol):
+The commitment phase consists of the following steps, corresponding to the protocol rounds described in the [overview](/learn/S-two-book/how-it-works/circle-fri/overview#protocol):
1. **First Layer Commitment** (`commit_first_layer`):
- Takes the input functions $[h_0, h_1, h_2]$ and creates a Merkle commitment to all of them using a single Merkle tree.
@@ -237,4 +242,4 @@ The function `decommit_on_queries` generates the `FriProof` struct by decommitti
2. **Process Inner Layers with Folding**: We process the decommitment layer by layer. For our example, this proceeds as follows:
- For the first inner layer: Provide the evaluations $g_0(x_0)$ and $g_0(-x_0)$ along with their Merkle decommitments.
- For the second inner layer: Provide the evaluations $g_1(x_1)$ and $g_1(-x_1)$ along with their Merkle decommitments.
-3. **Assemble Final Proof**: Combines all layer decommitments with the last layer polynomial $g_2$ into `FriProof`.
\ No newline at end of file
+3. **Assemble Final Proof**: Combines all layer decommitments with the last layer polynomial $g_2$ into `FriProof`.
diff --git a/learn/S-two-book/how-it-works/circle-fri/index.mdx b/learn/S-two-book/how-it-works/circle-fri/index.mdx
index 1d6cb6b87a..6b5a40fb0d 100644
--- a/learn/S-two-book/how-it-works/circle-fri/index.mdx
+++ b/learn/S-two-book/how-it-works/circle-fri/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Circle FRI"
+title: "Introduction"
---
@@ -7,7 +7,6 @@ title: "Circle FRI"
This section is organized as follows:
-- **[Technical Overview](./overview)**: Explains the mathematical foundations and protocol steps of Circle FRI, including a multi-table example and security analysis.
-- **[FRI Prover](./fri_prover)**: Describes the implementation of the prover, including protocol configuration, commitment, and decommitment phases.
-- **[FRI Verifier](./fri_verifier)**: Details the verifier's implementation, covering query generation and verification logic.
-
+- **[Technical Overview](/learn/S-two-book/how-it-works/circle-fri/overview)**: Explains the mathematical foundations and protocol steps of Circle FRI, including a multi-table example and security analysis.
+- **[FRI Prover](/learn/S-two-book/how-it-works/circle-fri/fri_prover)**: Describes the implementation of the prover, including protocol configuration, commitment, and decommitment phases.
+- **[FRI Verifier](/learn/S-two-book/how-it-works/circle-fri/fri_verifier)**: Details the verifier's implementation, covering query generation and verification logic.
diff --git a/learn/S-two-book/how-it-works/circle-fri/overview.mdx b/learn/S-two-book/how-it-works/circle-fri/overview.mdx
index 30b109fe2b..5ee484f8e1 100644
--- a/learn/S-two-book/how-it-works/circle-fri/overview.mdx
+++ b/learn/S-two-book/how-it-works/circle-fri/overview.mdx
@@ -11,7 +11,7 @@ This section does not provide a detailed analysis of how FRI works, but instead
## Introduction
-Circle FRI tests the proximity of a function $f \in F^D$ over a [canonical coset](../circle-group.mdx#canonic-coset) $D$ of size $|D|=2^{n + \beta}$ to some polynomial $p(x, y)$ from the [polynomial space](../circle-fft/basis.mdx#dimension-gap) $L'_n(F)$. The function $f \in F^D$ is represented as a vector of evaluations over the domain $D$.
+Circle FRI tests the proximity of a function $f \in F^D$ over a [canonical coset](/learn/S-two-book/how-it-works/circle-group#canonic-coset) $D$ of size $|D|=2^{n + \beta}$ to some polynomial $p(x, y)$ from the [polynomial space](/learn/S-two-book/how-it-works/circle-fft/basis#dimension-gap) $L'_n(F)$. The function $f \in F^D$ is represented as a vector of evaluations over the domain $D$.
Here:
- $F$ is a field extension of $\mathsf{M31}$
@@ -40,13 +40,23 @@ H_0 \xrightarrow{\pi} H_1 \xrightarrow{\pi} H_2
$$
Suppose for the sake of our example, for $i$ in $\{0,1,2\}$ the domain $H_i = D_{3-i + \beta}$, where $D_{3-i + \beta}$ is a canonical coset of size $2^{3-i + \beta}$. For each $i$ in $\{0,1,2\}$, we need to check that the functions $h_{i} \in F^{H_i}$ are "close" to the polynomial space $L'_{3-i}(F)$. For example, for $i=0$, we need to check that the function $h_{0}$ defined over $H_0 = D_{3 + \beta}$ is close to the polynomial space $L'_{3}(F)$.
-Circle FRI follows a similar divide-and-conquer strategy as the [Circle FFT](../circle-fft/algorithm) algorithm. The prover recursively decomposes the functions and folds the odd and even parts using randomness sent by the verifier. This reduction continues until the proximity test of the folded function to the polynomial space is small enough for the verifier to check directly; in this case, the prover sends the folded function directly to the verifier.
+Circle FRI follows a similar divide-and-conquer strategy as the [Circle FFT](/learn/S-two-book/how-it-works/circle-fft/algorithm) algorithm. The prover recursively decomposes the functions and folds the odd and even parts using randomness sent by the verifier. This reduction continues until the proximity test of the folded function to the polynomial space is small enough for the verifier to check directly; in this case, the prover sends the folded function directly to the verifier.
We keep reducing until we obtain a folded function evaluated over the final domain $I_r$. For the sake of our example, consider $r=2$. Then the sequence of domains for circle FRI in the multi-domain setting is as follows:
-
- 
-
-*Figure 1: Sequence of domains for Circle FRI*
+
+

+
Figure 1: Sequence of domains for Circle FRI
As shown in the above diagram, we use two different maps, as in the circle FFT algorithm: the projection map $\pi_x$ and the squaring map $\pi$. Here, $I_i$ is the "line" domain; for every $i$ in $\{0,1,2\}$, $I_i = S_{3-i}$, where $S_{3-i}$ is the "line" domain of size $|S_{3-i}| = |D_{3-i}| / 2 = 2^{2-i+\beta}$.
@@ -64,7 +74,7 @@ Using circle FRI, the prover will prove proximity of functions $h_{0}$ to polyno
$$
h_{0,0}(x) = \frac{h_0(x,y)+h_0(x,-y)}{2} \quad, \quad h_{0,1}(x) = \frac{h_0(x,y)-h_0(x,-y)}{2y}.
$$
- Note that this decomposition and the process of finding evaluations of the decomposed functions is very similar to the circle FFT algorithm. Now the prover will fold the evaluations of $h_{0,0}$ and $h_{0,1}$ over the domain $I_0$ using the randomness $\lambda_0 \xleftarrow{\$} F$ from the verifier as follows:
+ Note that this decomposition and the process of finding evaluations of the decomposed functions is very similar to the circle FFT algorithm. Now the prover will fold the evaluations of $h_{0,0}$ and $h_{0,1}$ over the domain $I_0$ using the randomness $\lambda_0 \in F$ from the verifier as follows:
$$
g_0(x) = {\lambda_0}^2 \cdot (h_{0,0}(x) + \lambda_0 \cdot h_{0, 1}(x))
$$
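This first decompose-and-fold step can be sketched concretely (our own helper names, working over $\mathsf{M31}$ for simplicity, whereas the real protocol folds over the extension field $F$):

```rust
const P: u64 = (1 << 31) - 1; // M31, stand-in for the extension field F

// Modular exponentiation, used for Fermat inverses: a^(P-2) = a^(-1) mod P.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 {
    pow_mod(a, P - 2)
}

// Split h0 at the pair (x, y), (x, -y) into even/odd parts:
// h00(x) = (h0(x,y) + h0(x,-y)) / 2,  h01(x) = (h0(x,y) - h0(x,-y)) / (2y).
fn decompose(h0_pos: u64, h0_neg: u64, y: u64) -> (u64, u64) {
    let h00 = (h0_pos + h0_neg) % P * inv(2) % P;
    let h01 = (h0_pos + P - h0_neg) % P * inv(2 * y % P) % P;
    (h00, h01)
}

// Fold with the verifier's challenge: g0(x) = lambda^2 * (h00(x) + lambda * h01(x)).
fn fold(h00: u64, h01: u64, lambda: u64) -> u64 {
    lambda * lambda % P * ((h00 + lambda * h01 % P) % P) % P
}

fn main() {
    let (y, h0_pos, h0_neg, lambda) = (12345, 777, 31337, 42);
    let (h00, h01) = decompose(h0_pos, h0_neg, y);
    // The even/odd parts reconstruct both original evaluations:
    // h0(x, y) = h00 + y*h01 and h0(x, -y) = h00 - y*h01.
    assert_eq!((h00 + y * h01 % P) % P, h0_pos);
    assert_eq!((h00 + P - y * h01 % P) % P, h0_neg);
    let _g0_at_x = fold(h00, h01, lambda);
}
```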
@@ -77,7 +87,7 @@ Using circle FRI, the prover will prove proximity of functions $h_{0}$ to polyno
$$
g_{0,0}(2x^2 - 1) = \frac{g_0(x)+g_0(-x)}{2} \quad, \quad g_{0,1}(2x^2 - 1) = \frac{g_0(x)-g_0(-x)}{2x}.
$$
- Now the prover will fold the evaluations of $h_{1,0}$, $h_{1,1}$, $g_{0,0}$, and $g_{0,1}$ over the domain $I_1$ using the randomness $\lambda_1 \xleftarrow{\$} F$ from the verifier as follows:
+ Now the prover will fold the evaluations of $h_{1,0}$, $h_{1,1}$, $g_{0,0}$, and $g_{0,1}$ over the domain $I_1$ using the randomness $\lambda_1 \in F$ from the verifier as follows:
$$
g_1(x) = g_{0,0}(x) + \lambda_1 \cdot g_{0, 1}(x) + \lambda_1^2 \cdot (h_{1,0}(x) + \lambda_1 \cdot h_{1,1}(x))
$$
@@ -88,7 +98,7 @@ Using circle FRI, the prover will prove proximity of functions $h_{0}$ to polyno
$$
This is the final round, so the prover will send $g_2(x) \in F^{I_2}$ to the verifier in plain.
- Let us describe the protocol for the general case. In each round $i = \{0, \dots, r\}$, the prover has previous evaluations $g_{i-1} \in F^{I_{i-1}}$ (for round $i = 0$ we take $g_{-1} = 0$). The verifier sends $\lambda_i \xleftarrow{\$} F$ and the prover commits to $g_i \in F^{I_i}$, defined as follows:
+ Let us describe the protocol for the general case. In each round $i \in \{0, \dots, r\}$, the prover has previous evaluations $g_{i-1} \in F^{I_{i-1}}$ (for round $i = 0$ we take $g_{-1} = 0$). The verifier sends $\lambda_i \in F$ and the prover commits to $g_i \in F^{I_i}$, defined as follows:
$$
g_i = g_{i-1,0} + \lambda_i \cdot g_{i-1,1} + \lambda_i^2 \cdot (h_{i,0} + \lambda_i \cdot h_{i,1})
@@ -123,4 +133,3 @@ Using circle FRI, the prover will prove proximity of functions $h_{0}$ to polyno
In this section, we analyze the security of the FRI protocol at a high level. There are roughly two ways a prover can cheat:
- The functions $h_{i} \in F^{H_i}$ are "far" from the polynomial space $L'_{3-i}(F)$ but somehow the prover gets lucky and ends up with $g_2 \in \mathcal{L}_0$. The [proximity gaps paper](https://eprint.iacr.org/2020/654.pdf) shows that this happens with negligible probability. That is, if even one of the functions $h_{i} \in F^{H_i}$ is "far" from the polynomial space $L'_{3-i}(F)$, then the function $g_2$ will be "far" from $\mathcal{L}_0$ with high probability.
- Now suppose $h_{i} \in F^{H_i}$ are "far" from the polynomial space $L'_{3-i}(F)$, then the function $g_2$ will be "far" from $\mathcal{L}_0$, but to cheat the verifier, the prover cheats in the folding rounds and sends some valid $g_2 \in \mathcal{L}_0$. Now the verifier must ensure that the prover has performed the folding correctly in each round using the oracles sent by the prover. This is exactly what the verifier checks in the query phase. From the "Toy Problem Conjecture" ([ethSTARK, Section 5.9](https://eprint.iacr.org/2021/582.pdf)), each query done by the verifier gives $\beta$ (i.e. log of blowup $B$) bits of security. Thus, for $s$ queries, the security is $s \cdot \beta$ bits.
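Under this heuristic the security budget is simple arithmetic; a hypothetical helper (our own names, though its shape matches the `pow_bits + fri_config.security_bits()` computation that appears later in the PCS configuration):

```rust
// Heuristic FRI query-phase security: each of the s verifier queries
// contributes beta = log2(blowup) bits, plus any proof-of-work grinding bits.
fn security_bits(n_queries: u32, log_blowup: u32, pow_bits: u32) -> u32 {
    n_queries * log_blowup + pow_bits
}

fn main() {
    // e.g. 24 queries at blowup 2^4 with 20 grinding bits gives ~116 bits.
    assert_eq!(security_bits(24, 4, 20), 116);
    // Without grinding, the same target needs far more queries.
    assert_eq!(security_bits(29, 4, 0), 116);
}
```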
-
diff --git a/learn/S-two-book/how-it-works/circle-group.mdx b/learn/S-two-book/how-it-works/circle-group.mdx
index 877c114b5d..c372453c61 100644
--- a/learn/S-two-book/how-it-works/circle-group.mdx
+++ b/learn/S-two-book/how-it-works/circle-group.mdx
@@ -60,7 +60,7 @@ In S-two implementation, the generator $g$ of the group $C(\textsf{M31})$ is def
$$g = (2, 1268011823)$$
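As a quick sanity check (our own sketch, not library code), repeatedly doubling $g$ with the circle doubling law $2 \cdot (x, y) = (2x^2 - 1, 2xy)$ should reach the identity $(1, 0)$ after exactly 31 steps, since $|C(\textsf{M31})| = p + 1 = 2^{31}$ and $g$ is a generator:

```rust
const P: u64 = (1 << 31) - 1;

// Point doubling on the circle x^2 + y^2 = 1 over M31:
// (x, y) + (x, y) = (x^2 - y^2, 2xy) = (2x^2 - 1, 2xy).
fn double(p: (u64, u64)) -> (u64, u64) {
    let (x, y) = p;
    ((2 * x % P * x % P + P - 1) % P, 2 * x % P * y % P)
}

fn main() {
    let mut g = (2, 1268011823);
    // g lies on the unit circle ...
    assert_eq!((g.0 * g.0 + g.1 * g.1) % P, 1);
    // ... and has order 2^31: 31 doublings give the identity (1, 0).
    for _ in 0..31 {
        g = double(g);
    }
    assert_eq!(g, (1, 0));
}
```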
Subgroups of $C(\textsf{M31})$ of size $2^n$ can be generated using the following function:
-```rust,no_run,noplayground
+```rust
pub fn subgroup_gen(n: u32) -> CirclePoint<M31>
{
assert!(n <= M31_CIRCLE_LOG_ORDER); // M31_CIRCLE_LOG_ORDER = 31
let s = 1 << (M31_CIRCLE_LOG_ORDER - n);
diff --git a/learn/S-two-book/how-it-works/circle-polynomials/index.mdx b/learn/S-two-book/how-it-works/circle-polynomials/index.mdx
index 31b19efe21..e81e971a43 100644
--- a/learn/S-two-book/how-it-works/circle-polynomials/index.mdx
+++ b/learn/S-two-book/how-it-works/circle-polynomials/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Circle Polynomials"
+title: "Introduction"
---
@@ -7,6 +7,6 @@ title: "Circle Polynomials"
This section is organized as follows:
-- **[Columns](./columns)**: How computation trace data is stored in columns, including both base field and secure field representations.
-- **[Circle Evaluations and Polynomials](./evals-and-poly)**: Point-value and coefficient representations of polynomials, their mathematical foundations, and conversion between forms.
-- **[Secure Evaluations and Polynomials](./secure-evals-and-poly)**: Extension to secure field polynomials using the quartic extension QM31, with implementation details.
\ No newline at end of file
+- **[Columns](/learn/S-two-book/how-it-works/circle-polynomials/columns)**: How computation trace data is stored in columns, including both base field and secure field representations.
+- **[Circle Evaluations and Polynomials](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly)**: Point-value and coefficient representations of polynomials, their mathematical foundations, and conversion between forms.
+- **[Secure Evaluations and Polynomials](/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly)**: Extension to secure field polynomials using the quartic extension QM31, with implementation details.
diff --git a/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly.mdx b/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly.mdx
index e909bbc751..d8f1a1436d 100644
--- a/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly.mdx
+++ b/learn/S-two-book/how-it-works/circle-polynomials/secure-evals-and-poly.mdx
@@ -1,9 +1,10 @@
---
-title: "Secure Evaluations"
+title: "Secure Evaluations and Polynomials"
---
+# Secure Evaluations
-Similar to `CircleEvaluation`, `SecureEvaluation` is a [point-value representation](./evals-and-poly.mdx#point-value-representation) of a polynomial whose evaluations over the `CircleDomain` are from the `SecureField` (an alias for `QM31`). This is implemented as follows:
+Similar to `CircleEvaluation`, `SecureEvaluation` is a [point-value representation](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly#point-value-representation) of a polynomial whose evaluations over the `CircleDomain` are from the `SecureField` (an alias for `QM31`). This is implemented as follows:
```rust
pub struct SecureEvaluation<B: FieldOps<BaseField>, EvalOrder> {
@@ -13,7 +14,7 @@ pub struct SecureEvaluation, EvalOrder> {
}
```
-As discussed in the [previous subsection](./columns.mdx#secure-field-columns), each `SecureField` element is represented by four base field elements and stored in four consecutive columns. Thus the evaluations are represented as `SecureColumnByCoords`, as shown above.
+As discussed in the [previous subsection](/learn/S-two-book/how-it-works/circle-polynomials/columns#secure-field-columns), each `SecureField` element is represented by four base field elements and stored in four consecutive columns. Thus the evaluations are represented as `SecureColumnByCoords`, as shown above.
Similar to `CircleEvaluation`, we can interpolate a `SecureCirclePoly` with coefficients from the `SecureField` as shown below:
@@ -34,12 +35,16 @@ impl SecureEvaluation {
# Secure Circle Polynomials
-Similar to `CirclePoly`, `SecureCirclePoly` is a [coefficient representation](./evals-and-poly.mdx#coefficient-representation) of a polynomial whose coefficients are from the `SecureField`. As discussed in the [earlier section](./evals-and-poly.mdx#eq-circle-poly), we can define a circle polynomial as follows:
+Similar to `CirclePoly`, `SecureCirclePoly` is a [coefficient representation](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly#coefficient-representation) of a polynomial whose coefficients are from the `SecureField`. As discussed in the [earlier section](/learn/S-two-book/how-it-works/circle-polynomials/evals-and-poly#eq-circle-poly), we can define a circle polynomial as follows:
+
$$p(x, y) = \sum_{j=0}^{2^n -1} v_j \cdot y^{j_0} \cdot x^{j_1} \cdot \pi(x)^{j_2} \cdot \pi^2(x)^{j_3} \cdots \pi^{n-2}(x)^{j_{n-1}}$$
+
Here, $v_j$ are the coefficients from `SecureField` (i.e. $\mathsf{QM31}$). We can represent the coefficient $v_j$ using four elements from the `BaseField` (i.e. $\mathsf{M31}$) as follows:
+
$$v_j = (a_j + i \cdot b_j) + (c_j + i \cdot d_j) \cdot u$$
where $a_j, b_j, c_j, d_j \in \mathsf{M31}$. Substituting the above representation in the equation of $p(x, y)$ we get:
+
$$p(x, y) = \sum_{j=0}^{2^n -1} \Big(a_j + i \cdot b_j + u \cdot c_j + i \cdot u \cdot d_j \Big) \cdot y^{j_0} \cdot x^{j_1} \cdot \pi(x)^{j_2} \cdot \pi^2(x)^{j_3} \cdots \pi^{n-2}(x)^{j_{n-1}}$$
$$p(x, y) = \sum_{j=0}^{2^n -1} a_j \cdot y^{j_0} \cdot x^{j_1} \cdot \pi(x)^{j_2} \cdots \pi^{n-2}(x)^{j_{n-1}} \\ + \\ i \cdot \sum_{j=0}^{2^n -1} b_j \cdot y^{j_0} \cdot x^{j_1} \cdot \pi(x)^{j_2} \cdots \pi^{n-2}(x)^{j_{n-1}} \\ + \\$$
@@ -71,4 +76,4 @@ Similar to `CirclePoly`, we can evaluate the `SecureCirclePoly` at points on the
}
```
-In the next section, we will see how the `interpolate` and `evaluate` functions convert between the two polynomial representations using Circle FFT. As you may have noticed, the twiddles are precomputed for efficiency, and we will explore this in the next section on Circle FFT.
\ No newline at end of file
+In the next section, we will see how the `interpolate` and `evaluate` functions convert between the two polynomial representations using Circle FFT. As you may have noticed, the twiddles are precomputed for efficiency, and we will explore this in the next section on Circle FFT.
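The coordinate-wise view above can be made concrete with a small sketch of $\mathsf{QM31}$ arithmetic, assuming the tower $\mathsf{CM31} = \mathsf{M31}[i]/(i^2 + 1)$ and $\mathsf{QM31} = \mathsf{CM31}[u]/(u^2 - 2 - i)$ used by S-two (the helper names below are ours, not the library's):

```rust
const P: u64 = (1 << 31) - 1;

// An element a + i*b of CM31 = M31[i]/(i^2 + 1).
type Cm31 = (u64, u64);

fn cadd(x: Cm31, y: Cm31) -> Cm31 {
    ((x.0 + y.0) % P, (x.1 + y.1) % P)
}

fn cmul(x: Cm31, y: Cm31) -> Cm31 {
    // (a + ib)(c + id) = (ac - bd) + i(ad + bc)
    (
        (x.0 * y.0 % P + P - x.1 * y.1 % P) % P,
        (x.0 * y.1 % P + x.1 * y.0 % P) % P,
    )
}

// An element (a + ib) + (c + id)*u of QM31, with u^2 = 2 + i.
type Qm31 = (Cm31, Cm31);

fn qmul(x: Qm31, y: Qm31) -> Qm31 {
    // (x0 + x1 u)(y0 + y1 u) = x0 y0 + (2 + i) x1 y1 + (x0 y1 + x1 y0) u
    let u_squared = (2, 1);
    (
        cadd(cmul(x.0, y.0), cmul(u_squared, cmul(x.1, y.1))),
        cadd(cmul(x.0, y.1), cmul(x.1, y.0)),
    )
}

fn main() {
    let one: Qm31 = ((1, 0), (0, 0));
    let a: Qm31 = ((3, 7), (11, 5));
    let b: Qm31 = ((123, 456), (789, 1011));
    let c: Qm31 = ((2, 0), (0, 1));
    assert_eq!(qmul(a, one), a);
    // Multiplication is associative, as in any field.
    assert_eq!(qmul(qmul(a, b), c), qmul(a, qmul(b, c)));
}
```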
diff --git a/learn/S-two-book/how-it-works/index.mdx b/learn/S-two-book/how-it-works/index.mdx
index 5effe6f5af..82466378e4 100644
--- a/learn/S-two-book/how-it-works/index.mdx
+++ b/learn/S-two-book/how-it-works/index.mdx
@@ -1,5 +1,5 @@
---
-title: "S-two: Under the Hood"
+title: "Introduction"
---
@@ -7,13 +7,13 @@ title: "S-two: Under the Hood"
The following topics are covered:
-- [**Mersenne Primes**](./mersenne-prime): Introduction to the Mersenne31 prime field used in S-two for efficient arithmetic.
-- [**Circle Group**](./circle-group): Explains the algebraic structure underlying FFT and polynomial operations.
-- [**Circle Polynomials**](./circle-polynomials/index): Details the representation and evaluation of polynomials in the circle group.
-- [**Circle FFT**](./circle-fft/index): Describes the fast Fourier transform algorithm adapted for the circle group.
-- [**Vector Commitment Scheme (VCS)**](./vcs/index): Covers the use of Merkle trees for committing to vectors and enabling efficient proofs of inclusion.
-- [**AIR to Composition Polynomial**](./air/index): Shows how algebraic constraints are encoded as polynomials for proof generation.
-- [**Circle FRI**](./circle-fri/index): Explains the FRI protocol for low-degree testing of polynomials over the circle group.
-- [**Polynomial Commitment Scheme (PCS)**](./pcs/index): Describes the protocol for committing to and opening polynomials with soundness guarantees.
-- [**Proof Generation and Verification**](./stark_proof/index): Walks through the process of generating and verifying a STARK proof in S-two.
-- [**Lookups**](./lookups): Discusses lookup arguments and their implementation in S-two for efficient constraint checking.
+- [**Mersenne Primes**](/learn/S-two-book/how-it-works/mersenne-prime): Introduction to the Mersenne31 prime field used in S-two for efficient arithmetic.
+- [**Circle Group**](/learn/S-two-book/how-it-works/circle-group): Explains the algebraic structure underlying FFT and polynomial operations.
+- [**Circle Polynomials**](/learn/S-two-book/how-it-works/circle-polynomials/index): Details the representation and evaluation of polynomials in the circle group.
+- [**Circle FFT**](/learn/S-two-book/how-it-works/circle-fft/index): Describes the fast Fourier transform algorithm adapted for the circle group.
+- [**Vector Commitment Scheme**](/learn/S-two-book/how-it-works/vcs/index): Covers the use of Merkle trees for committing to vectors and enabling efficient proofs of inclusion.
+- [**AIR to Composition Polynomial**](/learn/S-two-book/how-it-works/air/index): Shows how algebraic constraints are encoded as polynomials for proof generation.
+- [**Circle FRI**](/learn/S-two-book/how-it-works/circle-fri/index): Explains the FRI protocol for low-degree testing of polynomials over the circle group.
+- [**Polynomial Commitment Scheme**](/learn/S-two-book/how-it-works/pcs/index): Describes the protocol for committing to and opening polynomials with soundness guarantees.
+- [**Proof Generation and Verification**](/learn/S-two-book/how-it-works/stark_proof/index): Walks through the process of generating and verifying a STARK proof in S-two.
+- [**Lookups**](/learn/S-two-book/how-it-works/lookups): Discusses lookup arguments and their implementation in S-two for efficient constraint checking.
diff --git a/learn/S-two-book/how-it-works/lookups.mdx b/learn/S-two-book/how-it-works/lookups.mdx
index bdc8c04c5b..f55ca670b0 100644
--- a/learn/S-two-book/how-it-works/lookups.mdx
+++ b/learn/S-two-book/how-it-works/lookups.mdx
@@ -85,7 +85,7 @@ To create a constraint over the LogUp columns, S-two modifies the LogUp columns
*Figure 4: Cumulative sum columns*
-A constraint is created using the values of two rows of the cumulative sum LogUp column. For example, as in [Figure 5](#fig-lookup-implementation-5), we can create a constraint by subtracting $row_1$ from $row_2$ and checking that it equals the LogUp fraction created using values $a_2$ and $m_2$:
+A constraint is created using the values of two rows of the cumulative sum LogUp column. For example, as in **Figure 5**, we can create a constraint by subtracting $row_1$ from $row_2$ and checking that it equals the LogUp fraction created using values $a_2$ and $m_2$:
$$
\frac{m_2}{X - a_2} = row\_2 - row\_1
@@ -109,7 +109,7 @@ Where $\text{avg}$ is a witness value provided by the prover.
*Figure 6: Trick to not create a separate constraint for the first row*
-The right column in [Figure 6](#fig-lookup-implementation-6) is the final form of the LogUp column that S-two commits to as part of the interaction trace.
+The right column in **Figure 6** is the final form of the LogUp column that S-two commits to as part of the interaction trace.
## Batching
diff --git a/learn/S-two-book/how-it-works/mersenne-prime.mdx b/learn/S-two-book/how-it-works/mersenne-prime.mdx
index 99787c6fed..152ba6e4e8 100644
--- a/learn/S-two-book/how-it-works/mersenne-prime.mdx
+++ b/learn/S-two-book/how-it-works/mersenne-prime.mdx
@@ -17,15 +17,28 @@ The key advantage is extremely cheap modular reduction after a 31-bit multiplica
Suppose $x = a \cdot b$, then we can decompose $x$ into two 31-bit values $q$ and $s$, such that $x = 2^{31} \cdot q + s$, as shown in the following figure.
-

-
-*Figure 1: Product decomposition*
+
+

+
Figure 1: Product decomposition
To perform modular reduction, we start with:
+
$$x \equiv (2^{31} \cdot q + s) \quad mod \quad (2^{31} - 1)$$
+
Substituting $2^{31} \equiv 1 \mod (2^{31} - 1)$ gives:
+
$$x \equiv (q + s) \quad mod \quad (2^{31} - 1)$$
Since $q$ and $s$ are both 31-bit values, they can be directly represented as field elements. Consequently, modular reduction is performed with a single field addition. This makes arithmetic over Mersenne primes exceptionally fast, making them an ideal choice for our STARK protocol. Field multiplication with this reduction is implemented as:
@@ -58,8 +71,8 @@ impl_field!(M31, P);
Since we work over extensions of $\textsf{M31}$, it has the type alias `BaseField`, as follows:
-```rust,no_run,noplayground
-{{#webinclude https://raw.githubusercontent.com/starkware-libs/stwo/0790eba46b8af5697083d84fb75bd34b08a0b31f/crates/stwo/src/core/fields/m31.rs 33}}
+```rust
+pub type BaseField = M31;
```
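The cheap reduction described in this section can also be sketched directly (our own toy version, not S-two's optimized implementation):

```rust
const P: u64 = (1 << 31) - 1;

// Multiply two M31 elements and reduce the (at most 62-bit) product using
// 2^31 ≡ 1 (mod 2^31 - 1): split into high/low 31-bit words and add them.
fn mul_m31(a: u64, b: u64) -> u64 {
    let x = a * b; // at most 62 bits, no overflow in u64
    let high = x >> 31; // the high 31-bit word
    let low = x & P; // the low 31 bits
    let sum = high + low; // x ≡ high + low (mod P), and sum < 2P
    if sum >= P { sum - P } else { sum }
}

fn main() {
    // Agrees with the naive remainder on a few samples, including the
    // largest possible inputs.
    for &(a, b) in &[(3, 5), (P - 1, P - 1), (123_456_789, 987_654_321)] {
        assert_eq!(mul_m31(a, b), a * b % P);
    }
}
```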
# Extensions of Mersenne Prime Field
diff --git a/learn/S-two-book/how-it-works/pcs/index.mdx b/learn/S-two-book/how-it-works/pcs/index.mdx
index b374c2fcf9..0ecd491a71 100644
--- a/learn/S-two-book/how-it-works/pcs/index.mdx
+++ b/learn/S-two-book/how-it-works/pcs/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Polynomial Commitment Scheme"
+title: "Introduction"
---
@@ -7,6 +7,6 @@ title: "Polynomial Commitment Scheme"
This section is organized as follows:
-- [**Overview**](./overview): Describes the polynomial commitment scheme of S-two.
-- [**PCS Prover**](./prover): Details the implementation of the prover for the polynomial commitment scheme, including commitment and opening protocol.
-- [**PCS Verifier**](./verifier): Describes the verifier implementation for checking commitments and evaluation proofs.
\ No newline at end of file
+- [**Overview**](/learn/S-two-book/how-it-works/pcs/overview): Describes the polynomial commitment scheme of S-two.
+- [**PCS Prover**](/learn/S-two-book/how-it-works/pcs/prover): Details the implementation of the prover for the polynomial commitment scheme, including commitment and opening protocol.
+- [**PCS Verifier**](/learn/S-two-book/how-it-works/pcs/verifier): Describes the verifier implementation for checking commitments and evaluation proofs.
\ No newline at end of file
diff --git a/learn/S-two-book/how-it-works/pcs/overview.mdx b/learn/S-two-book/how-it-works/pcs/overview.mdx
index 885ea8aacf..2f695411f7 100644
--- a/learn/S-two-book/how-it-works/pcs/overview.mdx
+++ b/learn/S-two-book/how-it-works/pcs/overview.mdx
@@ -50,7 +50,7 @@ One key property of a polynomial commitment scheme is binding. Informally, the b
## Out of Domain Sampling
-Out-of-domain sampling relates to the notion of "closeness" described in the [FRI section](../circle-fri/overview.mdx#introduction). Informally, the FRI protocol tests whether a function provided by the prover is "close" to some bounded degree polynomial. There are two notions of "closeness":
+Out-of-domain sampling relates to the notion of "closeness" described in the [FRI section](/learn/S-two-book/how-it-works/circle-fri/overview#introduction). Informally, the FRI protocol tests whether a function provided by the prover is "close" to some bounded degree polynomial. There are two notions of "closeness":
1. **Unique Decoding Regime**: We operate in this regime if there is at most a single polynomial which is "close" to the function provided by the prover. If the function is "close" to a single polynomial, then we can infer that the function represents that _unique_ polynomial.
2. **List Decoding Regime**: We operate in this regime if there is a list of polynomials which are "close" to the function provided by the prover. In this case, since the function can be "close" to a _list_ of polynomials, we cannot be sure that it represents a unique polynomial.
@@ -62,7 +62,7 @@ In practice, we usually operate in the list decoding regime. So there can be
To bind the prover to a unique polynomial from the list, we ask the prover to open the polynomial at an out-of-domain point. This is also referred to as _Domain Extension for Eliminating Pretenders_ (or the _DEEP method_). This is the informal motivation for out-of-domain sampling. For more details, please refer to ["A summary on the FRI low degree test"](https://eprint.iacr.org/2022/1216.pdf).
-As we have seen in the [Security Analysis section](../circle-fri/overview.mdx#security-analysis), we can improve security by increasing the number of verifier queries. But this will lead to more prover work, because the prover will have to send a Merkle decommitment for each verifier query and also increase the proof size. We will now see a method to increase the security of our protocol without significantly increasing the prover's work.
+As we have seen in the [Security Analysis section](/learn/S-two-book/how-it-works/circle-fri/overview#security-analysis), we can improve security by increasing the number of verifier queries. But this will lead to more prover work, because the prover will have to send a Merkle decommitment for each verifier query and also increase the proof size. We will now see a method to increase the security of our protocol without significantly increasing the prover's work.
## Proof of Work
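Grinding can be sketched as follows (our own toy version: `DefaultHasher` is a non-cryptographic stand-in, and the real channel grinds over its Fiat-Shamir transcript digest rather than a raw byte string):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_with_nonce(transcript: &[u8], nonce: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (transcript, nonce).hash(&mut hasher);
    hasher.finish()
}

// The prover searches for a nonce whose hash has `bits` leading zero bits:
// about 2^bits expected extra hash evaluations of prover work.
fn grind(transcript: &[u8], bits: u32) -> u64 {
    (0u64..)
        .find(|&nonce| hash_with_nonce(transcript, nonce).leading_zeros() >= bits)
        .unwrap()
}

fn main() {
    let nonce = grind(b"fri-transcript", 8);
    // The verifier checks the claimed work with a single hash.
    assert!(hash_with_nonce(b"fri-transcript", nonce).leading_zeros() >= 8);
}
```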
diff --git a/learn/S-two-book/how-it-works/pcs/prover.mdx b/learn/S-two-book/how-it-works/pcs/prover.mdx
index 2285dc6574..fa54faaab4 100644
--- a/learn/S-two-book/how-it-works/pcs/prover.mdx
+++ b/learn/S-two-book/how-it-works/pcs/prover.mdx
@@ -9,7 +9,7 @@ In this section, we will see the implementation of the commitment scheme prover.
## Commitment Tree Prover
-The `CommitmentTreeProver` struct represents the data for a single Merkle tree commitment. As we have seen in the [Merkle tree section](../vcs/merkle_prover.mdx#merkle-prover), we can commit to multiple polynomials of different degrees in the same Merkle tree. It is implemented as follows:
+The `CommitmentTreeProver` struct represents the data for a single Merkle tree commitment. As we have seen in the [Merkle tree section](/learn/S-two-book/how-it-works/vcs/merkle_prover#merkle-prover), we can commit to multiple polynomials of different degrees in the same Merkle tree. It is implemented as follows:
```rust
pub struct CommitmentTreeProver<B: BackendForChannel<MC>, MC: MerkleChannel> {
@@ -22,7 +22,7 @@ pub struct CommitmentTreeProver, MC: MerkleChannel> {
Here, `pub type ColumnVec<T> = Vec<T>`. It contains the following fields:
- `polynomials`: The set of polynomials committed in a single Merkle tree.
- `evaluations`: The evaluations of these polynomials over their respective domains.
-- `commitment`: The `MerkleProver` struct as described in the [Merkle tree section](../vcs/merkle_prover.mdx#merkleprover-structure).
+- `commitment`: The `MerkleProver` struct as described in the [Merkle tree section](/learn/S-two-book/how-it-works/vcs/merkle_prover#merkleprover-structure).
It is initialized as follows:
```rust
@@ -77,7 +77,7 @@ The security of the polynomial commitment scheme is computed as:
self.pow_bits + self.fri_config.security_bits()
}
```
-- `twiddles`: This contains [precomputed twiddle factors](../circle-fft/twiddles.mdx#twiddle-tree).
+- `twiddles`: This contains [precomputed twiddle factors](/learn/S-two-book/how-it-works/circle-fft/twiddles#twiddle-tree).
Now we will see some key functions defined on the `CommitmentSchemeProver` struct.
@@ -222,16 +222,16 @@ Here is a detailed breakdown:
- The `sampled_values` are mixed into the channel, ensuring they are bound to the proof and used for subsequent randomness generation.
2. **Compute FRI Quotients**:
- - The function computes FRI quotient polynomials using `compute_fri_quotients` to open the committed polynomials at sampled points in `samples`. This follows the same quotienting process as described in the [overview section](./overview.mdx#polynomial-commitment-scheme).
+ - The function computes FRI quotient polynomials using `compute_fri_quotients` to open the committed polynomials at sampled points in `samples`. This follows the same quotienting process as described in the [overview section](/learn/S-two-book/how-it-works/pcs/overview#polynomial-commitment-scheme).
3. **FRI Commitment Phase**:
- - The FRI protocol is run on the quotient polynomials, committing to their evaluations in Merkle trees and initializing the `fri_prover`. For more details, refer to the [FRI prover section](../circle-fri/fri_prover).
+ - The FRI protocol is run on the quotient polynomials, committing to their evaluations in Merkle trees and initializing the `fri_prover`. For more details, refer to the [FRI prover section](/learn/S-two-book/how-it-works/circle-fri/fri_prover).
4. **Proof of Work**:
- A proof-of-work step is performed, with the result mixed into the channel.
5. **FRI Decommitment Phase**:
- - The function generates random query positions using the channel and decommits the FRI layers at those positions, providing Merkle decommitments for all queried values. For more details, refer to the [FRI prover section](../circle-fri/fri_prover).
+ - The function generates random query positions using the channel and decommits the FRI layers at those positions, providing Merkle decommitments for all queried values. For more details, refer to the [FRI prover section](/learn/S-two-book/how-it-works/circle-fri/fri_prover).
6. **Decommitment of Committed Trees**:
- For each commitment tree, the function decommits the Merkle tree at the FRI query positions, providing the queried values and authentication paths.
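The proof-of-work step (4) can be pictured as a simple grinding loop: search for a nonce whose hash against the current channel state has enough leading zero bits. A minimal sketch, using Rust's `DefaultHasher` as a stand-in for the channel's hash (illustrative only, not S-two's actual channel API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Mix a candidate nonce into the (toy) channel state.
fn mix(state: u64, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (state, nonce).hash(&mut h);
    h.finish()
}

// Grind for the first nonce whose mixed hash has at least
// `pow_bits` leading zero bits; the nonce goes into the proof.
fn grind(state: u64, pow_bits: u32) -> u64 {
    (0u64..)
        .find(|&n| mix(state, n).leading_zeros() >= pow_bits)
        .unwrap()
}
```

The verifier repeats the single hash to check the nonce, so the grinding cost is borne entirely by the prover, which is why `security_bits` can count `pow_bits` toward the conjectured security level.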
diff --git a/learn/S-two-book/how-it-works/pcs/verifier.mdx b/learn/S-two-book/how-it-works/pcs/verifier.mdx
index e956a8b0df..d04baefc2b 100644
--- a/learn/S-two-book/how-it-works/pcs/verifier.mdx
+++ b/learn/S-two-book/how-it-works/pcs/verifier.mdx
@@ -9,7 +9,7 @@ In this section, we describe the implementation of the polynomial commitment sch
## Commitment Scheme Verifier
-The `CommitmentSchemeVerifier` struct manages the verification process for the polynomial commitment scheme. It maintains a collection of [Merkle verifiers](../vcs/merkle_verifier) (one for each commitment tree) and the protocol configuration.
+The `CommitmentSchemeVerifier` struct manages the verification process for the polynomial commitment scheme. It maintains a collection of [Merkle verifiers](/learn/S-two-book/how-it-works/vcs/merkle_verifier) (one for each commitment tree) and the protocol configuration.
```rust
pub struct CommitmentSchemeVerifier<MC: MerkleChannel> {
    pub trees: TreeVec<MerkleVerifier<MC::H>>,
diff --git a/learn/S-two-book/how-it-works/stark_proof/index.mdx b/learn/S-two-book/how-it-works/stark_proof/index.mdx
index 31a98704f7..ec854ce542 100644
--- a/learn/S-two-book/how-it-works/stark_proof/index.mdx
+++ b/learn/S-two-book/how-it-works/stark_proof/index.mdx
@@ -1,12 +1,12 @@
---
-title: "Proof Generation and Verification"
+title: "Introduction"
---
-> In this final section, we cover the implementation of the STARK prover and verifier. This section brings together all the components from previous sections and describes the complete STARK proof generation and verification algorithms. We explain the `prove` and `verify` functions used to generate and verify proofs, as introduced in the [Writing a Simple AIR section](../../air-development/writing-a-simple-air/proving-an-air).
+> In this final section, we cover the implementation of the STARK prover and verifier. This section brings together all the components from previous sections and describes the complete STARK proof generation and verification algorithms. We explain the `prove` and `verify` functions used to generate and verify proofs, as introduced in the [Writing a Simple AIR section](/learn/S-two-book/air-development/writing-a-simple-air/proving-an-air).
This section is organized as follows:
-- [**STARK Prover**](./prove): Details the implementation of the proof generation algorithm.
-- [**STARK Verifier**](./verify): Describes the verification algorithm.
+- [**STARK Prover**](/learn/S-two-book/how-it-works/stark_proof/prove): Details the implementation of the proof generation algorithm.
+- [**STARK Verifier**](/learn/S-two-book/how-it-works/stark_proof/verify): Describes the verification algorithm.
diff --git a/learn/S-two-book/how-it-works/stark_proof/prove.mdx b/learn/S-two-book/how-it-works/stark_proof/prove.mdx
index 5472d4ba6e..f55239175a 100644
--- a/learn/S-two-book/how-it-works/stark_proof/prove.mdx
+++ b/learn/S-two-book/how-it-works/stark_proof/prove.mdx
@@ -73,9 +73,9 @@ Let us go through the function in detail.
It takes the following as input:
-- `components`: A list of AIR components. For more details, refer to the [Components](../air/components) and [Prover Components](../air/prover_components) sections.
+- `components`: A list of AIR components. For more details, refer to the [Components](/learn/S-two-book/how-it-works/air/components) and [Prover Components](/learn/S-two-book/how-it-works/air/prover_components) sections.
- `channel`: A Fiat-Shamir channel for non-interactive randomness.
-- `commitment_scheme`: A `CommitmentSchemeProver` for committing to trace and composition polynomials. For more details, refer to the [PCS Prover section](../pcs/prover).
+- `commitment_scheme`: A `CommitmentSchemeProver` for committing to trace and composition polynomials. For more details, refer to the [PCS Prover section](/learn/S-two-book/how-it-works/pcs/prover).
It outputs a `StarkProof` object if successful, or a `ProvingError` if any constraint is not satisfied. The `StarkProof` object is a wrapper around `CommitmentSchemeProof`.
@@ -96,7 +96,7 @@ pub struct StarkProof(pub CommitmentSchemeProof);
3. **Composition Polynomial Construction**
- A `random_coeff` is drawn from the channel.
- - The `composition_poly` is computed as a random linear combination of all constraint quotient polynomials, using powers of the random coefficient. For more details, refer to the [Prover Components](../air/prover_components) section.
+ - The `composition_poly` is computed as a random linear combination of all constraint quotient polynomials, using powers of the random coefficient. For more details, refer to the [Prover Components](/learn/S-two-book/how-it-works/air/prover_components) section.
4. **Commit to the Composition Polynomial**
@@ -104,7 +104,7 @@ pub struct StarkProof(pub CommitmentSchemeProof);
5. **Out-of-Domain Sampling (OODS)**
- - An `oods_point` is drawn randomly from the channel. This point is used to bind the prover to a unique low-degree polynomial, preventing ambiguity in the list decoding regime. For more details, refer to the [Out-of-Domain Sampling](../pcs/overview.mdx#out-of-domain-sampling) section.
+ - An `oods_point` is drawn randomly from the channel. This point is used to bind the prover to a unique low-degree polynomial, preventing ambiguity in the list decoding regime. For more details, refer to the [Out-of-Domain Sampling](/learn/S-two-book/how-it-works/pcs/overview#out-of-domain-sampling) section.
6. **Determine Sample Points**
@@ -112,7 +112,7 @@ pub struct StarkProof(pub CommitmentSchemeProof);
7. **Openings and Proof Generation**
- - The `commitment_scheme` is asked to open all committed polynomials at the sampled points, producing the required evaluations and Merkle authentication paths. This is handled by the `prove_values` function, which also integrates the FRI protocol for low-degree testing. For more details, refer to the [PCS Prover](../pcs/prover.mdx#prove) section.
+ - The `commitment_scheme` is asked to open all committed polynomials at the sampled points, producing the required evaluations and Merkle authentication paths. This is handled by the `prove_values` function, which also integrates the FRI protocol for low-degree testing. For more details, refer to the [PCS Prover](/learn/S-two-book/how-it-works/pcs/prover#prove) section.
8. **Sanity Check**
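The random linear combination in step 3 can be illustrated over the Mersenne-31 field: each constraint quotient's evaluations are scaled by successive powers of the random coefficient and accumulated. A toy sketch with plain integer vectors (`combine` is hypothetical, not S-two's API):

```rust
// Mersenne-31 prime modulus.
const P: u64 = (1 << 31) - 1;

// Composition as sum_i alpha^i * q_i over the evaluation vectors
// of the constraint quotients (all assumed to be the same length).
fn combine(quotients: &[Vec<u64>], alpha: u64) -> Vec<u64> {
    let n = quotients[0].len();
    let mut acc = vec![0u64; n];
    let mut pow = 1u64; // alpha^0
    for q in quotients {
        for (a, &v) in acc.iter_mut().zip(q) {
            // pow and v are both < 2^31, so the product fits in u64.
            *a = (*a + pow * v) % P;
        }
        pow = pow * alpha % P;
    }
    acc
}
```

Because `alpha` is drawn from the channel after the trace is committed, a prover cannot craft quotients whose errors cancel in the combination except with negligible probability.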
diff --git a/learn/S-two-book/how-it-works/stark_proof/verify.mdx b/learn/S-two-book/how-it-works/stark_proof/verify.mdx
index b144da9973..d37a532a75 100644
--- a/learn/S-two-book/how-it-works/stark_proof/verify.mdx
+++ b/learn/S-two-book/how-it-works/stark_proof/verify.mdx
@@ -75,9 +75,9 @@ Let us go through the function in detail.
The `verify` function is the entry point for verifying a STARK proof. It takes as input:
-- `components`: A list of AIR components. For more details, refer to the [Components](../air/components) section.
+- `components`: A list of AIR components. For more details, refer to the [Components](/learn/S-two-book/how-it-works/air/components) section.
- `channel`: A Fiat-Shamir channel for non-interactive randomness.
-- `commitment_scheme`: A `CommitmentSchemeVerifier` for verifying Merkle commitments and FRI proofs. For more details, refer to the [PCS Verifier section](../pcs/verifier).
+- `commitment_scheme`: A `CommitmentSchemeVerifier` for verifying Merkle commitments and FRI proofs. For more details, refer to the [PCS Verifier section](/learn/S-two-book/how-it-works/pcs/verifier).
- `proof`: The `StarkProof` object to be verified.
diff --git a/learn/S-two-book/how-it-works/vcs/index.mdx b/learn/S-two-book/how-it-works/vcs/index.mdx
index 645fd3beae..c5738fa92d 100644
--- a/learn/S-two-book/how-it-works/vcs/index.mdx
+++ b/learn/S-two-book/how-it-works/vcs/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Vector Commitment Scheme"
+title: "Introduction"
---
@@ -9,6 +9,6 @@ title: "Vector Commitment Scheme"
This section is organized as follows:
-- **[Hash Functions](./hash_functions)**: Overview of the hash function traits and implementations used in Merkle trees.
-- **[Merkle Prover](./merkle_prover)**: The commitment process that builds Merkle trees from column data and the decommitment process that generates proofs for queried elements.
-- **[Merkle Verifier](./merkle_verifier)**: The verification process that checks the validity of Merkle proofs against a committed root.
+- **[Hash Functions](/learn/S-two-book/how-it-works/vcs/hash_functions)**: Overview of the hash function traits and implementations used in Merkle trees.
+- **[Merkle Prover](/learn/S-two-book/how-it-works/vcs/merkle_prover)**: The commitment process that builds Merkle trees from column data and the decommitment process that generates proofs for queried elements.
+- **[Merkle Verifier](/learn/S-two-book/how-it-works/vcs/merkle_verifier)**: The verification process that checks the validity of Merkle proofs against a committed root.
diff --git a/learn/S-two-book/how-it-works/vcs/merkle_prover.mdx b/learn/S-two-book/how-it-works/vcs/merkle_prover.mdx
index b24ee4130c..2ae2c088b6 100644
--- a/learn/S-two-book/how-it-works/vcs/merkle_prover.mdx
+++ b/learn/S-two-book/how-it-works/vcs/merkle_prover.mdx
@@ -72,10 +72,21 @@ Let's walk through the function step by step:
Suppose the input column data is as shown below:
-
- 
-
-*Figure 1: Example column data to commit using a Merkle tree*
+
+

+
Figure 1: Example column data to commit using a Merkle tree
For this example, the columns are already in the sorted order: $\textcolor{red}{\text{\textit{Column} 0}}$, $\textcolor{green}{\text{\textit{Column} 1}}$, $\textcolor{blue}{\text{\textit{Column} 2}}$ (from longest to shortest). We will now compute the hashes stored at each layer.
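The layer-by-layer hashing can be sketched as follows, with Rust's `DefaultHasher` standing in for S-two's hash functions and a simplified layout (illustrative only, not the actual `MerkleProver`): each layer hashes the two child hashes together with the values of any column whose length matches that layer's size.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash helper.
fn h<T: Hash>(data: &T) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// `columns` is sorted longest-to-shortest; each length is a power of two.
fn commit(columns: &[Vec<u64>]) -> u64 {
    let max = columns[0].len();
    // Leaf layer: hash the values of the longest columns at each index.
    let mut layer: Vec<u64> = (0..max)
        .map(|i| {
            let vals: Vec<u64> =
                columns.iter().filter(|c| c.len() == max).map(|c| c[i]).collect();
            h(&vals)
        })
        .collect();
    // Inner layers: hash (left child, right child, injected column values).
    while layer.len() > 1 {
        let size = layer.len() / 2;
        layer = (0..size)
            .map(|i| {
                let vals: Vec<u64> =
                    columns.iter().filter(|c| c.len() == size).map(|c| c[i]).collect();
                h(&(layer[2 * i], layer[2 * i + 1], vals))
            })
            .collect();
    }
    layer[0] // the Merkle root
}
```

This mirrors the idea that columns of size $2^k$ are "injected" at the layer with $2^k$ nodes, so a single tree commits to columns of different heights.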
@@ -90,10 +101,21 @@ For this example, the columns are already in the sorted order: $\textcolor{red}{
The resulting Merkle tree is illustrated below:
-
- 
-
-*Figure 2: Merkle tree structure after commitment*
+
+

+
Figure 2: Merkle tree structure after commitment
@@ -222,7 +244,7 @@ Let's break down the function step by step:
### Example: Decommit Process
-For the column data in [Figure 1](#fig-merkle-tree-data), consider the query indices $[(2, [0]), (1, [1])]$, where the query indices are maps from log size to a vector of query indices for columns of that size. This corresponds to querying the following elements:
+For the column data in **Figure 1**, consider the query indices $[(2, [0]), (1, [1])]$, where the query indices are maps from log size to a vector of query indices for columns of that size. This corresponds to querying the following elements:
- The 0th element of columns of size $2^2=4$: $\textcolor{red}{a}$ from $\textcolor{red}{\text{\textit{Column} 0}}$ and $\textcolor{green}{p}$ from $\textcolor{green}{\text{\textit{Column} 1}}$.
- The 1st element of the column of size $2^1=2$: $\textcolor{blue}{v}$ from $\textcolor{blue}{\text{\textit{Column} 2}}$.
@@ -272,4 +294,4 @@ Below is a walkthrough of the main loop in the `decommit` function, showing the
- `hash_witness`: $[h_{01}, h_{10}, h_{11}]$
- `column_witness`: $[\textcolor{blue}{u}]$
-In the next section, we describe the verification process to verify the inclusion of the queried values using the Merkle decommitment.
\ No newline at end of file
+In the next section, we describe the verification process to verify the inclusion of the queried values using the Merkle decommitment.
diff --git a/learn/S-two-book/how-it-works/vcs/merkle_verifier.mdx b/learn/S-two-book/how-it-works/vcs/merkle_verifier.mdx
index 66f1fab1f4..814a808cd7 100644
--- a/learn/S-two-book/how-it-works/vcs/merkle_verifier.mdx
+++ b/learn/S-two-book/how-it-works/vcs/merkle_verifier.mdx
@@ -173,7 +173,7 @@ Let's break down the function step by step:
### Example: Verify the decommitment
-The same example from the [decommit process](./merkle_prover.mdx#example-decommit-process) is used to verify the output of the `decommit` function. The input to the `verify` function is as follows:
+The same example from the [decommit process](/learn/S-two-book/how-it-works/vcs/merkle_prover#example-decommit-process) is used to verify the output of the `decommit` function. The input to the `verify` function is as follows:
- `queries_per_log_size`: $[(2, [0]), (1, [1])]$
- `queried_values`: $[\textcolor{red}{a}, \textcolor{green}{p}, \textcolor{blue}{v}]$
- `decommitment`:
@@ -226,4 +226,4 @@ Below is a walkthrough of the main loop in the `verify` function, showing how th
**Final verification:**
- Check that all iterators are exhausted: `hash_witness`, `queried_values`, and `column_witness` should all be empty.
- Compare the computed `root` with the expected root stored in the `MerkleVerifier`.
-- If they match, return `Ok(())`; otherwise, return `Err(MerkleVerificationError::RootMismatch)`.
\ No newline at end of file
+- If they match, return `Ok(())`; otherwise, return `Err(MerkleVerificationError::RootMismatch)`.
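The final root comparison can be sketched as folding a queried leaf hash up the tree using the sibling hashes from `hash_witness`. A toy version with a single queried leaf, using `DefaultHasher` as a stand-in for the real hash (not S-two's `MerkleVerifier`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy parent-node hash.
fn node(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

// Fold a leaf hash up to the root: at each level, combine with the
// sibling from the witness, on the side determined by the index parity.
fn compute_root(mut index: usize, mut hash: u64, witness: &[u64]) -> u64 {
    for &sib in witness {
        hash = if index % 2 == 0 { node(hash, sib) } else { node(sib, hash) };
        index /= 2;
    }
    hash
}

fn verify(root: u64, index: usize, leaf: u64, witness: &[u64]) -> Result<(), &'static str> {
    if compute_root(index, leaf, witness) == root {
        Ok(())
    } else {
        Err("RootMismatch")
    }
}
```

The real verifier additionally interleaves injected column values at the matching layers and checks that all witness iterators are exhausted, but the root recomputation follows this same fold.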
diff --git a/learn/S-two-book/why-stwo.mdx b/learn/S-two-book/why-stwo.mdx
index 4daec1797c..004255434d 100644
--- a/learn/S-two-book/why-stwo.mdx
+++ b/learn/S-two-book/why-stwo.mdx
@@ -17,18 +17,17 @@ S-two's backend is also optimized for prover performance. This is due to largely
2. Even amongst multiple STARK backends, however, S-two provides state-of-the-art prover performance by running the **Mersenne-31 prime field** (modulo $2^{31} - 1$), which is faster than another popular 32-bit prime field like BabyBear (modulo $2^{31} - 2^{27} + 1$). We suggest going through [this post](https://blog.zksecurity.xyz/posts/circle-starks-1/) for a breakdown of why this is the case.
-3. Finally, S-two offers **various CPU and GPU optimizations** that improves prover performance as shown in [Figure 1](#fig-optimizations) below. It can also be compiled to WASM, allowing for fast proving in web environments.
+3. Finally, S-two offers **various CPU and GPU optimizations** that improve prover performance, as shown in [Figure 1](#figure-1) below. It can also be compiled to WASM, allowing for fast proving in web environments.
-
+

-
*Figure 1: Prover performance optimizations in S-two*
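The speed advantage of Mersenne-31 mentioned in point 2 comes down to cheap modular reduction: since $2^{31} \equiv 1 \pmod{2^{31}-1}$, the high bits of a product can simply be folded into the low bits, replacing division with shifts and adds. An illustrative sketch (not S-two's field implementation):

```rust
// Mersenne prime p = 2^31 - 1.
const P: u64 = (1 << 31) - 1;

// Reduce x modulo p: write x = hi * 2^31 + lo, and use 2^31 ≡ 1 (mod p)
// to fold, so x ≡ hi + lo (mod p). One conditional subtraction finishes.
fn reduce(x: u64) -> u64 {
    let folded = (x >> 31) + (x & P);
    if folded >= P { folded - P } else { folded }
}

// Field multiplication: the 62-bit product needs two folds at most.
fn mul_m31(a: u64, b: u64) -> u64 {
    reduce(reduce(a * b))
}
```

Primes like BabyBear also admit fast reduction, but the Mersenne shape keeps it to bare shifts and adds, which is part of why M31 arithmetic vectorizes so well.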
One of the drawbacks of STARKs is that they have a larger proof size compared to elliptic curve-based SNARKs. One way to mitigate this drawback is by batching multiple proofs together to form a single proof.
-
-On zero-knowledge:
+
+**On zero-knowledge:**
As of the time of this writing, S-two does not provide the "zero-knowledge" feature. "Zero-knowledge" here refers to the fact that the proof should not reveal any additional information other than the validity of the statement, which is not true for S-two as it reveals to the verifier commitments to its witness values without hiding them by e.g. adding randomness. This reveals **some information** about the witness values, which may be used in conjunction with other information to infer the witness values.
-
+