The most realistic way to prove Gasper consensus today

Ethereum full consensus

Intro

Implementing a full proof of Gasper, Ethereum's consensus protocol, could usher in a new era of trustless and permissionless cross-chain communication. In principle this can be achieved with zk-SNARKs: the proving computation over the votes of roughly one million validators is outsourced and packaged off-chain, and the resulting proof is verified on-chain by an EVM-compatible smart contract. The reason we haven't seen any working solutions yet is that current approaches and tools cannot make this cost-efficient in terms of computational power.

Here, we are going to share our approach, which appears to be more viable.

What is wrong with Brute Force approaches?

First, let’s examine the current efforts, which use what we call the Brute Force approach. This method meticulously follows the entire consensus mechanism step by step, aiming to produce a ‘Proved Epoch’ trusted by everyone. The process involves:

  • Validating the Validator List: Retrieving the full list of validators from the state and proving its validity.
  • Sorting Active Validators: Organizing the active validator list by correct balances from the full list.
  • Shuffling Validators: Using a shuffle function to evenly distribute validators across 64 committees for each of the 32 blocks within an epoch.
  • Validating Attestations: Gathering and verifying BLS signatures from all committees.
  • Proving Consensus: Demonstrating that the total funds voting exceed two-thirds of all voting funds, thereby proving consensus.

When discussing ‘proof,’ it entails constructing circuits, generating proof with actual inputs, and verifying this proof with an off-chain or on-chain verifier.

Challenges and Complexity:

  • Merkle Tree Proving: The full validator tree has a height of 40, but the occupied part can be pre-computed down to a height-21 subtree that accommodates the current number of validators. With 7 hashes per leaf plus 21 hashes along the tree height, Merkleizing 1.5 million validators takes 1.5 million times 28, i.e., roughly 42 million SHA-256 hashes. Assuming about 1 second of proving time per in-circuit hash, proving the entire validator list would take more than 11,666 hours of server time, or approximately $35,000 for one proof on a $3/hour machine.

42,000,000 hashes × 1 s / 3600 s/h ≈ 11,666 h; 11,666 h × $3/h ≈ $35k

  • Shuffle Function Circuit: It requires about 50 million SHA-256 hashes, according to the full shuffling function implemented here: GitHub – ConsenSys/teku.

50,000,000 hashes × 1 s / 3600 s/h ≈ 13,889 h; 13,889 h × $3/h ≈ $41.7k

  • Merkelizing Attestation Messages: This step involves thousands of additional SHA-256 hash calculations within the circuit for roughly 2,000 committees.
  • Technical Limitations: The maximum constraints for proof systems prevent running these computations as a single process.

Please note: The figures provided are approximate and meant to illustrate the scale of the computations.

Assuming parallelization of the proof-building process, proving one finalized checkpoint every epoch would cost around $76.7k, given the need for roughly 25,500 hours of $3/hour CPU time.
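The cost arithmetic above can be reproduced with a small script. Note that the $3/hour rate and the ~1 second of proving time per in-circuit SHA-256 hash are the working assumptions of this post, not measurements:

```python
# Back-of-the-envelope cost model for the brute-force approach.
SECONDS_PER_HASH = 1.0   # assumed proving time per in-circuit SHA-256 hash
USD_PER_HOUR = 3.0       # assumed server rental price

def proving_cost(num_hashes: int) -> tuple[float, float]:
    """Return (server hours, USD) to prove `num_hashes` hashes single-threaded."""
    hours = num_hashes * SECONDS_PER_HASH / 3600
    return hours, hours * USD_PER_HOUR

merkle_hours, merkle_usd = proving_cost(42_000_000)    # validator-tree Merkleization
shuffle_hours, shuffle_usd = proving_cost(50_000_000)  # shuffle-function circuit

print(f"Merkleization: {merkle_hours:,.0f} h, ${merkle_usd:,.0f}")    # 11,667 h, $35,000
print(f"Shuffling:     {shuffle_hours:,.0f} h, ${shuffle_usd:,.0f}")  # 13,889 h, $41,667
print(f"Total:         {merkle_hours + shuffle_hours:,.0f} h, "
      f"${merkle_usd + shuffle_usd:,.0f}")                            # 25,556 h, $76,667
```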

The brute-force approach, therefore, necessitates:

  • An investment of $76.7k every epoch, i.e., roughly every 6 minutes.
  • Tools capable of constructing recursive proofs in parallel to facilitate real-time proving.

However, the feasibility of this approach is limited by the high computational demand and by the Beacon Chain’s use of SHA-256, a hash function that is not zk-friendly, for Merkleization. This underscores the impracticality of real-time Ethereum consensus proving through brute force: the cost is comparable to the cost of running the entire validator set.

Diff Changes Approach

This approach is developed with several goals in mind:

  • To avoid or minimize heavy, zk-unfriendly periodic computations.
  • To address the inherent impossibility of proving full consensus in real-time.
  • To ensure the architecture can be delivered as quickly as possible.

The core ideas behind this approach include:

  • Prove Epochs with Gaps: For instance, we can prove every 10th epoch with full consensus and then prove the missing checkpoints by hashing the components of each missing block. This method relies on the principle that there is only one correct way to obtain the right hash values between proved epochs.
  • Recalculate the Validators Tree with the Poseidon Hash Function: Create an alternative Merkle tree for the validator set, employing the Poseidon hash function. Validators are stored within this tree. The root of this tree is utilized to verify the validity of the validator list sent into the circuit as a private input. This approach allows for more efficient recalculations and verifications.
  • Follow Each Epoch’s Validator Changes: With each circuit run (a new epoch being proved as finalized), we prove changes in validators since the last proved epoch, using real beacon chain roots and SHA functions. This is done to track a reasonable amount of validators. Then, having updated and proved validators against the beacon’s SHA root, we update the internal Poseidon tree and store the new Poseidon tree root as a public input (also in the contract’s storage).
  • Skip the Shuffling Function: As a private input, we feed the circuit a list of validators ordered with the shuffled active validators first, followed by the remaining inactive validators at the end of the input data. Since we prove with the Poseidon tree that every validator sent as input is valid in the epoch being finalized, we also ‘prove’ that all validators in the list are unique. The shuffling itself is proved ‘inherently’ when we verify the aggregated signatures: we cannot have valid signatures for incorrectly shuffled validators.
  • Verify Attestation Signatures and Calculate Attestation Balances: Since we have a proven shuffled validator list, we can use it inside the signature-proof circuit(s) and calculate the total balance of attestation voters. We prove consensus if the total attestation balance is more than 2/3 of the total active validator balance.
  • Use only a portion of the attestations, ensuring that the selected part still represents more than 2/3 of the active validators’ balance.

These strategies are designed to streamline the consensus process, making it more feasible to manage and verify in a timely and computationally efficient manner.

General algorithm

Given:

  • A finalized checkpoint, referred to as OLD,
  • A proven set of validators from the state of the finalized checkpoint,
  • A checkpoint currently being finalized, referred to as NEW,
  • A proven set of validators that were changed or added to the state since the previous finalized checkpoint,
  • A set of attestations that:
    • Are targeted at the NEW checkpoint,
    • Are signed by validators from the set, including those who were updated,
    • Ensure each voter participated in only one attestation,
    • Confirm the total balance of voters is more than 2/3 of the total balance of active validators,

We prove that the NEW checkpoint is finalized.

Poseidon validators tree

Since recomputing the full validator set against the finalized checkpoint state requires SHA-256, it takes an overwhelming amount of computation and cannot be used in a proof system.

Instead, we propose to keep the validator set as a Poseidon Merkle tree and update this tree periodically.

First, we choose some finalized checkpoint, treat it as an initial trusted point, and declare that we trust the validator set taken from this checkpoint without any proofs. This is allowed since we need some initial trusted point anyway.

We take the full validator set record from a beacon node by RPC request, downloading the JSON array of validators for a certain finalized block.

Just as a reminder, the validator set is presented as an array of JSON objects with the following fields:

  • pubkey
  • withdrawal_credentials
  • effective_balance
  • slashed
  • activation_eligibility_epoch
  • activation_epoch
  • exit_epoch
  • withdrawable_epoch
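For illustration, here is what one element of that array looks like, together with the spec’s activity rule. The field values below are made-up placeholders, not real chain data:

```python
# One validator record as returned by a beacon node (illustrative dummy values).
FAR_FUTURE_EPOCH = 2**64 - 1

validator = {
    "pubkey": "0x" + "aa" * 48,          # 48-byte BLS public key (dummy)
    "withdrawal_credentials": "0x" + "00" * 32,
    "effective_balance": "32000000000",  # in Gwei
    "slashed": False,
    "activation_eligibility_epoch": "0",
    "activation_epoch": "0",
    "exit_epoch": str(FAR_FUTURE_EPOCH),
    "withdrawable_epoch": str(FAR_FUTURE_EPOCH),
}

def is_active(v: dict, epoch: int) -> bool:
    """Spec rule: a validator is active at `epoch` iff activation_epoch <= epoch < exit_epoch."""
    return int(v["activation_epoch"]) <= epoch < int(v["exit_epoch"])

print(is_active(validator, 266173))  # True
```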

We use the validator data from the given block to compute a Merkle tree with the Poseidon hash function. We need it to verify later that the validators in the circuit’s private input come from the beacon state.

To build the Poseidon Merkle tree, we use only those validator fields needed to determine the active validators for the selected epoch. These fields are:

  • pubkey (an immutable primary key of the validator)
  • effective_balance (to compute active validators total balance and attestations balances)
  • slashed (to exclude slashed voters’ balance when calculating attestation balances)
  • activation_epoch
  • exit_epoch (both fields together determine whether a validator is active)

To merkelize this leaf, we need 6 poseidon hashes, as implemented here:

https://github.com/Resolfinity/recursive-circuits-exps/blob/main/poseidon-8/main.cpp#L89

To merkelize the full tree of height 21 (which fits the current validator count), we need 2**21 hashes. In practice the validator count is not much more than 2**20, and we can use precomputed zero hashes for the empty parts of the tree, so in reality we have to compute 1.5+ million Poseidon hashes.

Hence, given the validator input and the previous (trusted) Poseidon Merkle root, we prove that the validators in the input are trusted data, at a cost of

1.5 million validators × (6 hashes per leaf + 21 hashes of Merkle tree height) = 1.5 million × 27 = 40.5 million Poseidon hashes.
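As a sanity check, the arithmetic above as a snippet:

```python
VALIDATORS = 1_500_000
LEAF_HASHES = 6    # Poseidon hashes to merkelize one leaf (per the linked circuit)
TREE_HEIGHT = 21   # a tree with 2**21 leaves fits the current validator count

total_hashes = VALIDATORS * (LEAF_HASHES + TREE_HEIGHT)
print(f"{total_hashes:,}")  # 40,500,000
```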

Proof of changed/added validators

Following the main algorithm described above, we need the full validator set at the finalized checkpoint, since newly added and updated validators participate as attestation voters.

At this point, we have the off-chain validator set from the OLD checkpoint; we send this set as a private input, and by computing the Poseidon root we can trust this validator set.

Therefore, to obtain a trusted validator set for the NEW finalized checkpoint, we can take the OLD validator set and add/update validators in it.

We take the new validator set at the NEW checkpoint from the beacon node RPC and compare it with the previous validator set from the OLD checkpoint; this gives us the list of changed/added validators.
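A minimal sketch of this off-chain diff step. It relies on the fact that the beacon validator registry is append-only: index i refers to the same validator in both snapshots, and new validators appear only at the tail.

```python
def diff_validators(old: list[dict], new: list[dict]) -> tuple[list[int], list[int]]:
    """Return (changed_indices, added_indices) between two validator snapshots."""
    changed = [i for i, (o, n) in enumerate(zip(old, new)) if o != n]
    added = list(range(len(old), len(new)))
    return changed, added

# Example: validator 1's balance changed, and one new validator was appended.
old = [{"pubkey": "0xaa", "effective_balance": "32000000000"},
       {"pubkey": "0xbb", "effective_balance": "32000000000"}]
new = [{"pubkey": "0xaa", "effective_balance": "32000000000"},
       {"pubkey": "0xbb", "effective_balance": "31000000000"},
       {"pubkey": "0xcc", "effective_balance": "32000000000"}]

print(diff_validators(old, new))  # ([1], [2])
```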

How many validators change between two consecutive epochs

We took random states from a few consecutive epochs and detected the changed/updated validators; here is what we got:

| epoch  | next epoch | interval, epochs | changed validators | new validators | total updated/added | validators/epoch |
|--------|------------|------------------|--------------------|----------------|---------------------|------------------|
| 266173 | 266223     | 50               | 1147               | 1729           | 2876                | 57               |
| 266223 | 266323     | 100              | 2751               | 1441           | 4192                | 41               |

This is just a sample; in general, about 50 validators are changed/added during one epoch.

Validators update algorithm

Prove that only updated/added validators caused validators root update

At this moment, we have

  • trusted/proved old checkpoint state root
  • trusted/proved validators set
  • trusted/proved poseidon validators tree root
  • list of changed/added validators

We have to prove that only validators from the proved, known validator set plus the newly added validators changed the validator record root in the NEW checkpoint state.

We can do it in the following way. Suppose we have an OLD state that looks like this:

Suppose a few validators changed in the NEW checkpoint relative to the OLD one:

First, we create a Merkle proof for the first changed validator’s old (pre-update) state (here validator 3, plus its siblings).

Then we compute the updated validator leaf and use the Merkle-proof siblings (validator 4, nodes 1-2 and 5-6-7-8) to recalculate the validators root.

It’s worth noting that the new validators root at this moment is virtual and may not be present in any real state.

Now we have a Merkle tree with the first updated validator and virtually updated nodes (the validator leaf itself, nodes 3-4, 1-2-3-4, and the validators root).

Now we can compute the Merkle proof of the next validator to be changed (validator 5), using the tree already updated with validator 3.

We repeat the same operations (recompute the validator 5 leaf, recompute the Merkle root using the siblings and the updated leaf), and obtain a new virtual validators root, now changed by two updated validators.

We repeat these operations until all updated validators have been applied. We treat newly added validators as updates of zero leaves, according to the beacon merkelization rules.

Finally, after applying all of the changes, we have the updated validators root and are able to prove that this root is part of the NEW finalized checkpoint state (by one more additional Merkle proof).

This way, we have proved that we know the changed/updated validators, and that only these validators were changed/updated between the OLD and NEW checkpoints.
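The sequential-update procedure above can be sketched end to end on a toy tree. Here sha256 stands in for the in-circuit hash, and `build_tree` plays the role of the off-chain proof precomputation; the assertions mark what would be checked inside the circuit:

```python
from hashlib import sha256

def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """All levels of a perfect binary Merkle tree; level 0 = leaves, last level = root."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def siblings(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Merkle-proof siblings for the leaf at `index`, bottom-up."""
    return [levels[d][(index >> d) ^ 1] for d in range(len(levels) - 1)]

def root_from(leaf: bytes, index: int, sibs: list[bytes]) -> bytes:
    """Recompute the root from a leaf and its proof siblings."""
    node = leaf
    for d, sib in enumerate(sibs):
        node = h(sib, node) if (index >> d) & 1 else h(node, sib)
    return node

def apply_updates(leaves, updates):
    """Apply (index, new_leaf) updates one by one, returning each virtual root."""
    leaves = list(leaves)
    roots = []
    for index, new_leaf in updates:
        levels = build_tree(leaves)       # off-chain: recompute the current tree
        sibs = siblings(levels, index)
        # in-circuit check: the old leaf really is under the current (virtual) root
        assert root_from(leaves[index], index, sibs) == levels[-1][0]
        # in-circuit step: the same siblings plus the new leaf give the new virtual root
        leaves[index] = new_leaf
        roots.append(root_from(new_leaf, index, sibs))
    return leaves, roots

# Toy example: 8 leaves, two sequential updates.
leaves = [bytes([i]) * 32 for i in range(8)]
final_leaves, roots = apply_updates(leaves, [(2, b"\xaa" * 32), (4, b"\xbb" * 32)])
# The last virtual root matches a full recomputation over the updated leaves.
assert roots[-1] == build_tree(final_leaves)[-1][0]
```

The key property demonstrated at the end is the one the circuit relies on: after the last update, the virtual root equals the root of the fully updated tree, without ever rebuilding the tree inside the proof.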

How to implement it

  • get the updated/added validators from the state by RPC requests for the old and new states, and run a script which detects validator changes (offline)
  • pre-compute sequential Merkle proofs of the validator-set mutation by applying the updated validators one by one
  • use the changed validators and Merkle proofs as circuit inputs
  • verify the Merkle proofs inside the circuit
  • assert the recomputed validators root is part of the NEW checkpoint being finalized

Computations needed inside the circuit

Each validator in the beacon state consists of 8 fields; hence, to merkelize a newly updated leaf, we have to perform 6 sha256 operations inside the circuit, as done here:

https://github.com/Resolfinity/recursive-circuits-exps/blob/main/poseidon-8/main.cpp#L139-L147

For each changed validator, we have to verify a Merkle proof for the tree of height 21.

Therefore, for each changed validator we have to perform 27 hashes.

(In reality the validators tree has height 40, but the current validator set fits under a common pre-root at height 21; we then use precomputed zero-hashes up to height 40, once for the whole proof.)

The real computation needed depends on the changed-validator count, which in turn depends on the delay between the OLD and NEW finalized epochs.

E.g., using the fact that about 50 validators are changed between two epochs, to fill an N-hour gap:

1 hour = 10 epochs, so for N hours = N*10 epochs we have N*10*50 changed validators.

To verify the merkelization and the validator-set updates, we have to perform

N*10*50 validators × 27 hashes (plus one extra Merkle proof from the height-21 pre-root to the real height-40 root).

E.g., if we assume we are able to produce the whole proof within a 3-hour gap, that is 3*10*50 = 1500 validators × 27 = 40,500 sha256 hashes.

If we cannot compute the final proof within 3 hours, more validators will have changed, which leads to more proof-computation time, which leads to even more changed validators, and so on.

Where we are now

At this point, we have the following:

  • trusted poseidon root of OLD validators state
  • trusted beacon state root of OLD state
  • NEW checkpoint root, to be finalized
  • a list of OLD validators, used as input, which can be verified by recomputing the whole Poseidon tree and asserting the computed root equals the trusted one (hence we prove the validator input)
  • a list of changed validators and Merkle proofs of the validator-set mutations, which can be proved by Merkle-proof verification and by asserting that the updated validator set is part of the NEW checkpoint being finalized

Update poseidon validators root

Since we have the old proved validators and the updated validators, we can update the Poseidon root using intermediate sequential Merkle proofs, similar to the sha256 validator updates described above.

To recompute the Poseidon root, we have to:

  • recompute the new Poseidon leaves using the updated validator input (6 Poseidon hashes each)
  • pre-compute Merkle proofs of the sequential tree updates (off-chain) and use these proofs as inputs
  • verify these Merkle proofs inside the circuit (21 hashes × the number of validator changes)
  • assert the new Poseidon root equals the precomputed root used as a public input.

E.g., for a 3-hour delay = 3 × 10 epochs × 50 validators = 1500 validators updated, we have to perform

1500 × (6 + 21 hashes) = 40,500 Poseidon hashes.
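The same arithmetic as a checkable snippet; the 50 changes per epoch figure is the empirical average observed earlier:

```python
EPOCHS_PER_HOUR = 10     # one epoch is about 6.4 minutes
CHANGED_PER_EPOCH = 50   # empirical average of changed/added validators
LEAF_HASHES = 6          # Poseidon hashes per updated leaf
PROOF_HASHES = 21        # tree height = Merkle-proof length

def poseidon_update_hashes(gap_hours: int) -> int:
    """In-circuit Poseidon hashes needed to roll the tree forward over `gap_hours`."""
    changed = gap_hours * EPOCHS_PER_HOUR * CHANGED_PER_EPOCH
    return changed * (LEAF_HASHES + PROOF_HASHES)

print(poseidon_update_hashes(3))  # 40500
```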

Shuffling

To avoid 50M sha-256 computations, we propose to use a pre-shuffled array of validators and to treat the OLD validator input as an array of shuffled validators.

Since we have about 1.5 million validators and only about 1 million are active (and thus shuffled), we compose the validator input as an array with the shuffled active validators at the beginning and the remaining inactive validators at the end.

We also use an array of mappings between the shuffled validator input and the corresponding real validator index in the beacon state, since we have to compute the validators’ Poseidon tree using the non-shuffled validator list.

The idea is the following:

To verify the Poseidon root, we iterate over the mapping input, taking validators from the shuffled validator input. When we have successfully verified that all the validators are in the trusted Poseidon tree, we have also proved that all validators in the shuffled input are unique (since the Poseidon tree is a representation of the beacon validator set, which has unique validators by design).

Hence, we can trust the shuffled validator input, assuming the shuffling was performed correctly.

We can easily compute the attestation committee sizes and take slices of the shuffled input as the sets of attestation participants.
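Committee sizes follow directly from the beacon chain spec constants; a sketch of the computation, mirroring the spec's get_committee_count_per_slot:

```python
# Beacon chain spec constants (phase 0).
SLOTS_PER_EPOCH = 32
MAX_COMMITTEES_PER_SLOT = 64
TARGET_COMMITTEE_SIZE = 128

def committees_per_slot(active_validators: int) -> int:
    """Per-slot committee count, as in the spec's get_committee_count_per_slot."""
    return max(1, min(MAX_COMMITTEES_PER_SLOT,
                      active_validators // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE))

n_active = 1_000_000
per_slot = committees_per_slot(n_active)                   # 64 committees per slot
committee_size = n_active // (SLOTS_PER_EPOCH * per_slot)  # ~488 validators each
print(per_slot, committee_size)
```

With ~1 million active validators this gives 64 committees per slot, i.e., the 32 × 64 ≈ 2,000 committees per epoch mentioned earlier, and slices of the shuffled input of this size form the committees.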

If we prove that an attestation carries a signature aggregated from the given attesters, it means the attesters are from the state (we cannot fake attestations with millions of voters).

Although we are not yet sure exactly how we will verify the attestations, we can be sure that we have a shuffled validator set, consisting of unique validators, present in the NEW finalized checkpoint.

One more piece is missing

We would be thrilled to announce in this post that we’ve successfully proved the full Ethereum consensus, and it’s now available for use. However, as is often the case, there are some challenges that need to be addressed.

For the final proving step, we are required to write attestation-verification circuits over the BLS curve. This task is feasible, but the sheer volume of attestations means that single-threaded proving could take around 1,000 minutes, almost 17 hours. Ideally, we would employ parallel proof building; however, no alternative curve currently exists that can seamlessly integrate with the BLS curve. This may seem like a dead end, but it’s not: the BLS curve itself was generated once, so a suitable companion curve could be generated too.

Thank you for reading! Please, don’t hesitate to share your opinion on Twitter/X or Telegram! We wish you only valid proofs!

Authors: Maxim Manylov, Vadim Makovsky