
Conversation

@MarcosNicolau
Member

@MarcosNicolau MarcosNicolau commented May 30, 2025

Description

Adds a limit on the number of proofs to fetch. When the fetcher reaches that limit, or when the proofs in a batch would surpass it, fetching stops and the last aggregated block is updated to the block number of that log, so that the next aggregation starts at that block.
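The fetching logic described above can be sketched as follows. This is a minimal illustration, not the actual Aligned codebase: the names `BatchLog`, `fetch_batches`, and the log shape are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BatchLog:
    block_number: int
    num_proofs: int

def fetch_batches(logs, total_proofs_limit):
    """Collect batch logs in order until adding the next batch would
    exceed the proof limit; return the collected logs and the block
    number where the next aggregation run should resume."""
    collected, total = [], 0
    for log in logs:
        if total + log.num_proofs > total_proofs_limit:
            # Stop before this batch; resume from its block next run.
            return collected, log.block_number
        collected.append(log)
        total += log.num_proofs
    # All logs fit: resume after the last processed block.
    resume = logs[-1].block_number + 1 if logs else 0
    return collected, resume
```

For example, with batches of 1000, 2000, and 1500 proofs and a limit of 3968, the first two batches are collected and the resume point is the third batch's block.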

Considerations:

Currently, we are limited by the blob size:

  • Each blob has a capacity of (4096 * 32) = 131,072 bytes
  • But, since we need to pad each field element with a 0x0 byte so it does not surpass the field modulus, we are left with (4096 * 31) = 126,976 bytes
  • Each proof commitment is a 32-byte hash

So we can aggregate as many as 126,976 / 32 = 3,968 proofs per blob.
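The capacity arithmetic above can be checked directly:

```python
# Blob capacity arithmetic from the considerations above.
BLOB_FIELD_ELEMENTS = 4096
BYTES_PER_ELEMENT = 32
USABLE_BYTES_PER_ELEMENT = 31  # one byte reserved to stay below the field modulus
COMMITMENT_SIZE = 32           # each proof commitment is a 32-byte hash

raw_capacity = BLOB_FIELD_ELEMENTS * BYTES_PER_ELEMENT             # 131,072 bytes
usable_capacity = BLOB_FIELD_ELEMENTS * USABLE_BYTES_PER_ELEMENT   # 126,976 bytes
max_proofs_per_blob = usable_capacity // COMMITMENT_SIZE           # 3,968 proofs
```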

We could increase the capacity by:

  • Running the aggregator more frequently
  • Adding the logic to send a vector of blobs instead of only one

Type of change


  • New feature

Checklist

  • “Hotfix” to testnet, everything else to staging
  • Linked to Github Issue
  • This change depends on code or research by an external entity
    • Acknowledgements were updated to give credit
  • Unit tests added
  • This change requires new documentation.
    • Documentation has been added/updated.
  • This change is an Optimization
    • Benchmarks added/run
  • Has a known issue
  • If your PR changes the Operator compatibility (Ex: Upgrade prover versions)
    • This PR adds operator compatibility for both versions and does not change the batcher/docs/examples
    • This PR updates the batcher and docs/examples to the newer version. This requires that operators are already updated to be compatible

@MarcosNicolau MarcosNicolau self-assigned this May 30, 2025
Collaborator

@JuArce JuArce left a comment


There is a case where you can DoS the aggregation mode.
Let's suppose total_proofs_limit = X.
If you have a batch with n proofs and n > X, the aggregation mode won't process the batch; every time it starts, it will try to process that same batch again and stay stuck on it.

@MarcosNicolau
Member Author

There is a case where you can DoS the aggregation mode.
Let's suppose total_proofs_limit = X.
If you have a batch with n proofs and n > X, the aggregation mode won't process the batch; every time it starts, it will try to process that same batch again and stay stuck on it.

You are right. I can think of two ways of solving this issue:

  1. Save the last processed block plus the index of the last processed proof: this way we don't have to process the whole batch at once; we can do it in pieces.
  2. Make sure the batcher limit coincides with this one. Currently the batcher's limit per batch is set to 3000, so we should be covered.

@MauroToscano MauroToscano added this pull request to the merge queue Jun 3, 2025
Merged via the queue into staging with commit 703f766 Jun 3, 2025
3 checks passed
@MauroToscano MauroToscano deleted the feat/aggregation-mode-limit-number-of-proofs branch June 3, 2025 15:15