Rework batcher concurrency #2017
Conversation
…aligned_layer into rework_batcher_concurrency
Co-authored-by: MauroFab <[email protected]>
crates/batcher/Cargo.toml
Outdated
ciborium = "=0.2.2"
priority-queue = "2.1.0"
reqwest = { version = "0.12", features = ["json"] }
dashmap = "6.0.1"
Remove dependency
Done
MarcosNicolau left a comment:
Looks good to me, the general code is much easier to follow.
Rework batcher concurrency
This PR overhauls the batcher's concurrency to optimize proof processing and avoid race conditions.
Description
Two locks are added: one for the batcher and one for each user's state. The key changes are:
The biggest change is in the batch creation:
Notice we tolerate the user state being temporarily inconsistent with the queue until we confirm the block. This means we are a bit stricter than we could be with proofs the user sends in parallel until the posting is confirmed: we assume those proofs are still in the queue and unpaid, so the user may need a bit more spare balance and cannot use arbitrary fees, since we require proofs in the batch with a bigger nonce to have the same or a lower fee.
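Below is a minimal sketch of how the two locks and the fee/nonce tolerance described above could be laid out. All type, field, and function names (`Batcher`, `BatcherState`, `UserState`, `fee_is_acceptable`, etc.) are assumptions for illustration, not the PR's actual definitions; it assumes tokio's async mutexes.

```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::Mutex;

// Hypothetical address type, used only to key the per-user map.
type Address = [u8; 20];

#[derive(Default)]
struct UserState {
    nonce: u64,
    // Highest fee currently allowed for this user's next proofs: while a batch
    // is unconfirmed, a proof with a bigger nonce may not bid a higher fee.
    max_fee_limit: u128,
    proofs_in_queue: usize,
}

#[derive(Default)]
struct BatcherState {
    // Placeholder for the priority queue of pending proofs, which lives
    // behind the batcher lock.
    queue_len: usize,
}

struct Batcher {
    // Lock #1: protects the batch queue itself.
    state: Mutex<BatcherState>,
    // Lock #2 (one per user): protects each user's nonce/fee bookkeeping,
    // so unrelated users can submit proofs concurrently.
    user_states: Mutex<HashMap<Address, Arc<Mutex<UserState>>>>,
}

impl Batcher {
    /// The batch-creation tolerance described above: until the batch is
    /// confirmed, a proof with a bigger nonce must bid the same or a lower fee.
    fn fee_is_acceptable(user: &UserState, new_nonce: u64, new_fee: u128) -> bool {
        new_nonce <= user.nonce || new_fee <= user.max_fee_limit
    }
}
```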
For proof submission, the general idea is:
The added complexity is handling the case of a full queue. Since we cannot be sure that we can take a user lock after the batch lock, what we do is:
This mechanism avoids deadlocks, which could otherwise happen when the "candidate" for eviction has its state locked.
As a downside, it may be imprecise, since the user may need to bid more than N proofs, but this is not critical.
Another approach would be to briefly take the batch lock, peek to see whether the queue is full, drop it, and then try to take the lock of the user paying the lowest fees. But this may lead to edge cases that are harder to handle.
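Since the eviction steps themselves are elided above, here is one way such a non-blocking full-queue check could be written. The types and the `try_lock`-based rejection are assumptions meant only to illustrate why a locked eviction candidate can cause imprecision but never a deadlock.

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Hypothetical types, kept minimal for illustration.
struct UserState;
struct QueueEntry {
    sender_state: Arc<Mutex<UserState>>,
    fee: u128,
}

/// With the batch lock held and the queue full, decide whether a new proof
/// outbids the cheapest queued entry, without ever blocking on a user lock
/// taken after the batch lock. If the candidate's owner currently holds its
/// own state lock, we conservatively reject the new proof: this is where the
/// imprecision comes from, but it is also what rules out deadlocks.
fn outbids_cheapest(new_fee: u128, cheapest: &QueueEntry) -> bool {
    match cheapest.sender_state.try_lock() {
        Ok(_guard) => new_fee > cheapest.fee,
        Err(_) => false, // owner's state is locked elsewhere: don't wait
    }
}
```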
*The only exception to this rule is when sending a batch fails. In that scenario, to recover, we need to be able to lock all the user states, since the queue is finite and we may need to evict proofs and update their nonces. We don't actually take all the locks; we may only need a couple. But to avoid a deadlock, we added a flag that stops processing more users while a recovery is in progress, which works in a similar manner. In the rare event that the lock we need is taken, the user task will time out after 15 s and release it for the restoration task.
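A rough sketch, under the same illustrative assumptions as above, of how the recovery flag and the 15-second timeout could interact: the flag keeps new user tasks from starting during a restoration, and the timeout bounds how long a user task can hold its state lock. Names and fields are hypothetical.

```rust
use std::{
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
    },
    time::Duration,
};
use tokio::{sync::Mutex, time::timeout};

// Hypothetical, simplified types; the real batcher's fields and names differ.
struct UserState {
    nonce: u64,
}

struct Batcher {
    // Set while a failed batch submission is being restored.
    restoring: AtomicBool,
    // One such lock exists per user in practice.
    user_state: Arc<Mutex<UserState>>,
}

impl Batcher {
    /// A user task skips new work while a recovery is in progress, and holds
    /// its user-state lock for at most 15 s, so a stuck lock is eventually
    /// released for the restoration task.
    async fn process_user_proof(&self) {
        if self.restoring.load(Ordering::SeqCst) {
            return; // recovery in progress: don't start processing this user
        }
        let result = timeout(Duration::from_secs(15), async {
            let mut state = self.user_state.lock().await;
            state.nonce += 1; // placeholder for the real proof handling
            // the lock guard is dropped when this block ends
        })
        .await;
        if result.is_err() {
            // Timed out: the inner future (and its lock guard) was dropped,
            // freeing the user lock for the restoration task.
        }
    }
}
```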
Type of change
Please delete options that are not relevant.