bug: batcher DoS if batch with max_batch_size is sent twice #1742

Open
Oppen opened this issue Jan 16, 2025 · 0 comments
Labels: audit, batcher (issues within aligned-batcher), cantina (Audit report from Cantina)

Oppen commented Jan 16, 2025

Reported by cinderblock in cantina issue #59. Transcript of its description follows:

Batcher submissions will fail if the exact same batch merkle root has already been submitted on-chain, because of the revert in AlignedLayerServiceManager.createNewTask(). When the call reverts, the batcher keeps retrying to resubmit the same batch with the same merkle root for as long as no other proofs fit into the batcher queue. An attacker can exploit this to block the batcher's queue for all users, taking into consideration that:

- batches submitted on-chain and to S3 are limited to 256 MiB, and a single proof is limited to 16 MiB (see note 1)
- the batch queue is ordered from lowest to highest max_fee (as specified by users when submitting a proof to the batcher)

The attacker also needs to deposit funds in the BatcherPaymentService contract to cover the inflated max_fee; on a successful attack he blocks all batch processing and can unlock his funds later (the queue and retry behavior is sketched below).
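
To make the failure mode concrete, here is a minimal Rust sketch of the behavior described above, with hypothetical names (Entry, submit_batch, merkle_root_of, MAX_BATCH_PROOFS; the real aligned-batcher types differ): the queue pops the highest max_fee first, and when the on-chain call reverts on a duplicate root the entries are requeued, so the batcher recomputes the same root and spins forever.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

/// Hypothetical queue entry; the real aligned-batcher types differ.
struct Entry {
    max_fee: u128, // wei; higher fee = higher priority
    proof: Vec<u8>,
}

// Order entries by max_fee only, so BinaryHeap pops the highest fee first.
impl PartialEq for Entry {
    fn eq(&self, other: &Self) -> bool {
        self.max_fee == other.max_fee
    }
}
impl Eq for Entry {}
impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        self.max_fee.cmp(&other.max_fee)
    }
}
impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

const MAX_BATCH_PROOFS: usize = 16; // hypothetical batch cap

/// Stand-in for the on-chain call: AlignedLayerServiceManager.createNewTask()
/// reverts whenever this merkle root has already been submitted.
fn submit_batch(_root: [u8; 32]) -> Result<(), ()> {
    Err(()) // duplicate root: the transaction reverts every time
}

/// Hypothetical deterministic root: identical batches yield identical roots.
fn merkle_root_of(batch: &[Entry]) -> [u8; 32] {
    let mut root = [0u8; 32];
    for e in batch {
        for (i, b) in e.proof.iter().enumerate() {
            root[i % 32] ^= b;
        }
    }
    root
}

fn batcher_loop(queue: &mut BinaryHeap<Entry>) {
    while !queue.is_empty() {
        // Drain the highest-fee proofs into the next batch.
        let mut batch = Vec::new();
        while batch.len() < MAX_BATCH_PROOFS {
            match queue.pop() {
                Some(e) => batch.push(e),
                None => break,
            }
        }
        if submit_batch(merkle_root_of(&batch)).is_err() {
            // Failed entries are requeued rather than dropped, so the same
            // top-fee proofs are selected again, the same root is recomputed,
            // and the loop makes no progress: the queue is blocked.
            for e in batch {
                queue.push(e);
            }
        }
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    for _ in 0..16 {
        // The attacker's resubmitted proofs, prioritized by a 0.5 ether max_fee.
        queue.push(Entry { max_fee: 500_000_000_000_000_000, proof: vec![0u8; 16] });
    }
    batcher_loop(&mut queue); // spins forever on the duplicate root
}
```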

Steps for the attack:

1. Submit a 16 MiB proof 16 times. This should cost 16 × 0.0013 ether = 0.0208 ether, or roughly $76, since the fee depends on the number of proofs in the queue and size is not taken into consideration.
2. Once that batch has been processed and submitted on-chain, the attacker locks 8 ether in the BatcherPaymentService contract and resubmits the exact same 16 MiB proof 16 times, this time with max_fee set to 0.5 ether to make sure his proofs get the highest priority in the batcher's queue.
3. The queue is now DoSed: the batcher is stuck forever retrying to resubmit the same merkle root as in step 1, failing each time on the revert in AlignedLayerServiceManager.createNewTask(). And since the attacker submitted the 16 proofs with a max_fee of 0.5 ether, those proofs will always have priority in the queue unless someone actually submits a proof with an even higher max_fee.
4. The attacker keeps sending the same 16 MiB proof indefinitely, and the batcher will eventually run out of memory (OOM).
Notes: 1. The 16 MiB limit is the default limit in the tungstenite-rs library for a single websocket frame; a message can actually be 64 MiB using multiple frames, which would reduce the attacker's cost by a few tens of dollars, but to keep the PoC simpler we send 16 MiB proofs.
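
As a point of reference, those limits are tungstenite's WebSocketConfig defaults; a minimal sketch, assuming the public-field layout of the tungstenite releases contemporary with this report (the config API has shifted across releases, so check the pinned version):

```rust
use tungstenite::protocol::WebSocketConfig;

// tungstenite's defaults: 16 MiB per frame, 64 MiB per message (a message
// may span multiple frames). Field names assume the 0.2x public-field API.
fn ws_limits() -> WebSocketConfig {
    let mut cfg = WebSocketConfig::default();
    cfg.max_frame_size = Some(16 << 20);
    cfg.max_message_size = Some(64 << 20);
    cfg
}
```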

A possible solution is adding a salt proof to every batch, generated by the batcher, with the single constraint that salt != 0; this adds negligible cost while making the attack expensive.
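
A minimal sketch of that fix, assuming a hypothetical salt_batch helper in the batcher's merkle-tree construction (the leaf encoding here is illustrative, not aligned's actual format):

```rust
use rand::RngCore;

/// Append a batcher-generated, random, nonzero salt leaf before building the
/// merkle tree, so byte-identical batches can no longer reproduce a root that
/// was already submitted on-chain.
fn salt_batch(mut leaves: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut salt = [0u8; 32];
    // The single constraint from the proposal: salt != 0.
    while salt.iter().all(|&b| b == 0) {
        rand::thread_rng().fill_bytes(&mut salt);
    }
    leaves.push(salt.to_vec());
    leaves
}
```

With the salt as an extra leaf, a replayed batch produces a fresh root, so createNewTask() no longer reverts on resubmission and the retry loop cannot be wedged.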

Oppen added the audit label on Jan 16, 2025
Oppen added the batcher (issues within aligned-batcher) and cantina (Audit report from Cantina) labels on Jan 16, 2025