[Feature] Batch proposal spend limits #3471
base: staging
Conversation
Hope it helps
```rust
let process = self.ledger.vm().process();

// Deserialize the transaction. If the transaction exceeds the maximum size, then return an error.
let transaction = match transaction {
```
Deserialization is extremely expensive. I would consider moving this calculation into `ledger.rs:check_transaction_basic`, where we already deserialize. Perhaps that function can return the compute cost? Or some other design?
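One hypothetical shape for this suggestion, sketched with simplified stand-in types (this is not the actual snarkOS signature): the basic check returns the computed cost on success, so callers can reuse it without a second deserialization pass.

```rust
// Illustrative sketch only: `Transaction` and the cost model here are
// stand-ins, not the real snarkOS `check_transaction_basic` API.
struct Transaction {
    id: u64,
    num_operations: u64,
}

/// Performs the basic validity checks and, on success, returns the
/// execution cost in microcredits so callers need not recompute it.
fn check_transaction_basic(tx: &Transaction) -> Result<u64, String> {
    if tx.num_operations == 0 {
        return Err(format!("transaction {} is empty", tx.id));
    }
    // Hypothetical cost model: a flat fee per operation.
    Ok(tx.num_operations * 1_000)
}
```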
I did consider it and should revisit the idea, but intuitively we might want to avoid coupling the cost calculation with `check_transaction_basic`, as it's only called in `propose_batch` and not in `process_batch_propose_from_peer`, at least not directly.
Ah, good point... That is unfortunate, because in the case of `process_batch_propose_from_peer` we might be retrieving the transmission from disk, in which case we'll most certainly have to incur the deserialization cost. Maybe if we let `fn compute_cost` take in a `Transaction` instead of `Data<Transaction>`, the deserialization can at least be made explicit. For our own proposal we call it from within `check_transaction_basic`; for incoming proposals we'll need to deserialize before calling it.
And if it turns out to be a bottleneck, we can always refactor the locations where we deserialize more comprehensively, and potentially create a cache for `compute_cost` if needed.
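A minimal sketch of the two ideas above, with hypothetical names and a stand-in `Transaction` type: `compute_cost` takes an already-deserialized `&Transaction` (making the deserialization cost explicit at each call site), and a small cache keyed by transaction ID avoids recomputation for transmissions seen more than once.

```rust
use std::collections::HashMap;

// Stand-in transaction type and cost model; illustrative only.
struct Transaction {
    ops: u64,
}

/// Hypothetical cost function over a deserialized transaction.
fn compute_cost(tx: &Transaction) -> u64 {
    tx.ops * 1_000
}

/// A minimal per-transaction-ID cost cache, so repeated lookups for the
/// same transmission skip the recomputation.
struct CostCache {
    inner: HashMap<u64, u64>,
}

impl CostCache {
    fn new() -> Self {
        Self { inner: HashMap::new() }
    }

    /// Returns the cached cost for `id`, computing and storing it on a miss.
    fn get_or_compute(&mut self, id: u64, tx: &Transaction) -> u64 {
        *self.inner.entry(id).or_insert_with(|| compute_cost(tx))
    }
}
```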
Force-pushed 78db712 to cd43ece.
This commit also modifies how transmissions are included in a batch. They are only drained from the workers once their validity has been checked.
Force-pushed 7b3f28f to 09b1eff.
Force-pushed 09b1eff to f981a82.
Force-pushed 5ef6a17 to ab66df9.
```rust
/// Computes the execution cost in microcredits for a transaction.
fn compute_cost(&self, _transaction_id: N::TransactionID, _transaction: Data<Transaction<N>>) -> Result<u64> {
    // Return 1 credit so this function can be used to test spend limits.
```
nit: 10_000_000 microcredits is 10 credits
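For reference, the unit conversion behind this nit (on Aleo, 1 credit = 1,000,000 microcredits), sketched as a trivial helper:

```rust
// 1 credit = 1_000_000 microcredits, so a stub that returns
// 10_000_000 microcredits is returning 10 credits, not 1.
const MICROCREDITS_PER_CREDIT: u64 = 1_000_000;

fn microcredits_to_credits(microcredits: u64) -> u64 {
    microcredits / MICROCREDITS_PER_CREDIT
}
```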
```rust
continue 'inner;

// Reinsert the transmission into the worker, O(n).
worker.shift_insert_front(id, transmission);
```
If there is a huge transmission that takes up 99% of the batch spend limit, won't your worker never create a proposal with higher utilization?
It might be smarter to fill the remaining space of the proposal by pulling other, smaller transactions from the queue. This might also be an opportune moment to finally implement the use of `priority_fee`, which was previously not factored into the ordering.
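A sketch of this suggestion with hypothetical names: instead of stopping at the first transmission that exceeds the remaining budget, keep scanning the queue and include any smaller transmissions that still fit under the spend limit.

```rust
// Illustrative greedy fill (not the snarkOS implementation): `costs` is the
// per-transmission cost in queue order, `spend_limit` the batch budget.
// Returns the indices included and the total spend.
fn fill_proposal(costs: &[u64], spend_limit: u64) -> (Vec<usize>, u64) {
    let mut included = Vec::new();
    let mut spent = 0u64;
    for (i, &cost) in costs.iter().enumerate() {
        // Skip (rather than stop at) transmissions that would exceed the limit,
        // so a single huge transmission doesn't cap the batch utilization.
        if spent + cost <= spend_limit {
            included.push(i);
            spent += cost;
        }
    }
    (included, spent)
}
```

With a 99-cost transmission first in a 100-unit budget, later small transmissions can still top up the batch instead of being blocked.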
I think the priority fee should be tackled independently; it'll be a decent change to review:
- we'd need to insert ahead of the transactions_queue (a draft PR exists)
- inserting ahead of the ready queue is a good extra idea
This PR introduces spend limit checks on batch proposals, both at construction and prior to signing. This requires some changes to batch construction: the workers are now drained only after the transmissions have been checked for inclusion in a batch, to avoid reinserting transmissions into the memory pool once the spend limit is surpassed. It was also necessary to expose a `compute_cost` function on the `LedgerService` trait; its internals might still be moved into snarkVM.

This changeset will need to be refined and tested, hence the draft. CI is currently expected to fail.
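The check-then-drain flow described above can be sketched as follows, with hypothetical names and a simplified queue of `(id, cost)` pairs; only the prefix that passes the spend-limit check is drained from the worker, so nothing has to be reinserted on overflow.

```rust
// Illustrative sketch, not the snarkOS implementation: select transmissions
// for a batch under a spend limit, draining only the accepted prefix.
fn select_for_batch(queue: &mut Vec<(u64, u64)>, spend_limit: u64) -> Vec<u64> {
    let mut spent = 0u64;
    let mut take = 0usize;
    for &(_, cost) in queue.iter() {
        if spent + cost > spend_limit {
            break; // leave this and later transmissions in the worker
        }
        spent += cost;
        take += 1;
    }
    // Drain only the checked prefix; the rest was never removed.
    queue.drain(..take).map(|(id, _)| id).collect()
}
```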
Related PRs:
- `BLOCK_SPEND_LIMIT`: snarkVM#2565 (previous discussion)