
[Feature] Batch proposal spend limits #3471

Draft · wants to merge 15 commits into base: staging
Conversation

@niklaslong (Collaborator) commented Jan 22, 2025

This PR introduces spend limit checks on batch proposals, both at construction and prior to signing. This requires some changes to batch construction: the workers are now drained only after the transmissions have been checked for inclusion in a batch, which avoids having to reinsert transmissions into the memory pool once the spend limit is surpassed. It was also necessary to expose a compute_cost function on the LedgerService trait; its internals might still be moved into snarkVM.

This changeset will need to be refined and tested, hence the draft. CI is currently expected to fail.
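The construction change described above can be sketched in isolation. This is a hypothetical, simplified model (the types `Transmission`, the `cost` field, and `select_for_batch` are stand-ins, not the PR's actual code): transmissions are only selected for the batch while the running cost stays under the spend limit, and the caller drains only the selected items from the worker, so nothing ever needs to be reinserted.

```rust
// Hypothetical sketch: select transmissions for a batch without exceeding the
// spend limit. Items are not drained from the worker queue until after they
// have been confirmed for inclusion, so no reinsertion is ever needed.

#[derive(Clone, Debug)]
struct Transmission {
    id: u64,
    cost: u64, // cost in microcredits (assumed precomputed here)
}

/// Selects transmissions whose cumulative cost fits within `spend_limit`.
/// The caller drains only the returned transmissions from the worker.
fn select_for_batch(queue: &[Transmission], spend_limit: u64) -> Vec<Transmission> {
    let mut selected = Vec::new();
    let mut total_cost = 0u64;
    for t in queue {
        // Stop before the limit is surpassed; remaining items stay queued.
        match total_cost.checked_add(t.cost) {
            Some(new_total) if new_total <= spend_limit => {
                total_cost = new_total;
                selected.push(t.clone());
            }
            _ => break,
        }
    }
    selected
}

fn main() {
    let queue = vec![
        Transmission { id: 1, cost: 4 },
        Transmission { id: 2, cost: 5 },
        Transmission { id: 3, cost: 3 },
    ];
    // 4 + 5 = 9 fits within the limit of 10; adding 3 would exceed it.
    let batch = select_for_batch(&queue, 10);
    assert_eq!(batch.len(), 2);
}
```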

Related PRs:

@vicsn (Collaborator) left a comment

Hope it helps

let process = self.ledger.vm().process();

// Deserialize the transaction. If the transaction exceeds the maximum size, then return an error.
let transaction = match transaction {
Collaborator

Deserialization is extremely expensive. I would consider moving this calculation into ledger.rs:check_transaction_basic, where we already deserialize. Perhaps that function can return the compute cost? Or some other design?
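The reviewer's suggestion could look roughly like the following. This is a hedged sketch with stand-in types and a placeholder cost model (`check_transaction_basic` here is a mock, not the real ledger.rs function): the basic check performs its single deserialization and returns the compute cost alongside, so callers never deserialize twice just to price the transaction.

```rust
// Hypothetical sketch: have the basic transaction check return the compute
// cost, reusing the one deserialization it already performs. All names and
// the cost model are stand-ins for illustration only.

struct Transaction {
    bytes: Vec<u8>, // stand-in for the real snarkVM transaction
}

#[derive(Debug)]
enum CheckError {
    TooLarge,
}

const MAX_TRANSACTION_SIZE: usize = 128; // assumed limit for this sketch

/// Deserializes once, validates, and returns the compute cost so that
/// callers need not deserialize a second time to price the transaction.
fn check_transaction_basic(serialized: &[u8]) -> Result<u64, CheckError> {
    // Size check before the expensive deserialization.
    if serialized.len() > MAX_TRANSACTION_SIZE {
        return Err(CheckError::TooLarge);
    }
    // The single deserialization (mocked as a copy here).
    let tx = Transaction { bytes: serialized.to_vec() };
    // Placeholder cost model: proportional to size.
    Ok(tx.bytes.len() as u64)
}

fn main() {
    let cost = check_transaction_basic(&[0u8; 32]).unwrap();
    assert_eq!(cost, 32);
    assert!(check_transaction_basic(&[0u8; 200]).is_err());
}
```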

Collaborator Author

I did consider it and should revisit the idea, but intuitively we might want to avoid coupling the cost calculation with check_transaction_basic, as the latter is only called in propose_batch and not in process_batch_propose_from_peer, at least not directly.

Collaborator

Ah good point... That is unfortunate, because in the case of process_batch_propose_from_peer, we might be retrieving the transmission from disk, in which case we'll most certainly have to incur the deserialization cost.

Maybe if we let fn compute_cost take in a Transaction instead of Data&lt;Transaction&gt;, the deserialization can at least be made explicit. For our own proposal we call it from within check_transaction_basic; for incoming proposals we'll need to deserialize before calling it.

And if it turns out to be a bottleneck, we can always refactor the locations where we deserialize more comprehensively, and potentially create a cache for the compute cost if needed.
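The signature change being discussed can be sketched as follows. This is a simplified model, not the PR's code: `Data` here is a two-variant stand-in for snarkVM's serialized-or-deserialized wrapper, and the cost model is a placeholder. The point is that `compute_cost` takes an already-deserialized `&Transaction`, so every call site pays the deserialization cost explicitly and can reuse the result.

```rust
// Hypothetical sketch: `compute_cost` takes a deserialized `Transaction`
// rather than `Data<Transaction>`, so deserialization is explicit at each
// call site. Types and the cost model are stand-ins for illustration.

struct Transaction {
    bytes: Vec<u8>, // stand-in for the real snarkVM transaction
}

/// Stand-in for `Data<Transaction>`: either a serialized buffer or a value.
enum Data {
    Buffer(Vec<u8>),
    Object(Transaction),
}

impl Data {
    /// The explicit deserialization step, now visible at the call site.
    fn deserialize(self) -> Transaction {
        match self {
            Data::Buffer(bytes) => Transaction { bytes },
            Data::Object(tx) => tx,
        }
    }
}

/// Takes `&Transaction`, not `Data`: callers deserialize once up front
/// and can reuse the deserialized value for signing, checks, etc.
fn compute_cost(tx: &Transaction) -> u64 {
    // Placeholder cost model: proportional to size.
    tx.bytes.len() as u64
}

fn main() {
    let data = Data::Buffer(vec![0u8; 16]);
    let tx = data.deserialize(); // explicit, one-time cost
    assert_eq!(compute_cost(&tx), 16);
}
```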


/// Computes the execution cost in microcredits for a transaction.
fn compute_cost(&self, _transaction_id: N::TransactionID, _transaction: Data<Transaction<N>>) -> Result<u64> {
// Return 1 credit so this function can be used to test spend limits.
Contributor
nit: 10_000_000 microcredits is 10 credits

continue 'inner;

// Reinsert the transmission into the worker, O(n).
worker.shift_insert_front(id, transmission);
@raychu86 (Contributor) commented Feb 21, 2025

If there is a huge transmission that is 99% of the batch spend limit, won't your worker never be able to create a proposal with higher utilization?

Contributor

It might be smarter to pull in other, smaller transactions that can fill the remaining space of the proposal.

This also might be an opportune moment to finally implement the use of priority_fee, which was previously not factored into the ordering.
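The suggested fix for the head-of-line blocking raised above can be sketched as a greedy fill. This is a hedged illustration with stand-in types, not the PR's implementation (priority-fee ordering, mentioned as a follow-up, is deliberately left out): an oversized transmission is skipped rather than terminating selection, so smaller transmissions later in the queue can still fill the remaining budget.

```rust
// Hypothetical sketch: instead of stopping at the first transmission that
// exceeds the remaining budget, skip it and keep scanning for smaller ones.
// Types and costs are stand-ins for illustration only.

#[derive(Clone, Debug)]
struct Transmission {
    id: u64,
    cost: u64,
}

/// Greedily fills the batch up to `spend_limit`: oversized items are skipped
/// rather than ending selection, so later, smaller items can still fit.
fn fill_batch(queue: &[Transmission], spend_limit: u64) -> Vec<u64> {
    let mut selected = Vec::new();
    let mut remaining = spend_limit;
    for t in queue {
        if t.cost <= remaining {
            remaining -= t.cost;
            selected.push(t.id);
        }
        // else: skip this transmission, but keep scanning for smaller ones.
    }
    selected
}

fn main() {
    // Item 2 is nearly the whole budget of 10; skipping it lets 3 and 4 fit.
    let queue = vec![
        Transmission { id: 1, cost: 2 },
        Transmission { id: 2, cost: 9 },
        Transmission { id: 3, cost: 3 },
        Transmission { id: 4, cost: 4 },
    ];
    let ids = fill_batch(&queue, 10);
    assert_eq!(ids, vec![1, 3, 4]);
}
```

A naive loop that breaks at the first over-budget item would instead select only `[1]` here, which is the utilization concern raised in the comment above.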

Collaborator

I think the priority fee should be tackled independently; it'll be a decent change to review:

  • we need to insert ahead of the transactions_queue (a draft PR exists)
  • inserting ahead of the ready queue is a good extra idea.
