Fluffy: Portal EVM call #3119
Merged
Commits (38, all by bhartnett):
02d8569  Start using Nimbus EVM.
43472d5  Implement proof of concept implementation using Nimbus EVM.
1164a11  Cleanup.
86d0d82  Make CommonRef db initialization configurable.
4d2f21b  Return error string. Use txFrame dispose.
0cc05d8  Make state lookups concurrent.
602c815  Move equals into multi_keys.nim.
497303d  Merge branch 'master' into fluffy-evm
64a69bb  Add some documentation and comments. Tests for multikeys equals.
3102ebe  Add logging.
55fb634  Disable linking to RocksDb in Fluffy.
63a1268  Merge branch 'master' into fluffy-evm
76e0ed2  Fix issue discovered when calling another contract from within an exi…
d3b27b2  Fix copyright.
52ed102  Add to address to fetched code.
9496662  Merge branch 'master' into fluffy-evm
c880d7c  Merge branch 'master' into fluffy-evm
03fa696  Move PortalEvm into eth rpc api.
e196a0a  Merge branch 'master' into fluffy-evm
cd73192  Remove existing witness code.
99ced97  Implement collection of witness keys using ordered list.
33c53e2  Merge branch 'improve-witness-keys-collection' into fluffy-evm
7838a4e  Improve implementation. No longer using multikeys.
ea98562  Use witness keys.
52d301a  Merge branch 'improve-witness-keys-collection' into fluffy-evm
f4461ef  Use OrderedTable instead of OrderedTableRef.
cbe98bd  Merge branch 'improve-witness-keys-collection' into fluffy-evm
0f6b1b7  Use latest witness keys changes.
d065691  Merge branch 'master' into fluffy-evm
c78a876  Remove unneeded rocksdb flag after in memory db fix.
a0cd8f2  Improvements.
705c1f2  Merge branch 'improve-witness-keys-collection' into fluffy-evm
34f6e2f  Add set storage test.
e05492d  Merge branch 'improve-witness-keys-collection' into fluffy-evm
fa1910d  Clear witness keys after each call.
6c49854  Implement second state fetch method and put behind a boolean flag. Im…
f7ec7d6  Cleanup test.
8397d9c  Merge branch 'master' into fluffy-evm
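The heart of the change is the new Portal EVM module, listed in full below. Its re-execution strategy can be illustrated first with a self-contained toy, where every name is invented for illustration and nothing is taken from the PR's actual API: run the call against whatever state is in memory, record which state keys were touched, fetch the missing ones, and repeat until the touched-key set stops changing.

# Toy model of the re-execution loop in the module below; all names here are
# illustrative stand-ins, not the PR's actual API.
import std/sets

var known: HashSet[string]        # state fetched so far ("in-memory EVM" stand-in)

proc execute(): HashSet[string] =
  # Pretend-EVM: each piece of state, once available, reveals the next
  # dependency, as when a contract reads a slot to find another address.
  result.incl("account")
  if "account" in known: result.incl("code")
  if "code" in known: result.incl("slot 0")

var lastKeys: HashSet[string]
for attempt in 1 .. 10:           # stand-in for EVM_CALL_LIMIT
  let keys = execute()            # optimistic run; records touched state keys
  if keys == lastKeys:
    echo "access list converged after ", attempt, " executions"
    break
  for k in keys - known:          # newly discovered keys would be fetched
    known.incl(k)                 # concurrently from the portal network here
  lastKeys = keys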
@@ -0,0 +1,256 @@
# Fluffy
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed and distributed under either of
#  * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
#  * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.

{.push raises: [].}

import
  std/sets,
  stew/byteutils,
  chronos,
  chronicles,
  stint,
  results,
  eth/common/[hashes, addresses, accounts, headers],
  ../../execution_chain/db/ledger,
  ../../execution_chain/common/common,
  ../../execution_chain/transaction/call_evm,
  ../../execution_chain/evm/[types, state, evm_errors],
  ../network/history/history_network,
  ../network/state/[state_endpoints, state_network]

from web3/eth_api_types import TransactionArgs

export
  results, chronos, hashes, history_network, state_network, TransactionArgs, CallResult

logScope:
  topics = "portal_evm"

# The Portal EVM uses the Nimbus in-memory EVM to execute transactions using the
# portal state network state data. Currently only call is supported.
#
# Rather than wire the portal state lookups into the EVM directly, the approach
# taken here is to optimistically execute the transaction multiple times with the
# goal of building the correct access list, so that we can then look up the accessed
# state from the portal network, store the state in the in-memory EVM and then
# finally execute the transaction using the correct state. The Portal EVM makes
# use of data in memory during the call and therefore each piece of state is never
# fetched more than once. We know we have found the correct access list if it
# doesn't change after another execution of the transaction.
#
# The assumption here is that network lookups for state data are generally much
# slower than the time it takes to execute a transaction in the EVM, and therefore
# executing the transaction multiple times should not significantly slow down the
# call, given that we gain the ability to fetch the state concurrently.
#
# There are multiple reasons for choosing this approach:
# - Firstly, updating the existing Nimbus EVM to support a different state
#   backend (portal state in this case) is difficult and would require making
#   non-trivial changes to the EVM.
# - This approach also allows us to look up the state concurrently in the event that
#   multiple new state keys are discovered after executing the transaction. This
#   should in theory result in improved performance for certain scenarios. The
#   default approach, where the state lookups are wired directly into the EVM, gives
#   the worst-case performance because all state accesses inside the EVM are
#   completely sequential.

const EVM_CALL_LIMIT = 10000

type
  AccountQuery = object
    address: Address
    accFut: Future[Opt[Account]]

  StorageQuery = object
    address: Address
    slotKey: UInt256
    storageFut: Future[Opt[UInt256]]

  CodeQuery = object
    address: Address
    codeFut: Future[Opt[Bytecode]]

  PortalEvm* = ref object
    historyNetwork: HistoryNetwork
    stateNetwork: StateNetwork
    com: CommonRef

func init(T: type AccountQuery, adr: Address, fut: Future[Opt[Account]]): T =
  T(address: adr, accFut: fut)

func init(
    T: type StorageQuery, adr: Address, slotKey: UInt256, fut: Future[Opt[UInt256]]
): T =
  T(address: adr, slotKey: slotKey, storageFut: fut)

func init(T: type CodeQuery, adr: Address, fut: Future[Opt[Bytecode]]): T =
  T(address: adr, codeFut: fut)

proc init*(T: type PortalEvm, hn: HistoryNetwork, sn: StateNetwork): T =
  let config =
    try:
      networkParams(MainNet).config
    except ValueError as e:
      raiseAssert(e.msg) # Should not fail
    except RlpError as e:
      raiseAssert(e.msg) # Should not fail

  let com = CommonRef.new(
    DefaultDbMemory.newCoreDbRef(),
    taskpool = nil,
    config = config,
    initializeDb = false,
  )

  PortalEvm(historyNetwork: hn, stateNetwork: sn, com: com)

proc call*(
    evm: PortalEvm,
    tx: TransactionArgs,
    blockNumOrHash: uint64 | Hash32,
    optimisticStateFetch = true,
): Future[Result[CallResult, string]] {.async: (raises: [CancelledError]).} =
  let
    to = tx.to.valueOr:
      return err("to address is required")
    header = (await evm.historyNetwork.getVerifiedBlockHeader(blockNumOrHash)).valueOr:
      return err("Unable to get block header")
    # Start fetching code in the background while setting up the EVM
    codeFut = evm.stateNetwork.getCodeByStateRoot(header.stateRoot, to)

  debug "Executing call", to, blockNumOrHash

  let txFrame = evm.com.db.baseTxFrame().txFrameBegin()
  defer:
    txFrame.dispose() # always dispose state changes

  # TODO: review what child header to use here (second parameter)
  let vmState = BaseVMState.new(header, header, evm.com, txFrame)

  var
    # Record the keys of fetched accounts, storage and code so that we don't
    # bother to fetch them multiple times
    fetchedAccounts = initHashSet[Address]()
    fetchedStorage = initHashSet[(Address, UInt256)]()
    fetchedCode = initHashSet[Address]()

  # Set code of the 'to' address in the EVM so that we can execute the transaction
  let code = (await codeFut).valueOr:
    return err("Unable to get code")
  vmState.ledger.setCode(to, code.asSeq())
  fetchedCode.incl(to)
  debug "Code to be executed", code = code.asSeq().to0xHex()

  var
    lastWitnessKeys: WitnessTable
    witnessKeys = vmState.ledger.getWitnessKeys()
    callResult: EvmResult[CallResult]
    evmCallCount = 0

  # Limit the max number of calls to prevent infinite loops and/or DOS in the
  # event of a bug in the implementation.
  while evmCallCount < EVM_CALL_LIMIT:
    debug "Starting PortalEvm execution", evmCallCount

    let sp = vmState.ledger.beginSavepoint()
    callResult = rpcCallEvm(tx, header, vmState)
    inc evmCallCount
    vmState.ledger.rollback(sp) # all state changes from the call are reverted

    # Collect the keys after executing the transaction
    lastWitnessKeys = ensureMove(witnessKeys)
    witnessKeys = vmState.ledger.getWitnessKeys()
    vmState.ledger.clearWitnessKeys()

    try:
      var
        accountQueries = newSeq[AccountQuery]()
        storageQueries = newSeq[StorageQuery]()
        codeQueries = newSeq[CodeQuery]()

      # Loop through the collected keys and fetch the state concurrently.
      # If optimisticStateFetch is enabled then we fetch state for all the witness
      # keys and await all queries before continuing to the next call.
      # If optimisticStateFetch is disabled then we only fetch and then await on
      # one piece of state (the next in the ordered witness keys) while the remaining
      # state queries are still issued in the background just in case the state is
      # needed in the next iteration.
      var stateFetchDone = false
      for k, v in witnessKeys:
        let (adr, _) = k

        if v.storageMode:
          let slotIdx = (adr, v.storageSlot)
          if slotIdx notin fetchedStorage:
            debug "Fetching storage slot", address = adr, slotKey = v.storageSlot
            let storageFut = evm.stateNetwork.getStorageAtByStateRoot(
              header.stateRoot, adr, v.storageSlot
            )
            if not stateFetchDone:
              storageQueries.add(StorageQuery.init(adr, v.storageSlot, storageFut))
              if not optimisticStateFetch:
                stateFetchDone = true
        elif adr != default(Address):
          doAssert(adr == v.address)

          if adr notin fetchedAccounts:
            debug "Fetching account", address = adr
            let accFut = evm.stateNetwork.getAccount(header.stateRoot, adr)
            if not stateFetchDone:
              accountQueries.add(AccountQuery.init(adr, accFut))
              if not optimisticStateFetch:
                stateFetchDone = true

          if v.codeTouched and adr notin fetchedCode:
            debug "Fetching code", address = adr
            let codeFut = evm.stateNetwork.getCodeByStateRoot(header.stateRoot, adr)
            if not stateFetchDone:
              codeQueries.add(CodeQuery.init(adr, codeFut))
              if not optimisticStateFetch:
                stateFetchDone = true

      if optimisticStateFetch:
        # If the witness keys did not change after the last execution then we can
        # stop the execution loop because we have already executed the transaction
        # with the correct state.
        if lastWitnessKeys == witnessKeys:
          break
      else:
        # When optimisticStateFetch is disabled and stateFetchDone is not set then
        # we know that all the state has already been fetched in the last iteration
        # of the loop and therefore we have already executed the transaction with
        # the correct state.
        if not stateFetchDone:
          break

      # Store fetched state in the in-memory EVM
      for q in accountQueries:
        let acc = (await q.accFut).valueOr:
          return err("Unable to get account")
        vmState.ledger.setBalance(q.address, acc.balance)
        vmState.ledger.setNonce(q.address, acc.nonce)
        fetchedAccounts.incl(q.address)

      for q in storageQueries:
        let slotValue = (await q.storageFut).valueOr:
          return err("Unable to get slot")
        vmState.ledger.setStorage(q.address, q.slotKey, slotValue)
        fetchedStorage.incl((q.address, q.slotKey))

      for q in codeQueries:
        let code = (await q.codeFut).valueOr:
          return err("Unable to get code")
        vmState.ledger.setCode(q.address, code.asSeq())
        fetchedCode.incl(q.address)
    except CatchableError as e:
      # TODO: why do the above futures throw a CatchableError and not CancelledError?
      raiseAssert(e.msg)

  callResult.mapErr(
    proc(e: EvmErrorObj): string =
      "EVM execution failed: " & $e.code
  )
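For orientation, here is a hypothetical sketch of driving the new API. Everything in it is an assumption layered on the exported names above: `hn` and `sn` stand for already-initialised HistoryNetwork and StateNetwork instances, the target address and block number are placeholders, and only the error path of the result is inspected since CallResult's fields are not shown in this capture.

# Hypothetical usage sketch, not taken from the PR.
proc demo(hn: HistoryNetwork, sn: StateNetwork) {.async: (raises: [CancelledError]).} =
  let
    evm = PortalEvm.init(hn, sn)
    toAddr = default(Address) # placeholder; a real contract address in practice
    txArgs = TransactionArgs(to: Opt.some(toAddr))
  # Execute an eth_call-style read against the state at an example block number.
  let res = await evm.call(txArgs, 22_000_000'u64)
  if res.isErr():
    echo "call failed: ", res.error()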
@@ -15,3 +15,5 @@
--styleCheck:usages
--styleCheck:error
--hint[Processing]:off

-d:"stateless"
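A guess at why this define is added, based only on the commit titles about witness-key collection and not stated anywhere in this capture: the ledger's witness-key tracking used by the Portal EVM is likely compiled in only when `stateless` is defined. In Nim, a `-d:` flag gates code at compile time like this:

# Generic illustration of a -d:stateless compile-time switch; the strings
# below are illustrative, not the repository's actual code.
when defined(stateless):
  echo "witness-key collection compiled in"
else:
  echo "witness-key collection compiled out"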
Review comment: When awaiting these futures here, the internal future type raises a CatchableError even though the async proc being called actually raises a CancelledError. Not sure what is causing this or how I should handle it.
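A possible explanation, offered as an assumption about chronos rather than a confirmed diagnosis: an async proc without an explicit `raises` annotation produces a future whose tracked failure type is the broad CatchableError, so `await`ing it is checked against CatchableError regardless of what the body can actually raise. Annotating the callee narrows the checked set:

# Minimal sketch of the assumed chronos behaviour.
import chronos

proc loose(): Future[int] {.async.} =
  # No raises annotation: awaiting this is typed as possibly raising
  # CatchableError, mirroring the situation described above.
  return 1

proc strict(): Future[int] {.async: (raises: [CancelledError]).} =
  # Annotated: callers only need to handle CancelledError.
  return 1

proc caller() {.async: (raises: [CancelledError]).} =
  discard await strict() # exception sets line up
  try:
    discard await loose() # needs the broad handler, as in the code above
  except CatchableError:
    discard

waitFor caller()

If that holds, adding raises annotations to the state network procs would let the broad except CatchableError in the call loop be narrowed.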