
eth call support #3100

Draft · wants to merge 4 commits into master
Conversation

chirag-parmar
Contributor

No description provided.

@bhartnett
Contributor

The approach you've taken here is interesting.

Unfortunately, it will only work if the RPC provider returns the correct access list. We don't want to trust the RPC provider, so we can't trust that the access list they provide is correct. For example, they could give you an access list that makes the eth_call result completely different: even though you are verifying the account and storage proofs of each account and slot, there is no way to verify that you are looking up the correct pre-state before the transaction is executed.

I think using the access list purely as a performance optimization would be ok as long as the EVM itself still looks up the correct state during transaction execution. In this scenario the worst the RPC provider could do is a DoS by sending you a very large fake access list, but since the RPC provider can already effectively DoS your application by not responding to requests, this is likely acceptable.

@bhartnett
Contributor

> Unfortunately, it will only work if the RPC provider returns the correct access list. We don't want to trust the RPC provider, so we can't trust that the access list they provide is correct. For example, they could give you an access list that makes the eth_call result completely different: even though you are verifying the account and storage proofs of each account and slot, there is no way to verify that you are looking up the correct pre-state before the transaction is executed.

You might be able to solve this problem by building up the access list yourself: execute the in-memory EVM multiple times, each time collecting more of the access-list keys and state, until you eventually execute the transaction against the full state, which gives you the result for eth_call.

Here is the algorithm:

- Fetch the code to be executed
- While the EVM execution is unsuccessful, or the touched accounts, code, and slots have changed since the last execution:
    - Execute the code using the in-memory EVM
    - Collect the touched accounts, code, and slots from the EVM state
    - Fetch all touched accounts, code, and slots concurrently and pass them into a new EVM state

The last execution will be successful and should be using the correct keys.

You can use the makeMultiKeys function in ledger.nim to collect the touched account keys and storage slots from the EVM state after executing the transaction. You might also be able to use the accessList functions for this.
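The fixed-point loop above can be sketched as follows. This is an illustrative Python toy, not the project's Nim API: `execute_evm` stands in for the in-memory EVM (it plays the role that `makeMultiKeys` serves in `ledger.nim`, reporting which keys a run touched), and `fetch_slot` stands in for a proof-verified network fetch. All names here are hypothetical.

```python
# Toy stand-in for the chain state that the RPC provider would serve
# (with proofs) — a hypothetical example, not real chain data.
REMOTE_STORAGE = {"slotA": 2, "slotB": 3, "slotC": 7}

def fetch_slot(key):
    """Stand-in for a concurrent, proof-verified storage fetch."""
    return REMOTE_STORAGE[key]

def execute_evm(state):
    """Toy EVM: multiplies three slots together. A run that hits a
    missing slot fails, but still reports everything it touched —
    mirroring how each real execution reveals more access-list keys."""
    touched = set()
    value = 1
    for key in ("slotA", "slotB", "slotC"):
        touched.add(key)
        if key not in state:
            return None, touched  # missing pre-state: unsuccessful run
        value *= state[key]
    return value, touched

def eth_call():
    state = {}
    prev_touched = None
    while True:
        result, touched = execute_evm(state)
        # Fixed point: the run succeeded and touched nothing new.
        if result is not None and touched == prev_touched:
            return result
        # Fetch everything newly touched (concurrently, in the real
        # client) and retry against the enlarged state.
        for key in touched - state.keys():
            state[key] = fetch_slot(key)
        prev_touched = touched

print(eth_call())  # prints 42
```

Because the EVM only ever uses state it fetched and verified itself, a malicious access list from the provider can at worst slow the loop down; it cannot change the final result.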

Comment on lines +255 to +263
```nim
# lcProxy.proxy.rpc("eth_getBlockByNumber") do(
#   blockTag: BlockTag, fullTransactions: bool
# ) -> Opt[BlockObject]:
#   lcProxy.getBlockByTag(blockTag)
#
# lcProxy.proxy.rpc("eth_getBlockByHash") do(
#   blockHash: Hash32, fullTransactions: bool
# ) -> Opt[BlockObject]:
#   lcProxy.blockCache.getPayloadByHash(blockHash)
```
Contributor


What is the plan here now with the removal of the block cache?

"Recent" blocks were coming in and being stored in that cache. Now that you only have the headers, is the idea to request blocks over the regular EL JSON-RPC API?

Also, with the headers-only cache, you are already kind of storing those headers via the consensus light client, albeit not directly addressable by hash.

Contributor Author

@chirag-parmar Mar 8, 2025


> "Recent" blocks were coming in and being stored in that cache

These blocks were imported directly through the p2p layer. Verifying them defeats the purpose of a light client. I am not sure why we were doing that before.

> Now that you only have the headers, is the idea to request blocks over the regular EL JSON-RPC API?

Yes, that's exactly the idea. The goal is to save bandwidth, first by not downloading consensus block data, and second by downloading whole blocks only on a need-only basis. Furthermore, I expect the state to be queried more often than the transactions.
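For illustration, fetching a block on demand over the regular EL JSON-RPC API boils down to a standard `eth_getBlockByHash` request. This sketch only builds the request body per the JSON-RPC spec; the transport (HTTP POST to a provider endpoint) and any helper names are assumptions, not this PR's code.

```python
import json

def get_block_by_hash_request(block_hash: str, full_transactions: bool = False) -> str:
    """Build a standard eth_getBlockByHash JSON-RPC request body.

    Actually sending it is omitted here; the point is that a block is
    fetched only when a query needs it, instead of being streamed in
    over p2p and cached ahead of time.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByHash",
        "params": [block_hash, full_transactions],
    })

req = json.loads(get_block_by_hash_request("0x" + "ab" * 32))
print(req["method"])  # prints eth_getBlockByHash
```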

> you are already kind of storing those headers via the consensus light client, albeit not directly addressable by hash

I looked into the eth2 code base and it didn't seem like the light client stored the update (LightClientOptimisticUpdate and LightClientFinalityUpdate) headers anywhere. But it is possible that I missed looking at some code. Could you please point me to it?

Contributor


> I am not sure why we were doing that before.

I think they were there because, before the Capella fork, the LightClientHeader did not yet hold the ExecutionPayloadHeader, and thus the only way to have that data (and all the roots to verify it with) was to have the recent blocks.

Now (well, since Capella) it indeed makes much less sense to fetch all that data, as it probably was/is the main bandwidth consumer.

> I looked into the eth2 code base and it didn't seem like the light client stored the update (LightClientOptimisticUpdate and LightClientFinalityUpdate) headers anywhere. But it is possible that I missed looking at some code. Could you please point me to it?

No, you're right about this. I was mixing it up with something else.
