eth call support #3100
base: master
Conversation
The approach you've taken here is interesting. Unfortunately, it only works if the RPC provider returns the correct access list. We don't want to trust the RPC provider, so we can't trust that the access list they provide is correct. They could, for example, hand you an access list that causes the eth_call result to be completely different: even though you verify the account and storage proofs of each account and slot, there is no way to verify that you are looking up the correct pre-state before the transaction is executed. I think using the access list purely as a performance optimization would be OK, as long as the EVM itself still looks up the correct state during transaction execution. In that scenario the worst the RPC provider could do is a DoS by sending you a very large fake access list, but since the RPC provider can already effectively DoS your application by not responding to requests, this is likely acceptable.
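For illustration, here is a minimal sketch (in Nim) of this "optimization only" pattern, where the provider's access list is used solely to prefetch. `fetchVerified`, `runEvm`, the string-based `StateKey`, and `callWithUntrustedHint` are all hypothetical placeholders, not APIs from this codebase:

```nim
import std/[tables, hashes]

type
  StateKey = tuple[address: string, slot: string]

# Hypothetical stand-ins, not APIs from this repository: `fetchVerified`
# returns a value whose Merkle proof was checked against the verified
# state root; `runEvm` executes the call, invoking `read` for every
# account/slot the EVM actually touches.
proc fetchVerified(key: StateKey): string = ""
proc runEvm(read: proc (key: StateKey): string): string = ""

proc callWithUntrustedHint(hint: seq[StateKey]): string =
  var cache = initTable[StateKey, string]()
  # Prefetch from the hint: a wrong or bloated access list only wastes
  # bandwidth (worst case, a DoS); every value is still proof-verified.
  for key in hint:
    cache[key] = fetchVerified(key)
  proc read(key: StateKey): string =
    # The EVM decides what to read; the hint never substitutes state.
    if key notin cache:
      cache[key] = fetchVerified(key)
    cache[key]
  runEvm(read)
```

Because `read` is the only path into the EVM's state, a forged hint can add work but cannot alter the result.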
You might be able to solve this problem by building up the access list yourself: execute the in-memory EVM multiple times, each time collecting more of the access-list keys and state, until you eventually execute the transaction against the full state, which gives you the final result. The algorithm, in outline: run the call against the state collected so far, record the keys it needed, fetch and verify that state, and repeat. The last execution will be successful and should be using the correct keys.
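A minimal sketch of that loop (in Nim, for illustration), assuming hypothetical helpers `executeWithPartialState` and `fetchProofs` and a simplified key type; none of these names come from this repository:

```nim
import std/[sets, hashes]

type
  StateKey = tuple[address: string, slot: string]
  RunResult = object
    output: seq[byte]            # eth_call return data
    missing: HashSet[StateKey]   # state the EVM needed but did not have

# Hypothetical: run the call against the partial state, recording every
# account/slot the EVM touches that is not yet present locally.
proc executeWithPartialState(state: HashSet[StateKey]): RunResult =
  discard

# Hypothetical: eth_getProof each key and verify the proofs against the
# state root of the light-client-verified header.
proc fetchProofs(keys: HashSet[StateKey]) =
  discard

proc ethCall(): seq[byte] =
  var state = initHashSet[StateKey]()
  while true:
    let res = executeWithPartialState(state)
    if res.missing.len == 0:
      return res.output          # full pre-state reached: result is final
    fetchProofs(res.missing)     # grow the verified state and retry
    state.incl res.missing       # the set only grows, so the loop ends
```

An untrusted access-list hint could seed `state` before the first iteration as a pure prefetch, without affecting correctness.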
# lcProxy.proxy.rpc("eth_getBlockByNumber") do( | ||
# blockTag: BlockTag, fullTransactions: bool | ||
# ) -> Opt[BlockObject]: | ||
# lcProxy.getBlockByTag(blockTag) | ||
# | ||
# lcProxy.proxy.rpc("eth_getBlockByHash") do( | ||
# blockHash: Hash32, fullTransactions: bool | ||
# ) -> Opt[BlockObject]: | ||
# lcProxy.blockCache.getPayloadByHash(blockHash) |
What is the plan here now with the removal of the block cache?
"Recent" blocks were coming in and being stored in that cache. Now that you only have the headers, is the idea to request blocks over the regular EL JSON-RPC API?
Also, with the headers-only cache, you are already kind of storing those headers via the consensus light client, albeit not directly addressable by hash.
"Recent" blocks were coming in and being stored in that cache
These were blocks that were directly imported through the p2p layer. Verifying them defeats the purpose of a light client. I am not sure why we were doing that before.
Now that you only have the headers, is the idea to request blocks over the regular EL JSON-RPC API?
Yes, that is exactly the idea. The goal is to save on bandwidth: first by not downloading consensus block data, and second by downloading whole blocks only on an as-needed basis. Furthermore, I expect the state to be queried more often than the transactions.
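For illustration, here is a minimal Nim sketch of what such on-demand fetching could look like, assuming the block hash comes from a header the consensus light client has already verified. `fetchBlockByHash`, `computeBlockHash`, and the stub types are hypothetical placeholders, not this repository's API:

```nim
import std/options

type
  Hash32 = array[32, byte]
  BlockObject = object
    parentHash: Hash32   # plus the rest of the block fields, elided here

# Hypothetical stand-ins: `fetchBlockByHash` asks the untrusted EL
# JSON-RPC provider (eth_getBlockByHash) and `computeBlockHash`
# re-hashes the returned block locally.
proc fetchBlockByHash(h: Hash32): Option[BlockObject] = none(BlockObject)
proc computeBlockHash(blk: BlockObject): Hash32 = default(Hash32)

proc getVerifiedBlock(verifiedHash: Hash32): Option[BlockObject] =
  ## `verifiedHash` comes from a header the consensus light client has
  ## already verified, so the fetched body only needs to hash-match it.
  let blk = fetchBlockByHash(verifiedHash)
  if blk.isNone:
    return none(BlockObject)
  if computeBlockHash(blk.get) != verifiedHash:
    return none(BlockObject)   # provider returned a forged block
  blk
```

Since the hash check binds the body to the verified header, the provider cannot substitute block contents; the worst it can do is refuse to answer.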
you are already kind of storing those headers via the consensus light client, albeit not directly addressable by hash
I looked into the eth2 code base and it didn't seem like the light client stored the update (LightClientOptimisticUpdate and LightClientFinalityUpdate) headers anywhere. But it is possible that I missed looking at some code. Could you please point me to it?
I am not sure why we were doing that before.
I think they were there because, before the Capella fork, the LightClientHeader did not yet hold the ExecutionPayloadHeader. Thus the only way to have that data (and all the roots to verify with) was to have the recent blocks.
Now (well, since Capella) it indeed makes much less sense to get all that data, as it probably was/is the main bandwidth consumer.
I looked into the eth2 code base and it didn't seem like the light client stored the update (LightClientOptimisticUpdate and LightClientFinalityUpdate) headers anywhere. But it is possible that I missed looking at some code. Could you please point me to it?
No, you're right about this. I was mixing it up with something else.