feat(benchmark): add benchmark_test and benchmark_state_test test type #1945
Conversation
There are some issues in generating the fixture: the newly created fixture is much larger than the original one, even though the content should be the same and therefore the size should match. But this is not a big problem for now. The major issue is resolving the failing test in CI, which I cannot reproduce locally.
This can come in handy for benchmark tests, because they essentially force the consumption of all the available gas. That condition forces us to implement padding techniques to consume EXACTLY all the gas available in a block, when in reality, for a benchmark, we don't care about this at all.
@CPerezz I think this is still necessary for the Nethermind team (increasing the gas limit) and the zkEVM team (proving the entire block)? For gas limit testing, I am not sure they can run only one tx and then derive the entire block execution time from it.
But you can emit a warning if needed. Why does not spending ALL the gas exactly need to be a failure? I agree it has to be within a bound, sure, but precision down to the unit is something else entirely, especially when you have to account for memory expansion and other costs. It's almost impossible to avoid padding. I'm not advocating removing this completely, but maybe relaxing it. Or at least, it would be useful to know why it needs to fail specifically. When and why was this introduced?
@CPerezz Thank you for the explanation, it is very clear! I will review the included features again and discuss with the team. As you can see, this is still a draft and we welcome any feedback. We also want to know what the stateless client team needs for benchmarking: what are your considerations when benchmarking?
@LouisTsai-Csie I'm speaking only in regard to the "State bottlenecks" project, which is within the stateless-consensus team. Our goal is to measure how different client implementations behave under heavy load and different state sizes, among other things. For that, we need these kinds of benchmarks, but it turns out to be quite tricky to match the gas spent perfectly, and it's not required at all. 1% of wiggle room is enough to consider the benchmark useful even if it doesn't spend all the gas of the block.
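A minimal sketch of the relaxed check being discussed (the tolerance constant and the function itself are hypothetical, not part of the framework): fail only when the unused gas exceeds a configurable fraction of the block gas limit, and warn otherwise.

```python
# Hypothetical sketch of the relaxed gas check discussed above; the 1% tolerance
# and this function are illustrative only, not the framework's actual behavior.
GAS_TOLERANCE = 0.01  # 1% wiggle room


def check_benchmark_gas(gas_used: int, block_gas_limit: int) -> None:
    """Fail only when the benchmark leaves more than the tolerated amount of gas unused."""
    unused = block_gas_limit - gas_used
    if unused < 0:
        raise AssertionError("benchmark consumed more gas than the block gas limit")
    if unused > block_gas_limit * GAS_TOLERANCE:
        raise AssertionError(
            f"benchmark left {unused} gas unused, above the {GAS_TOLERANCE:.0%} tolerance"
        )
    if unused > 0:
        print(f"warning: benchmark left {unused} gas unused (within tolerance)")
```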
```python
pre: Alloc
post: Alloc
tx: Optional[Transaction] = None
blocks: Optional[List[Block]] = None
```
Re #2112, I think we could perhaps have `setup_tx` and `setup_blocks`, which contain transactions that are specifically part of the benchmark setup.

The main problem I see is that we currently do `pre.fund_eoa` for both (1) accounts that send these setup transactions and (2) accounts that send the actual benchmarking workload transactions, and they are indistinguishable at the moment.

One option could be to add a field to `pre.fund_eoa` that indicates whether the account is meant to send setup transactions or workload transactions, so we can fund this account only in the setup phase of `execute`:

```python
setup_account = pre.fund_eoa(account_type="setup")
```

Downside being that the test writer needs to be cognizant of this and properly label all accounts.
Just spitballing here, but what if we have context managers manage each phase for benchmark tests?

```python
@pytest.mark.benchmark
def test_some_benchmark(benchmark, pre, blockchain_test):
    with benchmark.setup():  # Auto-tagged as setup
        setup_contract = pre.deploy_contract(...)
        contract_under_test = pre.deploy_contract(code=..., storage=..., stub="...")
        setup_acct = pre.fund_eoa()
        setup_block = Block(txs=[
            Transaction(...),
            Transaction(...),
        ])

    with benchmark.execution():  # Auto-tagged as execution
        acct1 = pre.fund_eoa()
        # for execute remote this is the seed / private key sender?
        execution_block = Block(txs=[
            Transaction(...),
        ])

    blockchain_test(...)
```
One possible way I've used this in the past is tracking certain contexts with `ContextVar`. This can be reset with every test and could be used in a `try` / `finally` sort of block. A downside (but maybe a plus?) is that you also have to be explicit about each phase, and this may not always work out to be so deterministic 🤔. These are things that would have to be determined anyway, I think, with any sort of phase management.
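For illustration, a minimal sketch of how a `ContextVar`-based phase tracker could back `benchmark.setup()` / `benchmark.execution()` (names and structure are assumptions, not the framework's API):

```python
# Hypothetical sketch of a ContextVar-based phase tracker; names are illustrative.
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Iterator

_phase: ContextVar[str] = ContextVar("benchmark_phase", default="execution")


@contextmanager
def phase(name: str) -> Iterator[None]:
    """Tag everything inside the block with the given phase, restoring the previous one on exit."""
    token = _phase.set(name)
    try:
        yield
    finally:
        _phase.reset(token)


def current_phase() -> str:
    """Helpers such as pre.fund_eoa could read this to auto-tag accounts and transactions."""
    return _phase.get()
```

`benchmark.setup()` and `benchmark.execution()` could then be thin wrappers around `phase("setup")` and `phase("execution")`.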
This would be a very nice solution. If we could make it so that the default context is `execution` (or `workload` perhaps?), I think that would be great.
After going through the current implementation and thinking about it, I think this PR is mostly on the right track.
My suggestions would be:
- We have a single new spec, `benchmark_tests`, that receives `setup_txs` and `workload_txs`, or a `generator`.
- We have multiple generator subclasses, all of which subclass `BenchmarkCodeGenerator` and implement `generate_setup_txs` and `generate_workload_txs` (and perhaps `deploy_contracts`).
- Internally, `benchmark_tests` takes `setup_txs` (or calls `generator.generate_setup_txs()`) and, if any, generates a first setup block, and then takes `workload_txs` (or calls `generator.generate_workload_txs()`) and puts them in a different block.
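A sketch of what that shape could look like (class and method names follow the suggestion above; `Transaction` and `Block` are the framework types already used in this PR, and the assembly function is purely illustrative):

```python
# Illustrative only: the suggested generator interface and how benchmark_tests
# could assemble a setup block followed by a separate workload block.
# Transaction and Block are the framework types used elsewhere in this PR (assumed imported).
from abc import ABC, abstractmethod
from typing import List


class BenchmarkCodeGenerator(ABC):
    """Generates the transactions for a benchmark, split by phase."""

    @abstractmethod
    def generate_setup_txs(self) -> List[Transaction]:
        """Transactions that prepare state (deployments, storage warm-up, ...)."""

    @abstractmethod
    def generate_workload_txs(self) -> List[Transaction]:
        """Transactions that form the actual benchmark workload."""


def build_benchmark_blocks(generator: BenchmarkCodeGenerator) -> List[Block]:
    """First block (if needed) holds the setup txs; a separate block holds the workload."""
    blocks: List[Block] = []
    setup_txs = generator.generate_setup_txs()
    if setup_txs:
        blocks.append(Block(txs=setup_txs))
    blocks.append(Block(txs=generator.generate_workload_txs()))
    return blocks
```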
I'm leaning more towards removing `benchmark_state` and leaving only `benchmark`, because it feels like the state format is heavily constrained by the transaction gas limit cap; it's simply more work to introduce two different formats, and it's also confusing to testers, who would have to know which one to use each time.
```python
class BenchmarkCodeGenerator(ABC):
    """Abstract base class for generating benchmark bytecode."""

    def __init__(
        self,
        fork: Fork,
        attack_block: Bytecode,
        setup: Optional[Bytecode] = None,
    ):
        """Initialize with fork, attack block, and optional setup bytecode."""
        self.fork = fork
        self.setup = setup or Bytecode()
        self.attack_block = attack_block
```
If we decide to stick with this kind of abstract class, we can refactor this to be a `dataclass`.
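For example, a minimal sketch of that refactor (field names taken from the existing `__init__`; `Fork` and `Bytecode` are the types the module already uses):

```python
# Illustrative dataclass refactor of the class above; behavior is unchanged,
# the `setup or Bytecode()` fallback becomes a default_factory.
from abc import ABC
from dataclasses import dataclass, field


@dataclass
class BenchmarkCodeGenerator(ABC):
    """Abstract base class for generating benchmark bytecode."""

    fork: Fork
    attack_block: Bytecode
    setup: Bytecode = field(default_factory=Bytecode)
```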
🗒️ Description
As EIP-7825 is introduced in the Fusaka upgrade, most legacy test cases would fail. This PR adds two test wrappers, `benchmark_test` and `benchmark_state_test`, to replace the pure `blockchain_test` and `state_test` test types.
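For illustration only (the actual fixture signature is defined by this PR; the `...` placeholders stand in for real workload code and parameters), a test written against the new wrapper might look roughly like this:

```python
# Hypothetical usage sketch of the new benchmark_test wrapper; the argument
# names follow the spec fields shown in this PR (pre, post, tx), everything
# else is a placeholder.
@pytest.mark.benchmark
def test_example_workload(benchmark_test, pre, fork):
    contract = pre.deploy_contract(code=...)  # contract running the workload
    sender = pre.fund_eoa()                   # account sending the workload tx
    benchmark_test(
        pre=pre,
        post={},
        tx=Transaction(to=contract, sender=sender),
    )
```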
🔗 Related Issues or PRs
Issue #1896
✅ Checklist
- Ran fast `tox` checks to avoid unnecessary CI fails, see also Code Standards and Enabling Pre-commit Checks: `uvx --with=tox-uv tox -e lint,typecheck,spellcheck,markdownlint`.
- PR title starts with `type(scope):`.
- Ran `mkdocs serve` locally and verified the auto-generated docs for new tests in the Test Case Reference are correctly formatted.
- Included the `@ported_from` marker.