Support parametrized pytest tests #55
Comments
Support for parameterized tests is present in other ports of the approval test tool (see for example https://stackoverflow.com/questions/49035010/approvaltests-with-testcase-nunit-use-the-same-approval-file for how it's done in C#). We should have this in the Python version too. It's just a question of working out how to make it pythonic and then adding it.
The recent addition of
You totally saved me with this issue!

```python
@pytest.fixture
def verify(request):
    def verify(data, reporter=None):
        verify_with_namer(data, get_scenario_namer(request.node.name), reporter)
    return verify
```

Unfortunately, this causes duplication in the test name. To fix it, and still match the names printed by pytest, we need to modify `ScenarioNamer`.
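For context, here is a sketch (plain Python, no pytest required) of why the duplication happens. Pytest's node name for a parametrized test already embeds the function name, so a namer that appends `request.node.name` to a file name derived from the test function repeats it. The helper below is illustrative only, not library code:

```python
def parametrized_node_name(func_name, param_id):
    # Illustration only: pytest names a parametrized test's node
    # "<function name>[<param id>]", e.g. "test_add[2-3]".
    return f"{func_name}[{param_id}]"

# A namer that builds "<test name>.<scenario>" from a scenario string that
# is itself the node name ends up with the function name twice:
name = parametrized_node_name("test_add", "2-3")
approved = f"test_add.{name}.approved.txt"
print(approved)  # test_add.test_add[2-3].approved.txt
```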
That looks neat! How about putting this code in approvaltests/core/scenario_namer.py? Do you think you could also add a test to tests/test_scenarios.py? I'd like to better understand how this code works and what changes you need in ScenarioNamer. Emily
This commit adds support for parameterized tests in Pytest. Closes approvals#55
PR is up. I added a comment on the
Hello there!
Any idea when this might be supported? Or is the official solution to use namers?
Honestly I thought it was already fixed. I know there's been quite a bit of development work done on python approvals lately. @isidore can you comment?
Through the use of 'namers' we can make this work, but I wasn't sure if that was the expected way of supporting parametrization or if the pytest integration would do it automagically.
I see some nice examples from @isidore and @SwamyDev from the last few months. Thanks both. (See ApprovalTests.Python/tests/test_scenarios.py, lines 24 to 28 at 18aa8a2.)
Would it be correct that this is the "official solution"? (I like it.)

AFAICT there is nothing currently that would do this naming automagically. Given some of the more complex cases (below), that makes sense; it seems non-trivial. But I wonder if it would be useful to throw a warning or error for parameterized tests (pytests) that don't provide a namer. I found it surprising on my first use of approvaltests with a parameterized test that only a single pair of received/approved files was generated.

Some more complex cases:
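For readers landing here, a minimal sketch of the naming idea behind that "official solution". This is illustrative plain Python, not the actual scenario namer from the approvaltests library; the class name and method are stand-ins for how the real namer extends the base approved-file name:

```python
class ScenarioNamerSketch:
    # Hypothetical stand-in for a scenario namer: it extends a base
    # approved-file name with the scenario parameters, so each
    # parametrization gets its own approved/received file pair.
    def __init__(self, base_name, *scenarios):
        self.base_name = base_name
        self.scenarios = scenarios

    def approved_filename(self):
        suffix = ".".join(str(s) for s in self.scenarios)
        return f"{self.base_name}.{suffix}.approved.txt"

namer = ScenarioNamerSketch("test_scenario", 1, "two")
print(namer.approved_filename())  # test_scenario.1.two.approved.txt
```

Because the parameters are baked into the file name, two parametrizations can never clobber each other's golden file.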
Autonaming ought to be trivial with pytest since pytest generates unique names for you in the test node's 'nodeid' field. They're not always pretty names, so a fallback to using a namer would probably be important for some users. Nevertheless, everything we need to do autonaming is already available (as far as I know... I don't know of any situations where pytest won't generate a unique ID, and I don't know how it would operate if it couldn't). See for example how this is done in godkjenn's pytest plugin.
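As a rough illustration of the point above (an assumed helper, not part of any library): pytest's `nodeid` is already unique per parametrization, so an autonamer mostly just has to sanitize it into a filesystem-safe stem:

```python
import re

def nodeid_to_stem(nodeid):
    # Hypothetical sanitizer: collapse every run of characters that is
    # unsafe in a file name into "_", keeping word chars, dots and hyphens.
    return re.sub(r"[^\w.\-]+", "_", nodeid).strip("_")

print(nodeid_to_stem("tests/test_math.py::test_add[2-3]"))
# tests_test_math.py_test_add_2-3
```

The resulting stems are not pretty, which is why falling back to an explicit namer would still matter for some users.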
@abingham - oh that's great. Thanks for the knowledge. Autonaming would be great, but manual works for me for now. I'd be happy to review/test autonaming if that would be helpful.
It appears that parametrized pytest tests aren't properly supported in approval. Rather than treating each invocation of the parametrized test separately, approval treats them identically, i.e. it looks for the "golden" output in the same file for each parametrization.
Pytest itself allows each parametrization to pass or fail independently, so I had hoped approval would as well.
My current motivation for this is that I have a small compiler, and I want to use approval for verifying compilation results for a corpus of inputs. I'd like to just have a directory where I can drop new inputs and have them automatically picked up by my tests without needing to write a new test method, and parametrization seems like the way to go here.
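A sketch of the corpus layout described above (the directory name and `.input` extension are assumptions for illustration): glob the corpus directory and pair each input with a per-input approved file, so dropping in a new file creates a new case without writing a new test method:

```python
from pathlib import Path

def collect_corpus(corpus_dir):
    # Each *.input file becomes one test case; its expected output lives
    # next to it as <stem>.approved.txt.
    return [(src, src.with_suffix(".approved.txt"))
            for src in sorted(Path(corpus_dir).glob("*.input"))]
```

With pytest, the returned list can feed `@pytest.mark.parametrize`, giving each input its own pass/fail status and its own approved file.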