Question
The purpose of @pytest.mark.incremental is that if one test fails, the tests after it are marked as expected to fail.
However, when I use this in conjunction with parametrization I get undesired behavior.
For example, in the case of this mock code:
# conftest.py
import pytest

def pytest_generate_tests(metafunc):
    metafunc.parametrize("input", [True, False, None, False, True])

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
# test.py
import pytest

@pytest.mark.incremental
class TestClass:
    def test_input(self, input):
        assert input is not None

    def test_correct(self, input):
        assert input == True
I'd expect the test class to run
- test_input on True,
- followed by test_correct on True,
- followed by test_input on False,
- followed by test_correct on False,
- followed by test_input on None,
- followed by (xfailed) test_correct on None, and so on.
Instead, what happens is that the test class
- runs test_input on True,
- then runs test_input on False,
- then runs test_input on None,
- then marks everything from that point onwards as xfailed (including the test_corrects).
What I assume is happening is that parametrization takes priority over proceeding through the functions in a class. My question is whether it is possible to override this behaviour or work around it somehow, as the current situation makes marking a class as incremental completely useless to me.
(Is the only way to handle this to copy-paste the code for the class over and over, each time with different parameters? The thought is repulsive to me.)
Answer 1:
The solution to this is described in https://docs.pytest.org/en/latest/example/parametrize.html under the heading "A quick port of 'testscenarios'".
The code listed there is shown below. What the code in conftest.py does is look for a variable named scenarios in the test class. When it finds that variable, it iterates over each item of scenarios, expecting an id string with which to label the test and a dictionary mapping argnames to argvalues:
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append([x[1] for x in items])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
# content of test_scenarios.py
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios(object):
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
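The key point for your problem is scope="class": with class scope, pytest groups the runs so that all test methods for one scenario execute before moving on to the next scenario, which is exactly the ordering the incremental mark needs. Collecting the example above should produce an order along these lines (ids are pytest-generated from idlist):

test_scenarios.py::TestSampleWithScenarios::test_demo1[basic]
test_scenarios.py::TestSampleWithScenarios::test_demo2[basic]
test_scenarios.py::TestSampleWithScenarios::test_demo1[advanced]
test_scenarios.py::TestSampleWithScenarios::test_demo2[advanced]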
You can also modify the function pytest_generate_tests to accept different input data types. For example, if you have a list that you would usually pass to
@pytest.mark.parametrize("varname", varval_list)
you can reuse that same list in the following way:
# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys
    for idx, scenario in enumerate(metafunc.cls.scenario_parameters):
        idlist.append(str(idx))
        argvalues.append([scenario])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
# content of test_scenarios.py
varval_list = ['a', 'b', 'c', 'd']

class TestSampleWithScenarios(object):
    scenario_parameters = varval_list
    scenario_keys = ['varname']

    def test_demo1(self, varname):
        assert isinstance(varname, str)

    def test_demo2(self, varname):
        assert isinstance(varname, str)
The id will be an autogenerated number (you can change that to something you specify). As written, this implementation won't handle multiple parametrization variables, so you have to pack those into a single list, or adapt pytest_generate_tests to handle them for you, as sketched below.
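A minimal sketch of that adaptation, assuming scenario_parameters holds one tuple per scenario whose values line up positionally with the names in scenario_keys (the class attributes and values here are hypothetical):

# content of conftest.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    argnames = metafunc.cls.scenario_keys
    for idx, scenario in enumerate(metafunc.cls.scenario_parameters):
        idlist.append(str(idx))
        # each scenario is a tuple of values, one per name in scenario_keys
        argvalues.append(list(scenario))
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# content of test_scenarios.py
class TestSampleWithScenarios(object):
    scenario_keys = ['varname', 'count']
    scenario_parameters = [('a', 1), ('b', 2)]

    def test_demo(self, varname, count):
        assert isinstance(varname, str)
        assert isinstance(count, int)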
Answer 2:
The following solution does not require any change to your test class:
# content of conftest.py
from collections import defaultdict

import pytest

_test_failed_incremental = defaultdict(dict)

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None and call.excinfo.typename != "Skipped":
            param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
            _test_failed_incremental[str(item.cls)].setdefault(param, item.originalname or item.name)

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        originalname = _test_failed_incremental[str(item.cls)].get(param)
        if originalname:
            pytest.xfail("previous test failed ({})".format(originalname))
It works by keeping a dictionary per class, keyed by the index of the parametrized input, with the name of the test method that failed as the value. In your example, the dictionary _test_failed_incremental will be
defaultdict(<class 'dict'>, {"<class 'test.TestClass'>": {(2,): 'test_input'}})
showing that the third run (index=2) has failed for the class test.TestClass. Before running a test method in the class for a given parameter, it checks whether any previous test method in the class has failed for that same parameter and, if so, xfails the test with the name of the method that first failed.
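To make the keying concrete, here is an illustrative sketch of the state the hooks build up for the question's parametrization (plain dictionaries standing in for the item attributes; the values are hypothetical, not output from a real run):

# item.callspec.indices maps argname -> parameter index; for the third
# parameter set (input=None) it looks like {"input": 2}
indices = {"input": 2}
param = tuple(indices.values())            # -> (2,)
failed = {(2,): "test_input"}              # recorded after test_input[input2] fails
assert failed.get(param) == "test_input"   # so test_correct[input2] is xfailed
assert failed.get((0,)) is None            # test_correct[input0] still runs normally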
Not 100% tested, but it is in use and working for my needs.
Source: https://stackoverflow.com/questions/39898702/using-mark-incremental-and-metafunc-parametrize-in-a-pytest-test-class