I had a similar problem, and I don't know whether this is still relevant for you, but I may have found a workaround that does what you want.
The idea is to extend the MarkEvaluator class and override its _getglobals method so that the fixture values are added to the globals used by the evaluator:
conftest.py
from _pytest.skipping import MarkEvaluator

class ExtendedMarkEvaluator(MarkEvaluator):
    def _getglobals(self):
        # Merge the current test item's fixture values into the
        # globals dict used to evaluate the skip expression.
        d = super()._getglobals()
        d.update(self.item._request._fixture_values)
        return d
Add a hook that evaluates the marker just before each test runs:
import pytest

def pytest_runtest_call(item):
    evalskipif = ExtendedMarkEvaluator(item, "skipif_call")
    if evalskipif.istrue():
        pytest.skip('[CANNOT RUN] ' + evalskipif.getexplanation())
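If you want pytest to recognize the custom marker (for example under --strict-markers), you can also register it in conftest.py. This is a minimal sketch using the standard pytest_configure hook; the description text is my own, not from the original answer:
def pytest_configure(config):
    # Register the skipif_call marker so pytest does not warn about
    # an unknown mark; the description string is illustrative.
    config.addinivalue_line(
        "markers",
        "skipif_call(expr): skip the test if expr is true at call time",
    )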
Then you can use the skipif_call marker in your test cases:
test_example.py
import pytest

class Machine:
    def __init__(self, state):
        self.state = state

# Module scope so the same Machine instance is shared by all tests,
# letting one test's state change affect the next test's skip condition.
@pytest.fixture(scope="module")
def myfixture(request):
    return Machine("running")

@pytest.mark.skipif_call('myfixture.state != "running"')
def test_my_fixture_running_success(myfixture):
    print(myfixture.state)
    myfixture.state = "stopped"
    assert True

@pytest.mark.skipif_call('myfixture.state != "running"')
def test_my_fixture_running_fail(myfixture):
    print(myfixture.state)
    assert False

@pytest.mark.skipif_call('myfixture.state != "stopped"')
def test_my_fixture_stopped_success(myfixture):
    print(myfixture.state)
    myfixture.state = "running"

@pytest.mark.skipif_call('myfixture.state != "stopped"')
def test_my_fixture_stopped_fail(myfixture):
    print(myfixture.state)
    assert False
Run
pytest -v --tb=line
============================= test session starts =============================
[...]
collected 4 items
test_example.py::test_my_fixture_running_success PASSED
test_example.py::test_my_fixture_running_fail FAILED
test_example.py::test_my_fixture_stopped_success PASSED
test_example.py::test_my_fixture_stopped_fail FAILED
================================== FAILURES ===================================
C:\test_example.py:21: assert False
C:\test_example.py:31: assert False
===================== 2 failed, 2 passed in 0.16 seconds ======================
Problem
Unfortunately, this works only once per expression: MarkEvaluator caches evaluation results keyed by the expression string, so the next time the same expression is evaluated, the cached result is returned instead of re-evaluating against the updated fixture values.
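For context, this is roughly what the caching helper looks like in older pytest versions (a simplified sketch of _pytest.skipping.cached_eval; exact names and details vary between releases):
def cached_eval(config, expr, d):
    # The cache lives on the config object and is keyed only by the
    # expression string; the globals dict d is ignored for the lookup,
    # so updated fixture values never trigger a re-evaluation.
    if not hasattr(config, "_evalcache"):
        config._evalcache = {}
    try:
        return config._evalcache[expr]
    except KeyError:
        import _pytest._code
        exprcode = _pytest._code.compile(expr, mode="eval")
        config._evalcache[expr] = x = eval(exprcode, d)
        return x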
Solution
The expression is evaluated in the _istrue method, and there is no way to configure the evaluator to skip caching. The only way to avoid it is to override _istrue so it does not go through the cached_eval function:
import _pytest._code
from _pytest.skipping import MarkEvaluator

class ExtendedMarkEvaluator(MarkEvaluator):
    def _getglobals(self):
        # Merge the current test item's fixture values into the
        # globals dict used to evaluate the skip expression.
        d = super()._getglobals()
        d.update(self.item._request._fixture_values)
        return d

    def _istrue(self):
        if self.holder:
            self.result = False
            for expr in self.holder.args:
                self.expr = expr
                d = self._getglobals()
                # Compile and eval directly instead of going through
                # cached_eval, so fixture values are reloaded every time.
                exprcode = _pytest._code.compile(expr, mode="eval")
                if eval(exprcode, d):
                    self.result = True
                    self.reason = expr
                    break
            return self.result
        return False
Run
pytest -v --tb=line
============================= test session starts =============================
[...]
collected 4 items
test_example.py::test_my_fixture_running_success PASSED
test_example.py::test_my_fixture_running_fail SKIPPED
test_example.py::test_my_fixture_stopped_success PASSED
test_example.py::test_my_fixture_stopped_fail SKIPPED
===================== 2 passed, 2 skipped in 0.10 seconds =====================
Now the tests are skipped, because the myfixture value has been re-read before each call.
Hope it helps.
Cheers,
Alex