Pytest: how to skip the rest of tests in the class if one has failed?


Question:

I'm creating test cases for web tests using Jenkins, Python, Selenium 2 (WebDriver) and the py.test framework.

So far I'm organizing my tests in the following structure:

each Class is the Test Case and each test_ method is a Test Step.

This setup works GREAT when everything is working fine; however, when one step crashes, the rest of the "test steps" go crazy. I'm able to contain the failure inside the class (test case) with the help of teardown_class(), but I'm looking into how to improve this.

What I need is to somehow skip (or xfail) the rest of the test_ methods within a class if one of them has failed, so that they are not run and marked as FAILED (since that would be a false positive).

Thanks!

UPDATE: I'm not looking for the answer "it's bad practice", since calling it that is very arguable. (Each test class is independent - and that should be enough.)

UPDATE 2: Putting an "if" condition in each test method is not an option - it is a LOT of repeated work. What I'm looking for is (maybe) a way to use hooks on the class methods.

Answer 1:

I like the general "test step" idea. I'd term it "incremental" testing, and it makes the most sense in functional testing scenarios IMHO.

Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions):

# content of conftest.py (or a plugin)

import pytest

def pytest_runtest_makereport(item, call):
    # remember the failed test item on its parent (the class),
    # so that later tests in the same class can be xfailed
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    # xfail this test if a previous test in the same class has failed
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

If you now have a "test_step.py" like this:

import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass

    def test_modification(self):
        assert 0

    def test_deletion(self):
        pass

then running it looks like this (using -rx to report on xfail reasons):

(1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
collected 3 items

test_step.py .Fx

=================================== FAILURES ===================================
______________________ TestUserHandling.test_modification ______________________

self = <test_step.TestUserHandling instance>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:8: AssertionError
=========================== short test summary info ============================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================

I am using "xfail" here because skips are rather meant for wrong environments, missing dependencies, or wrong interpreter versions.

Edit: Note that neither your example nor my example would directly work with distributed testing. For this, the pytest-xdist plugin needs to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode, which usually sends test functions of a class to different slaves.
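
A small note for newer pytest versions (an assumption about your environment, not part of the original answer): recent pytest warns about unknown marks, and errors under --strict-markers, so the custom "incremental" mark should also be registered, for example in the same conftest.py:

# conftest.py - register the custom "incremental" marker so that newer
# pytest versions do not complain about an unknown mark
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "incremental: xfail the remaining tests in a class after one of them fails",
    )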



Answer 2:

It's generally bad practice to do what you are doing. Each test should be as independent as possible from the others, whereas here you completely depend on the results of the other tests.

Anyway, reading the docs, it seems that a feature like the one you want is not implemented (probably because it wasn't considered useful).

A workaround could be to "fail" your tests by calling a custom method which sets some condition on the class, and to mark each test with the "skipif" marker:

import unittest
import pytest

class MyTestCase(unittest.TestCase):
    skip_all = False

    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_A(self):
        ...
        if failed:  # pseudocode: detect that this step failed
            MyTestCase.skip_all = True

    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_B(self):
        ...
        if failed:
            MyTestCase.skip_all = True

Alternatively, you can perform this check at the beginning of each test and call pytest.skip() if needed.

edit: Marking as xfail can be done in the same way, but using the corresponding function calls.

Instead of rewriting the boilerplate code for each test, you could probably write a decorator (this might require that your methods return a "flag" stating whether they failed or not).
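
A minimal sketch of that decorator idea (the names are made up for illustration, and instead of returning a flag the wrapper records the failure on the class when the wrapped test raises):

import functools
import pytest

def skip_if_previous_failed(test_func):
    # Skip the wrapped test if an earlier test in the same class already failed;
    # otherwise run it and record any failure on the class.
    @functools.wraps(test_func)
    def wrapper(self, *args, **kwargs):
        if getattr(type(self), "skip_all", False):
            pytest.skip("a previous test in this class failed")
        try:
            return test_func(self, *args, **kwargs)
        except Exception:
            type(self).skip_all = True
            raise
    return wrapper

class TestUserHandling:
    skip_all = False

    @skip_if_previous_failed
    def test_login(self):
        pass

    @skip_if_previous_failed
    def test_modification(self):
        assert 0  # fails and sets skip_all on the class

    @skip_if_previous_failed
    def test_deletion(self):
        pass  # skipped, because test_modification failed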

Anyway, I'd like to point out that, as you state, if one of these tests fails, then other failing tests in the same test case should be considered false positives... but you can also do this "by hand": just check the output and spot the false positives, even though this is tedious and error-prone.



Answer 3:



Answer 4:

You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test has failed. In your particular case, though, the incremental testing that gbonetti discussed seems more relevant.
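
For illustration, a minimal sketch of how pytest-dependency could express the same chain (this assumes the plugin is installed; the depends strings use its default module-scope naming for class methods):

import pytest

class TestUserHandling:
    @pytest.mark.dependency()
    def test_login(self):
        pass

    @pytest.mark.dependency(depends=["TestUserHandling::test_login"])
    def test_modification(self):
        pass

    @pytest.mark.dependency(depends=["TestUserHandling::test_modification"])
    def test_deletion(self):
        pass  # skipped automatically if test_modification failed or was skipped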



Answer 5:

UPDATE: Please take a look at @hpk42's answer. His answer is less intrusive.

This is what I was actually looking for:

import pytest
from _pytest.runner import runtestprotocol
from _pytest.mark import MarkInfo

def check_call_report(item, nextitem):
    """
    If a test method fails, mark the rest of the test methods as 'skip'.
    Also, if any of the methods is marked as 'pytest.mark.blocker',
    interrupt further testing.
    """
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == "call":
            if report.outcome == "failed":
                for test_method in item.parent._collected[item.parent._collected.index(item):]:
                    test_method._request.applymarker(pytest.mark.skipif("True"))
                    if test_method.keywords.has_key('blocker') and isinstance(test_method.keywords.get('blocker'), MarkInfo):
                        item.session.shouldstop = "blocker issue has failed or was marked for skipping"
            break

def pytest_runtest_protocol(item, nextitem):  # add to the hook
    item.ihook.pytest_runtest_logstart(
        nodeid=item.nodeid, location=item.location,
    )
    check_call_report(item, nextitem)
    return True

Now adding this to conftest.py or as a plugin solves my problem.
It is also improved to STOP testing altogether if a blocker test has failed (meaning that all further tests would be useless).
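
For illustration, this is how the blocker marker consumed by the snippet above might be applied (a sketch; only the marker name comes from the code above):

import pytest

class TestUserHandling:
    @pytest.mark.blocker
    def test_login(self):
        assert 0  # failure of a blocker test makes the whole session stop

    def test_modification(self):
        pass  # would otherwise be marked as skip, because test_login failed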



Answer 6:

Or, quite simply, instead of calling py.test from the command line (or tox or wherever), just call:

py.test --maxfail=1 

See here for more switches: https://pytest.org/latest/usage.html. Note that --maxfail stops the whole test session after that many failures, rather than just the remaining tests in the failing class.


