I am looking for a way to run all of my unit tests in PyTest, even if some of them fail. I know there must be a simple way to do this. I checked the CLI options and looked through this site for similar questions/answers but didn't see anything. Sorry if this has already been answered.
For example, consider the following snippet, which pairs a function with its PyTest test:
def parrot(i):
    return i

def test_parrot():
    assert parrot(0) == 0
    assert parrot(1) == 1
    assert parrot(2) == 1
    assert parrot(2) == 2
By default, the execution stops at the first failure:
$ python -m pytest fail_me.py
=================== test session starts ===================
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /home/npsrt/Documents/repo/codewars, inifile:
collected 1 items

fail_me.py F

=================== FAILURES ===================
___________________ test_parrot ___________________

    def test_parrot():
        assert parrot(0) == 0
        assert parrot(1) == 1
>       assert parrot(2) == 1
E       assert 2 == 1
E        +  where 2 = parrot(2)

fail_me.py:7: AssertionError
=================== 1 failed in 0.05 seconds ===================
What I'd like to do is to have the code continue to execute even after PyTest encounters the first failure.
It ran all of your tests. You only wrote one test, and that test ran! The simplest fix is to split each assertion into its own test function, so one failure can't short-circuit the rest (see the sketch below).
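For instance, a minimal sketch of the question's example with one assertion per test; pytest then reports the failing test and still runs the others:

import pytest

def parrot(i):
    return i

def test_parrot_zero():
    assert parrot(0) == 0

def test_parrot_one():
    assert parrot(1) == 1

def test_parrot_two_wrong():
    # This test fails, but the surrounding tests still run to completion.
    assert parrot(2) == 1

def test_parrot_two():
    assert parrot(2) == 2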
If you want nonfatal assertions, where a test will keep going if an assertion fails (like Google Test's EXPECT macros), try pytest-expect, which provides that functionality. Here's the example their site gives:
def test_func(expect):
    expect('a' == 'b')
    expect(1 != 1)
    a = 1
    b = 2
    expect(a == b, 'a:%s b:%s' % (a, b))
You can see that expectation failures don't stop the test, and all failed expectations get reported:
$ python -m pytest test_expect.py
================ test session starts =================
platform darwin -- Python 2.7.9 -- py-1.4.26 -- pytest-2.7.0
rootdir: /Users/okken/example, inifile:
plugins: expect
collected 1 items

test_expect.py F

====================== FAILURES ======================
_____________________ test_func ______________________
>    expect('a' == 'b')
test_expect.py:2
--------
>    expect(1 != 1)
test_expect.py:3
--------
>    expect(a == b, 'a:%s b:%s' % (a,b))
a:1 b:2
test_expect.py:6
--------
Failed Expectations:3
============== 1 failed in 0.01 seconds ==============
As others already mentioned, you'd ideally write multiple tests and only have one assertion in each (that's not a hard limit, but a good guideline). The @pytest.mark.parametrize decorator makes this easy:
import pytest

def parrot(i):
    return i

@pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
def test_parrot(inp, expected):
    assert parrot(inp) == expected
When running it with -v:
parrot.py::test_parrot[0-0] PASSED
parrot.py::test_parrot[1-1] PASSED
parrot.py::test_parrot[2-1] FAILED
parrot.py::test_parrot[2-2] PASSED

=================================== FAILURES ===================================
_______________________________ test_parrot[2-1] _______________________________

inp = 2, expected = 1

    @pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
    def test_parrot(inp, expected):
>       assert parrot(inp) == expected
E       assert 2 == 1
E        +  where 2 = parrot(2)

parrot.py:8: AssertionError
====================== 1 failed, 3 passed in 0.01 seconds ======================
You should be able to control this with the --maxfail argument. I believe the default is to not stop for failures, so I'd check any py.test config files you might have for a place that's overriding it.
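For example, with the question's fail_me.py (flag names from pytest's own CLI; exact session output will vary by version):

# Default behaviour: run every collected test, report all failures at the end.
$ python -m pytest fail_me.py

# Stop after the first failing test (-x / --exitfirst is shorthand for this):
$ python -m pytest --maxfail=1 fail_me.py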
The pytest plugin pytest-check is a rewrite of pytest-expect (which was recommended here previously but has gone stale). It lets you do "soft" asserts, where a failed check is recorded but the test keeps running. An example from the GitHub repo:
import pytest_check as check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
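Recent versions of pytest-check also ship a context-manager form, so plain assert statements become soft checks. A minimal sketch (verify against the plugin's README for the version you have installed):

from pytest_check import check

def test_example_soft_asserts():
    a = 1
    b = 2
    # Each failing with-block is recorded, but execution continues
    # to the next block instead of aborting the test.
    with check:
        assert a == b
    with check:
        assert b in [2, 4, 6]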
Source: https://stackoverflow.com/questions/36749796/how-to-run-all-pytest-tests-even-if-some-of-them-fail