Getting Python's unittest results in a tearDown() method

野的像风 2020-11-28 22:40

Is it possible to get the results of a test (i.e. whether all assertions have passed) in a tearDown() method? I'm running Selenium scripts, and I'd like to do some reporting inside tearDown(), but I don't know whether this is possible.

13 Answers
  • 2020-11-28 22:47

    Following on from amatellanes' answer, if you're on Python 3.4, you can't use _outcomeForDoCleanups. Here's what I managed to hack together:

    def _test_has_failed(self):
        # self._outcome.errors is a list of (test, exc_info) pairs;
        # exc_info is None for steps that completed without raising.
        for method, error in self._outcome.errors:
            if error:
                return True
        return False
    

    Yucky, but it seems to work.
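
    For example, here is a minimal sketch of how this helper might be used from tearDown() to capture a screenshot on failure. It assumes self.driver is a Selenium WebDriver created in setUp(), and the file naming is purely illustrative:

    def tearDown(self):
        # Illustrative only: grab a screenshot before the driver is torn down
        # whenever the helper above reports a failure.
        if self._test_has_failed():
            self.driver.save_screenshot('failed-{}.png'.format(self.id()))
        self.driver.quit()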

  • 2020-11-28 22:52

    Tested on Python 3.7. This sample shows how to pull details of a failing assertion out of the test outcome, and should give you an idea of how to deal with errors:

    def tearDown(self):
        # self._outcome.errors is a list of (test, exc_info) pairs. This snippet
        # assumes the entry at index 1 belongs to the test method itself and that
        # the raised exception carries 'actual'/'expected' attributes.
        if self._outcome.errors[1][1] and hasattr(self._outcome.errors[1][1][1], 'actual'):
            print(self._testMethodName)
            print(self._outcome.errors[1][1][1].actual)
            print(self._outcome.errors[1][1][1].expected)
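
    If you would rather not rely on a hard-coded index, here is a hedged sketch that simply walks every recorded entry (it still uses the private _outcome attribute, so it remains version-dependent and illustrative only):

    def tearDown(self):
        # Each entry is a (test, exc_info) pair; exc_info is None for steps
        # that completed without raising.
        for test, exc_info in self._outcome.errors:
            if exc_info:
                exc_type, exc_value, tb = exc_info
                print('{}: {!r}'.format(self._testMethodName, exc_value))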
    
  • 2020-11-28 22:54

    If you are using Python 2 you can use the _resultForDoCleanups attribute. It holds a TextTestResult object:

    <unittest.runner.TextTestResult run=1 errors=0 failures=0>

    You can use this object to check the result of your tests:

    def tearDown(self):
        if self._resultForDoCleanups.failures:
            pass  # the current test failed an assertion
        elif self._resultForDoCleanups.errors:
            pass  # the current test raised an unexpected exception
        else:
            pass  # success
    

    If you are using Python 3 (before 3.4) you can use _outcomeForDoCleanups:

    def tearDown(self):
        if not self._outcomeForDoCleanups.success:
            ...
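
    If the same suite has to run under both interpreters, here is a hedged sketch of a helper that checks whichever private attribute exists (both names are unittest internals, so treat it as illustrative rather than guaranteed):

    def _test_passed(self):
        # Python 2 exposes _resultForDoCleanups; Python 3.0-3.3 expose
        # _outcomeForDoCleanups. Neither is a public API.
        result = getattr(self, '_resultForDoCleanups', None)
        if result is not None:
            return not (result.failures or result.errors)
        outcome = getattr(self, '_outcomeForDoCleanups', None)
        if outcome is not None:
            return outcome.success
        return True  # unknown interpreter version; assume success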
    
  • 2020-11-28 22:55

    It depends on what kind of reporting you'd like to produce.

    If you'd like to take some action on failure (such as generating a screenshot), you can achieve that by overriding failureException instead of using tearDown().

    For example:

    @property
    def failureException(self):
        # Returns a subclass of AssertionError that saves a screenshot (via
        # self.driver, a Selenium WebDriver) before the failure propagates.
        # Requires `import os` at module level.
        class MyFailureException(AssertionError):
            def __init__(self_, *args, **kwargs):
                screenshot_dir = 'reports/screenshots'
                if not os.path.exists(screenshot_dir):
                    os.makedirs(screenshot_dir)
                self.driver.save_screenshot('{0}/{1}.png'.format(screenshot_dir, self.id()))
                return super(MyFailureException, self_).__init__(*args, **kwargs)
        MyFailureException.__name__ = AssertionError.__name__
        return MyFailureException
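
    A rough sketch of how this property might be wired into a Selenium test case (the Firefox driver and the deliberately failing test below are illustrative assumptions):

    import unittest
    from selenium import webdriver

    class ScreenshotOnFailureTest(unittest.TestCase):

        # The failureException property shown above would go here
        # (it also needs `import os` at module level).

        def setUp(self):
            self.driver = webdriver.Firefox()  # any WebDriver will do

        def tearDown(self):
            self.driver.quit()

        def test_example(self):
            self.assertTrue(False)  # the failure triggers the screenshot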
    
  • 2020-11-28 22:55

    Inspired by scoffey’s answer, I decided to take mercilessness to the next level, and came up with the following.

    It works both in vanilla unittest and when run via nosetests, and it works on Python 2.7, 3.2, 3.3, and 3.4 (I did not specifically test 3.0, 3.1, or 3.5, as I don’t have these installed at the moment, but if I read the source code correctly, it should work in 3.5 as well):

    #! /usr/bin/env python
    
    from __future__ import unicode_literals
    import logging
    import os
    import sys
    import unittest
    
    
    # Log file to see squawks during testing
    formatter = logging.Formatter(fmt='%(levelname)-8s %(name)s: %(message)s')
    log_file = os.path.splitext(os.path.abspath(__file__))[0] + '.log'
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    logging.root.addHandler(handler)
    logging.root.setLevel(logging.DEBUG)
    log = logging.getLogger(__name__)
    
    
    PY = tuple(sys.version_info)[:3]
    
    
    class SmartTestCase(unittest.TestCase):
    
        """Knows its state (pass/fail/error) by the time its tearDown is called."""
    
        def run(self, result):
            # Store the result on the class so tearDown can behave appropriately
            self.result = result.result if hasattr(result, 'result') else result
            if PY >= (3, 4, 0):
                self._feedErrorsToResultEarly = self._feedErrorsToResult
                self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
            super(SmartTestCase, self).run(result)
    
        @property
        def errored(self):
            if (3, 0, 0) <= PY < (3, 4, 0):
                return bool(self._outcomeForDoCleanups.errors)
            return self.id() in [case.id() for case, _ in self.result.errors]
    
        @property
        def failed(self):
            if (3, 0, 0) <= PY < (3, 4, 0):
                return bool(self._outcomeForDoCleanups.failures)
            return self.id() in [case.id() for case, _ in self.result.failures]
    
        @property
        def passed(self):
            return not (self.errored or self.failed)
    
        def tearDown(self):
            if PY >= (3, 4, 0):
                self._feedErrorsToResultEarly(self.result, self._outcome.errors)
    
    
    class TestClass(SmartTestCase):
    
        def test_1(self):
            self.assertTrue(True)
    
        def test_2(self):
            self.assertFalse(True)
    
        def test_3(self):
            self.assertFalse(False)
    
        def test_4(self):
            self.assertTrue(False)
    
        def test_5(self):
            self.assertHerp('Derp')
    
        def tearDown(self):
            super(TestClass, self).tearDown()
            log.critical('---- RUNNING {} ... -----'.format(self.id()))
            if self.errored:
                log.critical('----- ERRORED -----')
            elif self.failed:
                log.critical('----- FAILED -----')
            else:
                log.critical('----- PASSED -----')
    
    
    if __name__ == '__main__':
        unittest.main()
    

    When run with unittest:

    $ ./test.py -v
    test_1 (__main__.TestClass) ... ok
    test_2 (__main__.TestClass) ... FAIL
    test_3 (__main__.TestClass) ... ok
    test_4 (__main__.TestClass) ... FAIL
    test_5 (__main__.TestClass) ... ERROR
    […]
    
    $ cat ./test.log
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- FAILED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- FAILED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- ERRORED -----
    

    When run with nosetests:

    $ nosetests ./test.py -v
    test_1 (test.TestClass) ... ok
    test_2 (test.TestClass) ... FAIL
    test_3 (test.TestClass) ... ok
    test_4 (test.TestClass) ... FAIL
    test_5 (test.TestClass) ... ERROR
    
    $ cat ./test.log
    CRITICAL test: ---- RUNNING test.TestClass.test_1 ... -----
    CRITICAL test: ----- PASSED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_2 ... -----
    CRITICAL test: ----- FAILED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_3 ... -----
    CRITICAL test: ----- PASSED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_4 ... -----
    CRITICAL test: ----- FAILED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_5 ... -----
    CRITICAL test: ----- ERRORED -----
    

    Background

    I started with this:

    class SmartTestCase(unittest.TestCase):
    
        """Knows its state (pass/fail/error) by the time its tearDown is called."""
    
        def run(self, result):
            # Store the result on the class so tearDown can behave appropriately
            self.result = result.result if hasattr(result, 'result') else result
            super(SmartTestCase, self).run(result)
    
        @property
        def errored(self):
            return self.id() in [case.id() for case, _ in self.result.errors]
    
        @property
        def failed(self):
            return self.id() in [case.id() for case, _ in self.result.failures]
    
        @property
        def passed(self):
            return not (self.errored or self.failed)
    

    However, this only works in Python 2. In Python 3, up to and including 3.3, the control flow appears to have changed a bit: Python 3’s unittest package processes results after calling each test’s tearDown() method… this behavior can be confirmed if we simply add an extra line (or six) to our test class:

    @@ -63,6 +63,12 @@
                 log.critical('----- FAILED -----')
             else:
                 log.critical('----- PASSED -----')
    +        log.warning(
    +            'ERRORS THUS FAR:\n'
    +            + '\n'.join(tc.id() for tc, _ in self.result.errors))
    +        log.warning(
    +            'FAILURES THUS FAR:\n'
    +            + '\n'.join(tc.id() for tc, _ in self.result.failures))
    
    
     if __name__ == '__main__':
    

    Then just re-run the tests:

    $ python3.3 ./test.py -v
    test_1 (__main__.TestClass) ... ok
    test_2 (__main__.TestClass) ... FAIL
    test_3 (__main__.TestClass) ... ok
    test_4 (__main__.TestClass) ... FAIL
    test_5 (__main__.TestClass) ... ERROR
    […]
    

    …and you will see that you get this as a result:

    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    

    Now, compare the above to Python 2’s output:

    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- FAILED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- FAILED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- ERRORED -----
    WARNING  __main__: ERRORS THUS FAR:
    __main__.TestClass.test_5
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    

    Since Python 3 processes errors/failures after the test is torn down, we can’t readily infer the result of a test using result.errors or result.failures in every case. (I think it probably makes more sense architecturally to process a test’s results after tearing it down; however, it does make the perfectly valid use case of following a different end-of-test procedure depending on a test’s pass/fail status a bit harder to meet.)

    Therefore, instead of relying on the overall result object, we can reference _outcomeForDoCleanups, as others have already mentioned. It contains the result object for the currently running test and has the necessary errors and failures attributes, which we can use to infer a test’s status by the time tearDown() has been called:

    @@ -3,6 +3,7 @@
     from __future__ import unicode_literals
     import logging
     import os
    +import sys
     import unittest
    
    
    @@ -16,6 +17,9 @@
     log = logging.getLogger(__name__)
    
    
    +PY = tuple(sys.version_info)[:3]
    +
    +
     class SmartTestCase(unittest.TestCase):
    
         """Knows its state (pass/fail/error) by the time its tearDown is called."""
    @@ -27,10 +31,14 @@
    
         @property
         def errored(self):
    +        if PY >= (3, 0, 0):
    +            return bool(self._outcomeForDoCleanups.errors)
             return self.id() in [case.id() for case, _ in self.result.errors]
    
         @property
         def failed(self):
    +        if PY >= (3, 0, 0):
    +            return bool(self._outcomeForDoCleanups.failures)
             return self.id() in [case.id() for case, _ in self.result.failures]
    
         @property
    

    This adds support for the early versions of Python 3.

    As of Python 3.4, however, this private member variable no longer exists, and instead, a new (albeit also private) method was added: _feedErrorsToResult.

    This means that for versions 3.4 (and later), if the need is great enough, one can (very hackishly) force one’s way in to make it all work again like it did in version 2…

    @@ -27,17 +27,20 @@
         def run(self, result):
             # Store the result on the class so tearDown can behave appropriately
             self.result = result.result if hasattr(result, 'result') else result
    +        if PY >= (3, 4, 0):
    +            self._feedErrorsToResultEarly = self._feedErrorsToResult
    +            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
             super(SmartTestCase, self).run(result)
    
         @property
         def errored(self):
    -        if PY >= (3, 0, 0):
    +        if (3, 0, 0) <= PY < (3, 4, 0):
                 return bool(self._outcomeForDoCleanups.errors)
             return self.id() in [case.id() for case, _ in self.result.errors]
    
         @property
         def failed(self):
    -        if PY >= (3, 0, 0):
    +        if (3, 0, 0) <= PY < (3, 4, 0):
                 return bool(self._outcomeForDoCleanups.failures)
             return self.id() in [case.id() for case, _ in self.result.failures]
    
    @@ -45,6 +48,10 @@
         def passed(self):
             return not (self.errored or self.failed)
    
    +    def tearDown(self):
    +        if PY >= (3, 4, 0):
    +            self._feedErrorsToResultEarly(self.result, self._outcome.errors)
    +
    
     class TestClass(SmartTestCase):
    
    @@ -64,6 +71,7 @@
             self.assertHerp('Derp')
    
         def tearDown(self):
    +        super(TestClass, self).tearDown()
             log.critical('---- RUNNING {} ... -----'.format(self.id()))
             if self.errored:
                 log.critical('----- ERRORED -----')
    

    …provided, of course, that all consumers of this class remember to call super(…, self).tearDown() in their respective tearDown methods…

    Disclaimer: Purely educational, don’t try this at home, etc. etc. etc. I’m not particularly proud of this solution, but it seems to work well enough for the time being, and is the best I could hack up after fiddling for an hour or two on a Saturday afternoon…

  • 2020-11-28 23:01

    Here's an approach for those of us who are uncomfortable relying on unittest internals:

    First, we create a decorator that sets a flag on the TestCase instance recording whether the test failed or passed:

    import unittest
    import functools
    
    def _tag_error(func):
        """Decorates a unittest test function to add failure information to the TestCase."""
    
        @functools.wraps(func)
        def decorator(self, *args, **kwargs):
            """Add failure information to `self` when `func` raises an exception."""
            self.test_failed = False
            try:
                func(self, *args, **kwargs)
            except unittest.SkipTest:
                raise
            except Exception:  # pylint: disable=broad-except
                self.test_failed = True
                raise  # re-raise the error with the original traceback.
    
        return decorator
    

    This decorator is actually pretty simple. It relies on the fact that unittest detects failed tests via exceptions. As far as I'm aware, the only special exception that needs to be handled is unittest.SkipTest (which does not indicate a test failure). All other exceptions indicate test failures, so we mark them as such when they bubble up to us.

    We can now use this decorator directly:

    class MyTest(unittest.TestCase):
        test_failed = False
    
        def tearDown(self):
            super(MyTest, self).tearDown()
            print(self.test_failed)
    
        @_tag_error
        def test_something(self):
            self.fail('Bummer')
    

    It's going to get really annoying writing this decorator all the time. Is there a way we can simplify? Yes there is!* We can write a metaclass to handle applying the decorator for us:

    class _TestFailedMeta(type):
        """Metaclass to decorate test methods to append error information to the TestCase instance."""
        def __new__(cls, name, bases, dct):
            # Use a separate loop variable so `name` (the class name passed to
            # type.__new__) isn't clobbered while iterating.
            for attr_name, prop in dct.items():
                # assume that TestLoader.testMethodPrefix hasn't been messed with -- otherwise, we're hosed.
                if attr_name.startswith('test') and callable(prop):
                    dct[attr_name] = _tag_error(prop)

            return super(_TestFailedMeta, cls).__new__(cls, name, bases, dct)
    

    Now we apply this to our base TestCase subclass and we're all set:

    import six  # For python2.x/3.x compatibility
    
    class BaseTestCase(six.with_metaclass(_TestFailedMeta, unittest.TestCase)):
        """Base class for all our other tests.
    
        We don't really need this, but it demonstrates that the
        metaclass gets applied to all subclasses too.
        """
    
    
    class MyTest(BaseTestCase):
    
        def tearDown(self):
            super(MyTest, self).tearDown()
            print(self.test_failed)
    
        def test_something(self):
            self.fail('Bummer')
    

    There are likely a number of cases that this doesn't handle properly. For example, it does not correctly detect failed subtests or expected failures. I'd be interested in other failure modes of this, so if you find a case that I'm not handling properly, let me know in the comments and I'll look into it.


    *If there wasn't an easier way, I wouldn't have made _tag_error a private function ;-)
