Getting Python's unittest results in a tearDown() method

Asked 2020-11-28 22:40 by 野的像风

Is it possible to get the results of a test (i.e. whether all assertions have passed) in a tearDown() method? I'm running Selenium scripts, and I'd like to do some reporting inside tearDown().

13 Answers
  •  生来不讨喜
     2020-11-28 22:55

    Inspired by scoffey’s answer, I decided to take mercilessness to the next level, and have come up with the following.

    It works in vanilla unittest as well as under nosetests, and in Python versions 2.7, 3.2, 3.3, and 3.4 (I did not specifically test 3.0, 3.1, or 3.5, as I don’t have these installed at the moment, but if I read the source code correctly, it should work in 3.5 as well):

    #! /usr/bin/env python
    
    from __future__ import unicode_literals
    import logging
    import os
    import sys
    import unittest
    
    
    # Log file to see squawks during testing
    formatter = logging.Formatter(fmt='%(levelname)-8s %(name)s: %(message)s')
    log_file = os.path.splitext(os.path.abspath(__file__))[0] + '.log'
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    logging.root.addHandler(handler)
    logging.root.setLevel(logging.DEBUG)
    log = logging.getLogger(__name__)
    
    
    PY = tuple(sys.version_info)[:3]  # (major, minor, micro)
    
    
    class SmartTestCase(unittest.TestCase):
    
        """Knows its state (pass/fail/error) by the time its tearDown is called."""
    
        def run(self, result):
            # nose wraps the real TestResult in a proxy that exposes it as
            # `.result`; unwrap it so the properties below see the actual result
            self.result = result.result if hasattr(result, 'result') else result
            if PY >= (3, 4, 0):
                # Python 3.4+ only feeds errors/failures to the result after
                # tearDown() returns; stash the real method and no-op it so
                # tearDown() below can invoke it early instead
                self._feedErrorsToResultEarly = self._feedErrorsToResult
                self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
            super(SmartTestCase, self).run(result)
    
        @property
        def errored(self):
            if (3, 0, 0) <= PY < (3, 4, 0):
                return bool(self._outcomeForDoCleanups.errors)
            return self.id() in [case.id() for case, _ in self.result.errors]
    
        @property
        def failed(self):
            if (3, 0, 0) <= PY < (3, 4, 0):
                return bool(self._outcomeForDoCleanups.failures)
            return self.id() in [case.id() for case, _ in self.result.failures]
    
        @property
        def passed(self):
            return not (self.errored or self.failed)
    
        def tearDown(self):
            if PY >= (3, 4, 0):
                # feed the buffered errors/failures to the result now, so that
                # the errored/failed properties reflect this test's outcome
                self._feedErrorsToResultEarly(self.result, self._outcome.errors)
    
    
    class TestClass(SmartTestCase):
    
        def test_1(self):
            self.assertTrue(True)
    
        def test_2(self):
            self.assertFalse(True)
    
        def test_3(self):
            self.assertFalse(False)
    
        def test_4(self):
            self.assertTrue(False)
    
        def test_5(self):
            self.assertHerp('Derp')  # no such method: raises AttributeError (ERROR)
    
        def tearDown(self):
            super(TestClass, self).tearDown()
            log.critical('---- RUNNING {} ... -----'.format(self.id()))
            if self.errored:
                log.critical('----- ERRORED -----')
            elif self.failed:
                log.critical('----- FAILED -----')
            else:
                log.critical('----- PASSED -----')
    
    
    if __name__ == '__main__':
        unittest.main()
    

    When run with unittest:

    $ ./test.py -v
    test_1 (__main__.TestClass) ... ok
    test_2 (__main__.TestClass) ... FAIL
    test_3 (__main__.TestClass) ... ok
    test_4 (__main__.TestClass) ... FAIL
    test_5 (__main__.TestClass) ... ERROR
    […]
    
    $ cat ./test.log
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- FAILED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- FAILED -----
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- ERRORED -----
    

    When run with nosetests:

    $ nosetests ./test.py -v
    test_1 (test.TestClass) ... ok
    test_2 (test.TestClass) ... FAIL
    test_3 (test.TestClass) ... ok
    test_4 (test.TestClass) ... FAIL
    test_5 (test.TestClass) ... ERROR
    
    $ cat ./test.log
    CRITICAL test: ---- RUNNING test.TestClass.test_1 ... -----
    CRITICAL test: ----- PASSED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_2 ... -----
    CRITICAL test: ----- FAILED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_3 ... -----
    CRITICAL test: ----- PASSED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_4 ... -----
    CRITICAL test: ----- FAILED -----
    CRITICAL test: ---- RUNNING test.TestClass.test_5 ... -----
    CRITICAL test: ----- ERRORED -----
    

    Background

    I started with this:

    class SmartTestCase(unittest.TestCase):
    
        """Knows its state (pass/fail/error) by the time its tearDown is called."""
    
        def run(self, result):
            # Store the result on the class so tearDown can behave appropriately
            self.result = result.result if hasattr(result, 'result') else result
            super(SmartTestCase, self).run(result)
    
        @property
        def errored(self):
            return self.id() in [case.id() for case, _ in self.result.errors]
    
        @property
        def failed(self):
            return self.id() in [case.id() for case, _ in self.result.failures]
    
        @property
        def passed(self):
            return not (self.errored or self.failed)
    

    However, this only works in Python 2. In Python 3, up to and including 3.3, the control flow appears to have changed a bit: Python 3’s unittest package processes results after calling each test’s tearDown() method. This behavior can be confirmed by simply adding an extra line (or six) to our test class:

    @@ -63,6 +63,12 @@
                 log.critical('----- FAILED -----')
             else:
                 log.critical('----- PASSED -----')
    +        log.warning(
    +            'ERRORS THUS FAR:\n'
    +            + '\n'.join(tc.id() for tc, _ in self.result.errors))
    +        log.warning(
    +            'FAILURES THUS FAR:\n'
    +            + '\n'.join(tc.id() for tc, _ in self.result.failures))
    
    
     if __name__ == '__main__':
    

    Then just re-run the tests:

    $ python3.3 ./test.py -v
    test_1 (__main__.TestClass) ... ok
    test_2 (__main__.TestClass) ... FAIL
    test_3 (__main__.TestClass) ... ok
    test_4 (__main__.TestClass) ... FAIL
    test_5 (__main__.TestClass) ... ERROR
    […]
    

    …and you will see the following in the log:

    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    

    Now, compare the above to Python 2’s output:

    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
    CRITICAL __main__: ----- FAILED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
    CRITICAL __main__: ----- PASSED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
    CRITICAL __main__: ----- FAILED -----
    WARNING  __main__: ERRORS THUS FAR:
    
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
    CRITICAL __main__: ----- ERRORED -----
    WARNING  __main__: ERRORS THUS FAR:
    __main__.TestClass.test_5
    WARNING  __main__: FAILURES THUS FAR:
    __main__.TestClass.test_2
    __main__.TestClass.test_4
    

    Since Python 3 processes errors/failures after the test is torn down, we can’t readily infer the result of a test from result.errors or result.failures in every case. (Architecturally, it probably makes more sense to process a test’s results after tearing it down; however, it does make the perfectly valid use case of running a different end-of-test procedure depending on a test’s pass/fail status a bit harder to meet.)
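
    To make the timing difference concrete, here is a rough sketch of the relevant control flow, paraphrased from my reading of the stdlib source (not a verbatim copy, and details vary slightly between minor versions):

    # Python 2.7 -- TestCase.run() reports outcomes as they happen:
    #
    #     setUp()
    #     testMethod()   # addFailure()/addError() are called on the result HERE
    #     tearDown()     # so the result already knows this test's outcome
    #
    # Python 3.2-3.3 (and presumably 3.0-3.1) -- outcomes are buffered:
    #
    #     setUp()
    #     testMethod()   # outcome is buffered on self._outcomeForDoCleanups
    #     tearDown()     # the result does NOT yet know this test's outcome
    #     # ...buffered errors/failures reach the result only after tearDown()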

    Therefore, instead of relying on the overall result object, we can reference _outcomeForDoCleanups, as others have already mentioned; it holds the outcome for the currently running test and has the necessary errors and failures attributes, from which we can infer the test’s status by the time tearDown() is called:

    @@ -3,6 +3,7 @@
     from __future__ import unicode_literals
     import logging
     import os
    +import sys
     import unittest
    
    
    @@ -16,6 +17,9 @@
     log = logging.getLogger(__name__)
    
    
    +PY = tuple(sys.version_info)[:3]
    +
    +
     class SmartTestCase(unittest.TestCase):
    
         """Knows its state (pass/fail/error) by the time its tearDown is called."""
    @@ -27,10 +31,14 @@
    
         @property
         def errored(self):
    +        if PY >= (3, 0, 0):
    +            return bool(self._outcomeForDoCleanups.errors)
             return self.id() in [case.id() for case, _ in self.result.errors]
    
         @property
         def failed(self):
    +        if PY >= (3, 0, 0):
    +            return bool(self._outcomeForDoCleanups.failures)
             return self.id() in [case.id() for case, _ in self.result.failures]
    
         @property
    

    This adds support for the early versions of Python 3.

    As of Python 3.4, however, this private member variable no longer exists, and instead, a new (albeit also private) method was added: _feedErrorsToResult.

    This means that for versions 3.4 (and later), if the need is great enough, one can (very hackishly) force one’s way in to make it all work again like it did in Python 2…

    @@ -27,17 +27,20 @@
         def run(self, result):
             # Store the result on the class so tearDown can behave appropriately
             self.result = result.result if hasattr(result, 'result') else result
    +        if PY >= (3, 4, 0):
    +            self._feedErrorsToResultEarly = self._feedErrorsToResult
    +            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
             super(SmartTestCase, self).run(result)
    
         @property
         def errored(self):
    -        if PY >= (3, 0, 0):
    +        if (3, 0, 0) <= PY < (3, 4, 0):
                 return bool(self._outcomeForDoCleanups.errors)
             return self.id() in [case.id() for case, _ in self.result.errors]
    
         @property
         def failed(self):
    -        if PY >= (3, 0, 0):
    +        if (3, 0, 0) <= PY < (3, 4, 0):
                 return bool(self._outcomeForDoCleanups.failures)
             return self.id() in [case.id() for case, _ in self.result.failures]
    
    @@ -45,6 +48,10 @@
         def passed(self):
             return not (self.errored or self.failed)
    
    +    def tearDown(self):
    +        if PY >= (3, 4, 0):
    +            self._feedErrorsToResultEarly(self.result, self._outcome.errors)
    +
    
     class TestClass(SmartTestCase):
    
    @@ -64,6 +71,7 @@
             self.assertHerp('Derp')
    
         def tearDown(self):
    +        super(TestClass, self).tearDown()
             log.critical('---- RUNNING {} ... -----'.format(self.id()))
             if self.errored:
                 log.critical('----- ERRORED -----')
    

    …provided, of course, that all consumers of this class remember to call super(…, self).tearDown() in their respective tearDown methods…
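
    For illustration, a minimal consumer might look like this (just a sketch: SeleniumReportTest is a hypothetical name, and the reporting bodies are placeholder log calls rather than real Selenium code):

    class SeleniumReportTest(SmartTestCase):

        def tearDown(self):
            # this super() call is what triggers the early error-feeding on
            # Python 3.4+; without it, self.passed would report incorrectly
            super(SeleniumReportTest, self).tearDown()
            if self.passed:
                log.info('%s passed; nothing to report', self.id())
            else:
                log.error('%s did not pass; emit the failure report here',
                          self.id())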

    Disclaimer: Purely educational, don’t try this at home, etc. etc. etc. I’m not particularly proud of this solution, but it seems to work well enough for the time being, and is the best I could hack up after fiddling for an hour or two on a Saturday afternoon…
