Continuing in Python's unittest when an assertion fails

感情败类 2020-11-27 14:03

EDIT: switched to a better example, and clarified why this is a real problem.

I'd like to write unit tests in Python that continue executing when an assertion fails.

12 Answers
  •  攒了一身酷
    2020-11-27 14:42

    I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.

    I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:

    .FF.
    ======================================================================
    FAIL: test_addition_with_two_negatives (__main__.MathTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_addition.py", line 10, in test_addition_with_two_negatives
        self.assertEqual(-1 + (-1), -1)
    AssertionError: -2 != -1
    
    ======================================================================
    FAIL: test_addition_with_two_positives (__main__.MathTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_addition.py", line 6, in test_addition_with_two_positives
        self.assertEqual(1 + 1, 3)  # Failure!
    AssertionError: 2 != 3
    
    ----------------------------------------------------------------------
    Ran 4 tests in 0.000s
    
    FAILED (failures=2)
    

    If you decide that this approach isn't for you, you may find this answer helpful.

    Update

It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are stored on creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.

    The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).

    Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.

    FF
    ======================================================================
    FAIL: test_creation_defaults (__main__.CarTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_car.py", line 25, in test_creation_defaults
        self.assertEqual(self.car.wheel_count, 4)  # Failure!
    AssertionError: 3 != 4
    
    ======================================================================
    FAIL: test_creation_parameters (__main__.CarTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_car.py", line 20, in test_creation_parameters
        self.assertEqual(self.car.model, self.model)  # Failure!
    AssertionError: 'Ford' != 'Model T'
    
    ----------------------------------------------------------------------
    Ran 2 tests in 0.000s
    
    FAILED (failures=2)
    
