If I'm writing unit tests in Python (using the unittest module), is it possible to output data from a failed test, so I can examine it to help deduce what caused the error?
I think I might have been overthinking this. One way I've come up with that does the job is simply to have a global variable that accumulates the diagnostic data.
Something like this:
import unittest

log1 = dict()

class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            if f.bar(t2) != 2:
                # record the offending objects before failing
                log1["TestBar.runTest"] = (f, t1, t2)
                self.fail("f.bar(t2) != 2")
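To inspect the accumulated data after the run, one rough option (assuming Python 2.7+, where unittest.main accepts exit=False) is to dump the dictionary once the tests finish:
if __name__ == "__main__":
    from pprint import pprint
    # exit=False hands control back after the tests have run,
    # so we can still examine whatever the failing tests recorded
    unittest.main(exit=False)
    pprint(log1)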
Thanks for the replies. They have given me some alternative ideas for how to record information from unit tests in Python.
Very late answer, for anyone who, like me, comes here looking for a simple and quick answer.
In Python 2.7 you can use the additional msg parameter to add information to the error message, like this:
self.assertEqual(f.bar(t2), 2, msg='{0}, {1}'.format(t1, t2))
Official docs here
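For example, a minimal self-contained sketch (the test case and data here are made up, purely to show where msg ends up):
import unittest

class TestWithMsg(unittest.TestCase):
    def test_pairs(self):
        testdata = [(1, 1), (2, 3)]  # hypothetical input pairs; the second one fails
        for t1, t2 in testdata:
            # the msg text shows up in the failure report for this assertion,
            # so the offending inputs are visible in the test output
            self.assertEqual(t1 + t2, 2, msg='t1={0}, t2={1}'.format(t1, t2))

if __name__ == "__main__":
    unittest.main()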
What I do in these cases is to have a log.debug() with some messages in my application. Since the default logging level is WARNING, such messages don't show up during normal execution.
Then, in the unit tests, I change the logging level to DEBUG, so that such messages are shown while running them.
import logging

log = logging.getLogger(__name__)
log.debug("Some messages to be shown just when debugging or unittesting")
In the unittests:
# Set log level
loglevel = logging.DEBUG
logging.basicConfig(level=loglevel)
See a full example:
This is daikiri.py, a basic class that implements a Daikiri with its name and price. There is a method make_discount() that returns the price of that specific daikiri after applying a given discount:
import logging

log = logging.getLogger(__name__)

class Daikiri(object):

    def __init__(self, name, price):
        self.name = name
        self.price = price

    def make_discount(self, percentage):
        log.debug("Deducting discount...")  # I want to see this message
        return self.price * percentage
Then, I create a unit test, test_daikiri.py, that checks its usage:
import unittest
import logging

from .daikiri import Daikiri

class TestDaikiri(unittest.TestCase):

    def setUp(self):
        # Changing log level to DEBUG
        loglevel = logging.DEBUG
        logging.basicConfig(level=loglevel)

        self.mydaikiri = Daikiri("cuban", 25)

    def test_drop_price(self):
        new_price = self.mydaikiri.make_discount(0)
        self.assertEqual(new_price, 0)

if __name__ == "__main__":
    unittest.main()
So when I execute it I get the log.debug messages:
$ python -m test_daikiri
DEBUG:daikiri:Deducting discount...
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
inspect.trace will let you get local variables after an exception has been thrown. You can then wrap the unit tests with a decorator like the following one to save off those local variables for examination during the post mortem.
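As a quick illustration of that mechanism on its own (a minimal sketch, separate from the decorator that follows):
import inspect

def fails():
    x = 42  # a local we would like to recover afterwards
    raise ValueError("boom")

try:
    fails()
except ValueError:
    # inspect.trace() describes the traceback of the exception currently
    # being handled; the last entry is the frame where it was raised
    frame = inspect.trace()[-1][0]
    print(frame.f_locals)  # {'x': 42}
The decorator below builds on exactly this call: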
import random
import unittest
import inspect

def store_result(f):
    """
    Store the results of a test.

    On success, store the return value.
    On failure, store the local variables where the exception was thrown.
    """
    def wrapped(self):
        if 'results' not in self.__dict__:
            self.results = {}
        # If a test throws an exception, store local variables in results:
        try:
            result = f(self)
        except Exception as e:
            self.results[f.__name__] = {'success': False,
                                        'locals': inspect.trace()[-1][0].f_locals}
            raise e
        self.results[f.__name__] = {'success': True, 'result': result}
        return result
    return wrapped

def suite_results(suite):
    """
    Get all the results from a test suite.
    """
    ans = {}
    for test in suite:
        if 'results' in test.__dict__:
            ans.update(test.results)
    return ans

# Example:
class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        self.seq = range(10)

    @store_result
    def test_shuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, range(10))
        # should raise an exception for an immutable sequence
        self.assertRaises(TypeError, random.shuffle, (1, 2, 3))
        return {1: 2}

    @store_result
    def test_choice(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)
        return {7: 2}

    @store_result
    def test_sample(self):
        x = 799
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)
        return {1: 99999}

suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)

from pprint import pprint
pprint(suite_results(suite))
The last line prints the returned value for each test that succeeded, and the local variables (in this case x) for the test that failed:
{'test_choice': {'result': {7: 2}, 'success': True},
'test_sample': {'locals': {'self': <__main__.TestSequenceFunctions testMethod=test_sample>,
'x': 799},
'success': False},
'test_shuffle': {'result': {1: 2}, 'success': True}}
Have fun :-)
Admittedly I haven't tried it, but testfixtures' logging feature looks quite useful...
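I haven't verified the exact API details, but from its documentation a rough sketch with the LogCapture helper would look something like this (the test case name and log messages are made up, echoing the daikiri example above):
import logging
import unittest
from testfixtures import LogCapture

class TestLogging(unittest.TestCase):
    def test_emits_debug_message(self):
        with LogCapture() as captured:
            logging.getLogger("daikiri").debug("Deducting discount...")
        # check() asserts on (logger name, level name, message) tuples
        captured.check(("daikiri", "DEBUG", "Deducting discount..."))

if __name__ == "__main__":
    unittest.main()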
I don't think this is quite what you're looking for (there's no way to display the values of variables from tests that don't fail), but it may help you get closer to outputting the results the way you want.
You can use the TestResult object returned by TestRunner.run() for results analysis and processing; in particular, TestResult.errors and TestResult.failures.
About the TestResult object:
http://docs.python.org/library/unittest.html#id3
And some code to point you in the right direction:
>>> import random
>>> import unittest
>>>
>>> class TestSequenceFunctions(unittest.TestCase):
...     def setUp(self):
...         self.seq = range(5)
...     def testshuffle(self):
...         # make sure the shuffled sequence does not lose any elements
...         random.shuffle(self.seq)
...         self.seq.sort()
...         self.assertEqual(self.seq, range(10))
...     def testchoice(self):
...         element = random.choice(self.seq)
...         error_test = 1/0
...         self.assert_(element in self.seq)
...     def testsample(self):
...         self.assertRaises(ValueError, random.sample, self.seq, 20)
...         for element in random.sample(self.seq, 5):
...             self.assert_(element in self.seq)
...
>>> suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
>>> testResult = unittest.TextTestRunner(verbosity=2).run(suite)
testchoice (__main__.TestSequenceFunctions) ... ERROR
testsample (__main__.TestSequenceFunctions) ... ok
testshuffle (__main__.TestSequenceFunctions) ... FAIL
======================================================================
ERROR: testchoice (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<stdin>", line 11, in testchoice
ZeroDivisionError: integer division or modulo by zero
======================================================================
FAIL: testshuffle (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<stdin>", line 8, in testshuffle
AssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
----------------------------------------------------------------------
Ran 3 tests in 0.031s
FAILED (failures=1, errors=1)
>>>
>>> testResult.errors
[(<__main__.TestSequenceFunctions testMethod=testchoice>, 'Traceback (most recent call last):\n File "<stdin>"
, line 11, in testchoice\nZeroDivisionError: integer division or modulo by zero\n')]
>>>
>>> testResult.failures
[(<__main__.TestSequenceFunctions testMethod=testshuffle>, 'Traceback (most recent call last):\n File "<stdin>
", line 8, in testshuffle\nAssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n')]
>>>
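Each entry in testResult.failures and testResult.errors is a (test, traceback-string) pair, so you can post-process them directly, for example:
>>> for test, tb in testResult.failures + testResult.errors:
...     print(test.id())  # e.g. __main__.TestSequenceFunctions.testshuffle
...     print(tb)         # the formatted traceback text
...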