Let's say I am writing a unit test for a function that returns a floating point number. On my machine I can assert against the result at full precision.
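For instance (div and the digits shown are only illustrative, not the actual function under test):

>>> def div(x, y): return x / float(y)
>>> div(1, 9)
0.1111111111111111
>>> assert div(1, 9) == 0.1111111111111111  # exact comparison against the repr on this machine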
The precision of float in Python depends on the underlying C representation. From the tutorial, Floating Point Arithmetic: Issues and Limitations, 15.1:
Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
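You can check what that means on your machine with sys.float_info, which exposes the parameters of the C double backing Python floats. On a typical IEEE-754 platform this reports 53 mantissa bits, 15 faithfully representable decimal digits, and an epsilon of about 2.22e-16:

import sys

print(sys.float_info.mant_dig)  # bits in the mantissa
print(sys.float_info.dig)       # decimal digits that can be faithfully represented
print(sys.float_info.epsilon)   # gap between 1.0 and the next representable float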
As for testing, a better idea is to use existing functionality, e.g. TestCase.assertAlmostEqual:
assertAlmostEqual(first, second, places=7, msg=None, delta=None)
Test that first and second are approximately (or not approximately) equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero. If delta is supplied instead of places then the difference between first and second must be less or equal to (or greater than) delta.
Example:
import unittest

def div(x, y): return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv(self):
        # passes: the difference rounds to zero at the default 7 decimal places
        self.assertAlmostEqual(div(1, 9), 0.1111111111111111)
        # passes: the difference of about 0.0000111 rounds to zero at 4 places
        self.assertAlmostEqual(div(1, 9), 0.1111, places=4)

unittest.main() # OK
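assertAlmostEqual can also take delta instead of places, which bounds the absolute difference directly. A minimal sketch reusing the same div (the tolerance 1e-4 is just an illustrative value):

import unittest

def div(x, y): return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv_delta(self):
        # passes: |div(1, 9) - 0.1111| is about 0.0000111, which is <= delta
        self.assertAlmostEqual(div(1, 9), 0.1111, delta=1e-4)

unittest.main() # OK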
If you prefer to stick to the assert statement, you can use math.isclose (Python 3.5+):
import unittest, math

def div(x, y): return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv(self):
        # both sides are the same 64-bit float here, so this passes
        # comfortably within the default rel_tol of 1e-09
        assert math.isclose(div(1, 9), 0.1111111111111111)

unittest.main() # OK
The default relative tolerance of math.isclose is 1e-09, "which assures that the two values are the same within about 9 decimal digits". For more information about math.isclose, see PEP 485.
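If the default is too strict or too loose for your case, both tolerances of math.isclose can be passed explicitly. A minimal sketch (the tolerance values are arbitrary examples):

import math

# relative tolerance: the two values must agree to roughly 4 significant digits
assert math.isclose(1 / 9, 0.11111, rel_tol=1e-4)

# an absolute tolerance is needed when comparing against 0.0,
# where the relative test alone fails for any nonzero value
assert math.isclose(1e-10, 0.0, abs_tol=1e-9)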