I seem to be losing a lot of precision with floats.
For example, I need to solve this system:
4.0x - 2.0y + 1.0z = 11.0
1.0x + 5.0y - 3.0z = -6.0
2.0x + 2.0y + 5.0z = 7.0
First, your input can be simplified a lot. You don't need to read and parse a file; you can just declare your objects in Python notation (or, if the data must stay in a file, eval its contents rather than writing a parser).
b = [    # coefficient matrix
    [4.0, -2.0, 1.0],
    [1.0, +5.0, -3.0],
    [2.0, +2.0, +5.0],
]
y = [11.0, -6.0, 7.0]    # right-hand-side vector
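If the data really must live in a separate file, here's a minimal sketch (the file name matrix.txt is hypothetical, assumed to hold the two literals above as one tuple):

# matrix.txt is a hypothetical file containing, e.g.:
# ([[4.0, -2.0, 1.0], [1.0, 5.0, -3.0], [2.0, 2.0, 5.0]], [11.0, -6.0, 7.0])
text = open("matrix.txt").read()
b, y = eval(text)  # eval runs arbitrary Python; only use it on data you trust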
Second, y = -1.2 - 0.20000000000000001x + 0.59999999999999998z isn't unusual. There's no exact representation in binary notation for 0.2 or 0.6. Consequently, the values displayed are decimal approximations of the nearest binary values, not exact representations. This is true of just about every kind of floating-point processor there is.
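You can see this directly at the interpreter (the repr shown below is what Python 2.6 prints; later versions display a shorter form):

>>> 0.2
0.20000000000000001
>>> 0.1 + 0.2 == 0.3
False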
If you need exact results, you can try the fractions module introduced in Python 2.6. There's also an older rational package that might help.
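Here's a minimal sketch of what fractions buys you, building the values from integer numerators and denominators:

>>> from fractions import Fraction
>>> Fraction(1, 5) + Fraction(2, 5)   # exact, unlike 0.2 + 0.4
Fraction(3, 5)
>>> Fraction(1, 5) + Fraction(2, 5) == Fraction(3, 5)
True

Note that constructing a Fraction from a float would only capture the inexact binary value; start from integers to stay exact.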
Yes, raising floating-point numbers to powers magnifies the errors. Consequently, you have to avoid relying on the right-most bits of a floating-point result, since those bits are mostly noise.
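Here's a hedged illustration of that growth, comparing float powers of 0.1 with the exact value computed via Fraction (the exact error figures vary by platform, but the trend doesn't):

from fractions import Fraction

x = 0.1                  # nearest double to 1/10; already slightly off
exact = Fraction(1, 10)
for n in (1, 10, 20, 40):
    approx = x ** n      # each power magnifies the representation error
    rel_err = abs(Fraction.from_float(approx) - exact ** n) / (exact ** n)
    print n, float(rel_err)    # relative error grows with n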
When displaying floating-point numbers, you have to appropriately round them to avoid seeing the noise bits.
>>> a = 0.2
>>> a
0.20000000000000001
>>> "%.4f" % (a,)
'0.2000'