I have a class Vector that represents a point in 3-dimensional space. This vector has a method normalize(self, length = 1) which scales the vector down/up so that its magnitude equals the given length.
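For reference, here is a minimal sketch of what such a class might look like; the attribute names (x, y, z) and the choice to return a new vector rather than mutate in place are assumptions for illustration, not the actual implementation:

```python
import math

class Vector:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def magnitude(self):
        # Euclidean length of the vector
        return math.sqrt(self.x**2 + self.y**2 + self.z**2)

    def normalize(self, length=1):
        # Scale each component so the resulting magnitude equals `length`
        factor = length / self.magnitude()
        return Vector(self.x * factor, self.y * factor, self.z * factor)
```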
By using floating point values, you accept a small amount of possible imprecision. Therefore, your tests should check whether the computed value falls within an acceptable range, such as:
theoreticalValue - epsilon < normalizedValue < theoreticalValue + epsilon
where epsilon is a very small value that you define as the maximum acceptable variation due to floating point imprecision.
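A sketch of such a test, assuming the Vector class above with x, y, z attributes and a normalize() method that returns a new vector:

```python
import math
import unittest

EPSILON = 1e-9  # tolerance you consider acceptable

class TestNormalize(unittest.TestCase):
    def test_normalize_to_unit_length(self):
        v = Vector(3.0, 4.0, 0.0)
        n = v.normalize()
        computed = math.sqrt(n.x**2 + n.y**2 + n.z**2)
        # The range check described above:
        # theoreticalValue - epsilon < computed < theoreticalValue + epsilon
        self.assertTrue(1.0 - EPSILON < computed < 1.0 + EPSILON)
        # Equivalent and more idiomatic: compare within a tolerance
        self.assertAlmostEqual(computed, 1.0, delta=EPSILON)

if __name__ == "__main__":
    unittest.main()
```

The standard library also provides math.isclose(a, b, rel_tol=..., abs_tol=...) for exactly this kind of comparison, and pytest users can write the same check as `computed == pytest.approx(1.0)`.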