How can I write an algorithm that, given a floating-point number, attempts to represent it as accurately as possible using a numerator and a denominator, both restricted to a limited range (e.g., 8-bit integers)?
How worried are you about efficiency? Unless you're calling this conversion function hundreds of times per second or more, it wouldn't be hard to brute-force through every possible denominator (most likely only 255 of them) and keep whichever one gives the closest approximation. Computing the best numerator for a given denominator takes constant time: just round the product of the float and the denominator, then clamp it to the allowed range.
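Here's a minimal sketch of that idea in Python, assuming both values are limited to one byte (numerator 0–255, denominator 1–255) and the input is nonnegative; `best_fraction` is just an illustrative name:

```python
import math

def best_fraction(x: float, max_val: int = 255) -> tuple[int, int]:
    """Return (n, d) with 0 <= n <= max_val and 1 <= d <= max_val
    such that n/d is as close to x as possible (x assumed nonnegative)."""
    best_n, best_d, best_err = 0, 1, math.inf
    for d in range(1, max_val + 1):
        # Best numerator for this denominator: round x*d, then clamp
        # into the allowed range -- constant time per denominator.
        n = min(max_val, max(0, round(x * d)))
        err = abs(x - n / d)
        if err < best_err:
            best_n, best_d, best_err = n, d, err
    return best_n, best_d

print(best_fraction(math.pi))  # -> (245, 78), about 3.14103
```

With the 255 cap, famous approximations like 355/113 are out of reach (the numerator overflows), so the search settles for the best fraction that fits, e.g. 245/78 for pi.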