I know, this question may seem strange. Programmers sometimes think too much. Please read on...
In C I use signed and unsigned integers all the time, but there is no unsigned floating-point type. Why not?
I suspect it is because the IEEE floating-point formats are all signed, and most programming languages simply adopt them.
Wikipedia article on IEEE-754 floating-point numbers
Edit: Also, as noted by others, most hardware has no support for unsigned floats, so ordinary signed floats are the more efficient choice because they map directly onto hardware instructions.