Is it important to declare a variable as unsigned if you know it should never be negative? Does it help prevent anything other than negative numbers being fed into a function that shouldn't receive them?
Mixing signed and unsigned types can be a major headache. The resulting code will often be bloated, wrong, or both(*). In many cases, unless you need to store values between 2,147,483,648 and 4,294,967,295 (i.e., above the signed 32-bit maximum of 2,147,483,647) within a 32-bit variable, or you need to work with values larger than 9,223,372,036,854,775,807 (the signed 64-bit maximum), I'd recommend not bothering with unsigned types at all.
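To make those boundaries concrete, here's a minimal C sketch (the variable names and values are mine, chosen for illustration) of the narrow band where a 32-bit unsigned type actually buys extra range, and of the fact that a larger signed type can always absorb it:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* 3,000,000,000 is above the signed 32-bit maximum (2,147,483,647),
           so in a 32-bit variable it requires an unsigned type. */
        uint32_t fits_unsigned_only = 3000000000u;

        /* Every uint32_t value also fits in a signed 64-bit type, which is
           the escape hatch a language can use to keep arithmetic correct. */
        int64_t widened = (int64_t)fits_unsigned_only;

        printf("%" PRIu32 " == %" PRId64 "\n", fits_unsigned_only, widened);
        return 0;
    }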
(*) What should happen, for example, if a programmer does:

    { Question would be applicable to C, Pascal, Basic, or any other language }
    If SignedVar + UnsignedVar > OtherSignedVar Then DoSomething;
I believe Borland's old Pascal would handle the above scenario by converting both SignedVar and UnsignedVar to a larger signed type (the largest supported type, btw, was signed, so every unsigned type could be converted to a larger signed one). This produced bigger code, but the result was correct. In C, by contrast, the signed operand is converted to unsigned, so if SignedVar is negative the comparison is likely to be numerically wrong even when UnsignedVar holds zero. Many other bad scenarios exist as well.
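To illustrate the C behavior, here's a minimal sketch (the variable values are hypothetical) of the comparison going wrong even though the unsigned variable is zero:

    #include <stdio.h>

    int main(void)
    {
        int      SignedVar      = -1;
        unsigned UnsignedVar    = 0;
        int      OtherSignedVar = 5;

        /* C's usual arithmetic conversions turn SignedVar into an unsigned
           value before the addition, so -1 + 0 evaluates to UINT_MAX
           (4,294,967,295 with 32-bit ints), and OtherSignedVar is then
           converted to unsigned for the comparison as well. */
        if (SignedVar + UnsignedVar > OtherSignedVar)
            printf("Branch taken, even though -1 + 0 is less than 5.\n");

        return 0;
    }

Mathematically the condition is false (-1 + 0 < 5), but the branch fires anyway. Compiling with warnings enabled (e.g. gcc -Wextra, which includes -Wsign-compare) will at least flag the mixed-signedness comparison.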