Why are fundamental types in C and C++ not strictly defined like in Java, where an int is always 4 bytes and a long is 8 bytes, etc.? To my knowledge i
Efficiency is part of the answer--for example, if you use C or C++ on a machine that uses 36-bit registers, you don't want to force every operation to include overhead to mask the results so they look/act like 32 bits.
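To make that concrete, here's a minimal sketch (just standard C++ and &lt;climits&gt;) that prints what the current implementation chose for the fundamental types. The output varies from platform to platform, and the standard only pins down minimum ranges (for example, int must be able to represent at least -32767 to 32767), which is exactly the point:

```c++
#include <climits>
#include <iostream>

int main() {
    // All of these are implementation-defined; the standard guarantees only
    // minimum ranges, not exact sizes, so the numbers differ across platforms.
    std::cout << "bits per byte:  " << CHAR_BIT << '\n'
              << "sizeof(short):  " << sizeof(short) << '\n'
              << "sizeof(int):    " << sizeof(int) << '\n'
              << "sizeof(long):   " << sizeof(long) << '\n';
}
```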
That's really only part of the answer though. The other part is that C and C++ were (and are) intended to be systems programming languages. You're intended to be able to write things like virtual machines and operating systems with them.
That means that if (for example) you're writing code that will interact with the MMU on this 36-bit machine, and you need to set bit 34 of some particular word, the basic intent with C and C++ is that you should be able to do that directly in the language. If the language starts by decreeing that no 36-bit type can exist in the first place, that generally makes it difficult to directly manipulate a 36-bit type in that language.
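As a rough illustration of what "directly in the language" means here, setting a single bit in a memory-mapped control register looks something like the sketch below. The address 0xFFFF0000 and the meaning of bit 34 are entirely hypothetical, and the code assumes the implementation provides uint64_t and permits the cast; on a real system all of that would come from the hardware manual.

```c++
#include <cstdint>

// Hypothetical memory-mapped MMU control register; the address and the
// meaning of bit 34 are made up purely for illustration.
volatile std::uint64_t* const mmu_control =
    reinterpret_cast<volatile std::uint64_t*>(0xFFFF0000u);

void enable_feature() {
    *mmu_control |= (std::uint64_t{1} << 34);  // set bit 34 directly
}
```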
So, one of the basic premises of C and C++ is that if you need to do something, you should be able to do that something inside the language. The corresponding premise in Java was almost exactly the opposite: that what it allowed you to do should be restricted to those operations it could guarantee would be safe and portable to anything.
In particular, keep in mind that when Sun designed Java, one of the big targets they had in mind was applets for web pages. They specifically intended to restrict Java to the point that an end-user could run any Java applet and feel secure in the knowledge that it couldn't possibly harm their machine.
Of course, the situation has changed--the security they aimed for has remained elusive, and applets are essentially dead and gone. Nonetheless, most of the restrictions that were intended to support that model remain in place.
I should probably add that this isn't entirely an either/or type of situation. There are a number of ways of providing some middle ground. Reasonably recent iterations of both the C and C++ standards include fixed-width types such as int32_t. That type guarantees a 32-bit, two's complement representation, just about like a Java int does. So, if you're running on hardware that actually supports a 32-bit two's complement type, int32_t will be present and you can use it.
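For example, a quick sketch of the fixed-width types and their "least"/"fast" relatives from &lt;cstdint&gt;; note that int32_t itself is optional (it exists only on implementations that actually have such a type), while int_least32_t and int_fast32_t are always required to exist:

```c++
#include <cstdint>

int main() {
    // int32_t is only required to exist if the implementation actually has a
    // type with exactly 32 bits; the least/fast variants are always present.
    std::int32_t       exact = 0;  // exactly 32 bits, closest to a Java int
    std::int_least32_t least = 0;  // smallest type with at least 32 bits
    std::int_fast32_t  fast  = 0;  // "fastest" type with at least 32 bits
    return exact + least + fast;   // keep the compiler from warning about unused variables
}
```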
Of course, that's not the only possible way to accomplish roughly the same thing. Ada, for example, takes a somewhat different route. Instead of orienting the "native" types toward the machine and then adding special types with guaranteed properties, it went the other direction: its native types have guaranteed properties, but it also provides an entire facility for defining a new type that corresponds directly to a target machine. For better or worse, however, Ada has never achieved nearly as widespread usage as C, C++, or Java, and its approach to this particular problem doesn't seem to have been adopted in many other languages either.