Internal conversion of integer to char whose size is smaller

Submitted by 一笑奈何 on 2021-02-04 21:07:17

Question


In the following program, I am assigning an integer value to a char variable.

    public static void main(String[] args) {
        char ch = 65;
        System.out.println(ch);
    }

I know that an int occupies 32 bits and a char occupies 16 bits. With that knowledge, I was expecting the compiler to throw an error with some message like "Attempt to convert data of a larger size to a smaller size".

Why does the compiler not complain, and instead internally convert the value and print the output as 'A'? (I understand that 'A' is the ASCII equivalent of 65; my question is only about the sizes of the data types.)


Answer 1:


The compiler does in fact validate the range. Your assignment works because the int literal 65 is within the range of char.

The following won't compile:

    char c = (int) Character.MAX_VALUE + 1;
    char c = 65536;

And this will, just like your assignment:

    char c = 65535;  // within range (Character.MAX_VALUE)

When the value is not a compile-time constant, though, an explicit cast is required:

    private static void charRange(int i) {
        char c = (char) i;
        System.out.println(" --> " + (int) c);
    }

    charRange(65);
    charRange(Character.MAX_VALUE + 20);

And no range check happens at run time, which leaves room for overflow:

--> 65
--> 19
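
If you want a runtime failure instead of silent wrapping, you can add the range check yourself before casting. Below is a minimal sketch (the method name toCharChecked is only illustrative, not part of the original answer), using Character.MIN_VALUE and Character.MAX_VALUE as the bounds:

    // Sketch: reject int values outside the char range instead of letting
    // the cast silently keep only the low 16 bits.
    private static char toCharChecked(int i) {
        if (i < Character.MIN_VALUE || i > Character.MAX_VALUE) {
            throw new IllegalArgumentException("Out of char range: " + i);
        }
        return (char) i;
    }

    toCharChecked(65);                        // returns 'A'
    toCharChecked(Character.MAX_VALUE + 20);  // throws IllegalArgumentException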




Answer 2:


There is an exception to Java's general rule that a narrowing conversion from int to char requires an explicit cast. If the int is a compile-time constant expression (e.g. a literal) AND the value of the expression is within the required range (0 to 65535), then it is legal to assign the int expression to a char.

Intuitively, for a compile-time constant expression, the compiler knows whether the expression's value can be assigned without loss of information.

This is covered by JLS 5.2 ... in the paragraph that starts "In addition, if the expression is a constant expression ..."
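
As a small illustration of this rule (an assumed example, not taken from the JLS text; the class name ConstantNarrowing is just for illustration): a final variable initialized with a constant expression also counts as a constant expression, while an ordinary int variable does not.

    public class ConstantNarrowing {
        public static void main(String[] args) {
            final int constant = 65;   // constant variable: final + constant initializer
            int variable = 65;         // not a constant expression

            char a = constant;         // compiles: the compiler knows 65 fits in char
            char b = 65 + 1;           // compiles: constant expression, 66 is in range
            // char c = variable;      // does not compile: possible lossy conversion from int to char
            char d = (char) variable;  // explicit cast required for non-constant values

            System.out.println(a + " " + b + " " + d);  // prints: A B A
        }
    }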




Answer 3:


Programming languages like Java or C# come with a set of primitive integer types. Each type has a well-defined range of the form [min value, max value], and its values are stored in a fixed-width sequence of bits, from the most significant bit to the least significant bit.

For example, the decimal number 123456 is represented by the following 32-bit sequence:

00000000000000011110001001000000

When you convert a 32-bit value to a 16-bit type, only the 16 least significant bits (the last 16 bits) are kept, so 123456 is truncated to

1110001001000000

Converting this binary number back to decimal gives 57920. As you can see, the 32-bit number cannot fit into a 16-bit sequence, so the original value was silently truncated. This is known as integer overflow, and it also happens when you add or multiply two numbers whose result is outside the range of the integer type.
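
You can reproduce this truncation directly in Java (a minimal sketch; the class name TruncationDemo is just illustrative): casting to char keeps only the low 16 bits, which is the same as masking with 0xFFFF.

    public class TruncationDemo {
        public static void main(String[] args) {
            int value = 123456;
            char truncated = (char) value;                      // keeps the 16 least significant bits
            System.out.println((int) truncated);                // prints 57920
            System.out.println(value & 0xFFFF);                 // prints 57920 as well
            System.out.println(Integer.toBinaryString(value));  // prints 11110001001000000
        }
    }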

As a programmer, you should be aware of overflow and handle it to avoid program failures. You should also read further about signed integer representation.



Source: https://stackoverflow.com/questions/50829613/internal-conversion-of-integer-to-char-whose-size-is-smaller
