How do Java's 16-bit chars support Unicode?

慢半拍i 2020-12-19 03:13

Java's char is 16 bits, yet Unicode has far more characters. How does Java deal with that?

3 Answers
  • 2020-12-19 03:24

    http://en.wikipedia.org/wiki/UTF-16

    In computing, UTF-16 (16-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode, capable of encoding the entire Unicode repertoire. The encoding form maps each character to a sequence of 16-bit words. Characters are known as code points and the 16-bit words are known as code units. For characters in the Basic Multilingual Plane (BMP) the resulting encoding is a single 16-bit word. For characters in the other planes, the encoding will result in a pair of 16-bit words, together called a surrogate pair. All possible code points from U+0000 through U+10FFFF, except for the surrogate code points U+D800–U+DFFF (which are not characters), are uniquely mapped by UTF-16 regardless of the code point's current or future character assignment or use.
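    For reference, a minimal sketch of the surrogate-pair arithmetic described above (the class and variable names are illustrative):

    // Encode a supplementary code point (here U+1D50A) as a UTF-16 surrogate pair
    public class SurrogatePairDemo {
        public static void main(String[] args) {
            int codePoint = 0x1D50A;                        // outside the BMP
            int offset = codePoint - 0x10000;               // 20-bit offset
            char high = (char) (0xD800 + (offset >>> 10));  // high (lead) surrogate
            char low  = (char) (0xDC00 + (offset & 0x3FF)); // low (trail) surrogate
            // Prints: U+1D50A -> D835 DD0A
            System.out.printf("U+%X -> %04X %04X%n", codePoint, (int) high, (int) low);
        }
    }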

  • 2020-12-19 03:31

    Java Strings are UTF-16 (big endian), so a Unicode code point can take one or two chars. Under this encoding, Java represents the code point U+1D50A (MATHEMATICAL FRAKTUR CAPITAL G) using the chars 0xD835 0xDD0A (String literal "\uD835\uDD0A"). The Character class provides methods for converting to/from code points.

    // Convert a supplementary code point to its surrogate-pair char array
    char[] mathFrakturCapG = Character.toChars(0x1D50A); // {0xD835, 0xDD0A}
    
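    As a quick round-trip check (a sketch, reusing the array from above):

    // Wrap the surrogate pair in a String and read the code point back
    String g = new String(mathFrakturCapG);                    // "\uD835\uDD0A"
    System.out.println(g.length());                            // 2 (char units)
    System.out.println(Integer.toHexString(g.codePointAt(0))); // 1d50a
    System.out.println(Character.charCount(0x1D50A));          // 2 chars needed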
  • 2020-12-19 03:38

    Java uses UTF-16 for strings, which basically means that characters are variable width. Most of them fit in a single 16-bit char, but those outside the Basic Multilingual Plane occupy 32 bits (two chars). The scheme is similar in spirit to UTF-8.
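    A quick illustration of that variable width (the string literal is just an example):

    // One BMP character plus one supplementary character (U+1D50A as a surrogate pair)
    String s = "A\uD835\uDD0A";
    System.out.println(s.length());                       // 3 char units
    System.out.println(s.codePointCount(0, s.length()));  // 2 code points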
