What is the "correct" way of comparing a code point to a Java character? For example:
int codepoint = "\n".codePointAt(0); // codePointAt is an instance method on String
char token = '\n';
For characters in the Basic Multilingual Plane, casting the char to an int will give you the code point; this plane covers all the Unicode values that can be encoded in a single 16-bit char. Code points above 0xFFFF cannot be expressed as a single char (in a String they are stored as a surrogate pair of two chars), which is presumably why there is no Character.toCodePoint(char value) taking a single char; the existing Character.toCodePoint(char high, char low) takes a surrogate pair instead.
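
A minimal sketch of the comparison (the string contents and class name are made up for illustration): for a BMP character, the char operand simply widens to int, so == compares the two code points directly.

public class CodePointCompare {
    public static void main(String[] args) {
        String s = "line1\nline2";
        int codepoint = s.codePointAt(5); // code point of the '\n' at index 5
        char token = '\n';

        // The char widens to int, so this compares Unicode values directly.
        System.out.println(codepoint == token); // true

        // Same check with the intent spelled out: a char can only ever match
        // a BMP code point, so test the range before comparing.
        if (Character.isBmpCodePoint(codepoint) && (char) codepoint == token) {
            System.out.println("matched newline");
        }
    }
}

Character.isBmpCodePoint was added in Java 7; on older versions, codepoint <= 0xFFFF is the equivalent test.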