Question
The spec says that at phase 1 of compilation:
Any source file character not in the basic source character set (2.3) is replaced by the universal-character-name that designates that character.
And at phase 4 it says:
Preprocessing directives are executed, macro invocations are expanded
At phase 5, we have:
Each source character set member in a character literal or a string literal, as well as each escape sequence and universal-character-name in a character literal or a non-raw string literal, is converted to the corresponding member of the execution character set
For the # operator, we have:
a \ character is inserted before each " and \ character of a character literal or string literal (including the delimiting " characters).
Hence I conducted the following test:
#define GET_UCN(X) #X
GET_UCN("€")
With an input character set of UTF-8 (matching my file's encoding), I expected the following preprocessing result of the #X operation: "\"\\u20AC\"". GCC, Clang and boost.wave don't transform the € into a UCN and instead yield "\"€\"". I feel like I'm missing something. Can you please explain?
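To make the comparison concrete, here is a small check I would run (a sketch; the printed length assumes a UTF-8 execution character set and depends on the compiler):

#include <cstdio>
#include <cstring>

#define GET_UCN(X) #X   // same macro as above

int main() {
    const char* s = GET_UCN("€");
    // Expected per my reading of phases 1 and 4: s would hold "\u20AC" including
    // the surrounding quotes, so strlen(s) == 8.
    // Observed with GCC, Clang and boost.wave: s holds "€" (quote, three UTF-8
    // bytes, quote), so strlen(s) == 5.
    std::printf("%s  (length %zu)\n", s, std::strlen(s));
}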
Answer 1:
It's simply a bug. §2.1/1 says about Phase 1:
(An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e. using the \uXXXX notation), are handled equivalently.)
This is not a note or footnote. C++0x adds an exception for raw string literals, which might solve your problem at hand if you have one.
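For illustration, a minimal sketch of what that raw-string exception means in practice (C++0x/C++11 syntax; this example is not part of the original answer):

#include <cstdio>
#include <cstring>

int main() {
    // In a raw string literal, escape sequences and universal-character-names
    // are not processed, and the phase 1/2 transformations are reverted.
    const char* raw = R"(\u20AC)";  // the six characters  \ u 2 0 A C
    const char* ucn = "\u20AC";     // the encoding of U+20AC (E2 82 AC under UTF-8)
    std::printf("%zu %zu\n", std::strlen(raw), std::strlen(ucn));  // typically prints: 6 3
}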
This program clearly demonstrates the malfunction:
#include <iostream>

// Stringize the argument and paste L onto it to form a wide string literal.
#define GET_UCN(X) L ## #X

int main() {
    // If the extended character and its UCN spelling were handled equivalently,
    // both of these would print the same string.
    std::wcout << GET_UCN("€") << '\n' << GET_UCN("\u20AC") << '\n';
}
http://ideone.com/lb9jc
Because both strings are wide, the first is necessarily corrupted into several characters (one per byte) if the compiler fails to interpret the input multibyte sequence. In your narrow-string example, a total lack of UTF-8 support could let the compiler simply echo the byte sequence straight through.
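To make the equivalence check explicit, here is a sketch along the same lines (my addition, reusing the macro above); on the reading of §2.1/1 given here, a conforming compiler should make the two strings compare equal:

#include <iostream>
#include <string>

#define GET_UCN(X) L ## #X

int main() {
    std::wstring from_char = GET_UCN("€");       // extended character written directly
    std::wstring from_ucn  = GET_UCN("\u20AC");  // the same character spelled as a UCN
    // If phase 1 treated the two spellings equivalently, the results would be
    // identical; the compilers mentioned in the question produce different strings.
    std::wcout << from_char.size() << L' ' << from_ucn.size() << L' '
               << (from_char == from_ucn ? L"equal" : L"different") << L'\n';
}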
Answer 2:
"and universal-character-name in a character literal or a non-raw string literal, is converted to the corresponding member of the execution character set"
used to be
"or universal-character-name in character literals and string literals is converted to a member of the execution character set"
Maybe you need a future version of g++.
Answer 3:
I'm not sure where you got that citation for translation phase 1—the C99 standard says this about translation phase 1 in §5.1.1.2/1:
Physical source file multibyte characters are mapped, in an implementation-defined manner, to the source character set (introducing new-line characters for end-of-line indicators) if necessary. Trigraph sequences are replaced by corresponding single-character internal representations.
So in this case, the Euro character € (represented as the multibyte sequence E2 82 AC in UTF-8) is mapped into the source character set, which here also happens to be UTF-8, so its representation remains the same. It doesn't get converted into a universal character name because, well, nothing says that it should.
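A small sketch (my addition, not from the answer) that makes this visible by dumping the bytes of the stringized result, assuming the source and execution encodings are both UTF-8:

#include <cstdio>

#define GET_UCN(X) #X

int main() {
    const char* s = GET_UCN("€");  // expands to "\"€\"" on the compilers discussed here
    for (const unsigned char* p = (const unsigned char*)s; *p != '\0'; ++p)
        std::printf("%02X ", *p);  // with UTF-8 throughout: 22 E2 82 AC 22
    std::printf("\n");
}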
Answer 4:
I suspect you'll find that the euro sign does not satisfy the condition "Any source file character not in the basic source character set", so the rest of the text you quote doesn't apply.
Open your test file with your favourite binary editor and check what value is used to represent the euro sign in GET_UCN("€").
Source: https://stackoverflow.com/questions/6463014/why-does-stringizing-an-euro-sign-within-a-string-literal-using-utf8-not-produce