string-literals

Weird constexpr behaviour with std::initializer_list

末鹿安然 submitted on 2021-02-18 23:00:34
Question: I am trying to understand why the compiler is complaining here:

```cpp
// cexpr_test.cpp
#include <initializer_list>

constexpr int test_cexpr(std::initializer_list<const char*> x)
{
    return (int) (*x.begin())[0]; // ensuring the value isn't optimized out
}

int main()
{
    constexpr int r1 = test_cexpr({ "why does this work," });
    constexpr std::initializer_list<const char*> broken { "but this doesn't?" };
    constexpr int r2 = test_cexpr(broken);
    return r1 + r2;
}
```

The message produced when compiled with g++ …
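Both the compiler message and the answer are cut off above. A minimal sketch of the commonly cited fix, assuming the truncated answer takes the usual line: a block-scope constexpr std::initializer_list points into a backing array with automatic storage duration, and pointers to automatic objects are not permitted results of a constant expression; giving the variable static storage duration is one way to make it compile.

```cpp
// Hedged sketch, not the truncated answer itself: marking the variable
// static gives the initializer_list's backing array static storage
// duration, so the stored pointers may appear in a constant expression.
#include <initializer_list>

constexpr int test_cexpr(std::initializer_list<const char*> x)
{
    return (int) (*x.begin())[0];
}

int main()
{
    constexpr int r1 = test_cexpr({ "why does this work," }); // OK, as before

    // OK now: the backing array has static storage duration.
    static constexpr std::initializer_list<const char*> fixed { "and so does this" };
    constexpr int r2 = test_cexpr(fixed);

    return r1 + r2;
}
```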

JavaScript literals for characters higher than U+FFFF

怎甘沉沦 submitted on 2021-02-10 23:35:37
Question: My JavaScript source code is strictly ASCII and I want to represent the anger symbol in a string literal. Is that possible in JavaScript?

Answer 1: JavaScript strings are effectively UTF-16, so you can write the surrogate pair using Unicode escapes: "\uD83D\uDCA2" (this is what's shown on that page for the Java source code, and it also works in JavaScript). As of ES2015 (ES6), you can also write it as \u{1F4A2} rather than working out the surrogate pair (spec). Example: using \uD83D\uDCA2 …
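The worked example at the end of the answer is cut off. A minimal sketch of the two escape forms just described (the anger symbol is U+1F4A2); every character in the source below is ASCII:

```javascript
// Surrogate-pair escape: works in every JavaScript version.
const viaSurrogates = "\uD83D\uDCA2";
// Code-point escape: ES2015 (ES6) and later.
const viaCodePoint = "\u{1F4A2}";

console.log(viaSurrogates === viaCodePoint); // true
console.log(viaCodePoint.length);            // 2: one code point, two UTF-16 code units
```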

Since a string literal is considered an lvalue, why must the binding lvalue reference be const?

醉酒当歌 submitted on 2021-02-07 05:10:51
Question: I know there are already topics similar to this one (such as this). The example given in that topic was this:

```cpp
std::string& rs1 = std::string();
```

Clearly, that std::string() is an rvalue. However, my question is: why is s1 legal while s2 is not?

```cpp
const std::string& s1 = "String literal";
std::string& s2 = "String literal";
```

The standard clearly states that string literals are lvalues (which is understandable since they are technically const char* behind the scenes). When I compile s2 …
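The excerpt stops before the compiler error. Two points resolve the apparent paradox, sketched below: string literals are lvalues of array type (const char[N], not const char*), but the std::string reference never binds to the literal itself; it binds to a temporary std::string converted from it, and a temporary may only bind to a const lvalue reference or an rvalue reference.

```cpp
#include <string>

int main()
{
    // OK: the const lvalue reference binds to the temporary std::string
    // constructed from the literal and extends its lifetime.
    const std::string& s1 = "String literal";

    // Error if uncommented: a non-const lvalue reference cannot bind to
    // the temporary std::string produced by the conversion.
    // std::string& s2 = "String literal";

    // OK: an rvalue reference binds to the temporary as well.
    std::string&& s3 = "String literal";

    return static_cast<int>(s1.size() + s3.size());
}
```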

String Constant Pool and intern

流过昼夜 submitted on 2021-02-04 13:44:08
Question: I have been trying to understand the concept of the String constant pool and intern for the last few days. After reading a lot of articles I understood some portions of it, but I am still confused about a few things:

1. `String a = "abc"` creates an object in the String constant pool, but does the following line of code create the object "xyz" in the String constant pool?

```java
String b = ("xyz").toLowerCase();
```

2. Given:

```java
String c = "qwe";
String d = c.substring(1);
d.intern();
String e = "we";
```

Should the literal "we" be added …
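The question is truncated above, but both numbered cases can be probed with ==, which compares references and therefore reveals whether two names refer to the same pooled object. A hedged, runnable sketch (the class name InternDemo is ours; the printed results describe HotSpot behaviour, and toLowerCase() returning the same instance when nothing changes is an implementation detail, not a spec guarantee):

```java
public class InternDemo {
    public static void main(String[] args) {
        // Case 1: the literal "xyz" is pooled because it appears in the
        // source. On HotSpot, toLowerCase() returns this when no character
        // changes, so b refers to the pooled "xyz".
        String b = ("xyz").toLowerCase();
        System.out.println(b == "xyz"); // true (implementation detail)

        // Case 2: substring() creates a new, unpooled "we".
        String c = "qwe";
        String d = c.substring(1);
        // If "we" is not yet pooled, intern() (Java 7+) records d itself
        // as the pooled entry...
        d.intern();
        // ...so the later literal resolves to that same object.
        String e = "we";
        System.out.println(d == e); // true on HotSpot
    }
}
```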

Does array initialization with a string literal cause two memory storages? [duplicate]

早过忘川 submitted on 2021-01-28 06:13:17
Question: This question already has answers here: String literals: Where do they go? (8 answers). Closed 7 days ago.

```c
int main()
{
    char a[] = "123454321";
}
```

"123454321" is a string literal, and a string literal sets aside memory storage. a is defined by the statement, which also causes memory storage. That is, this simple statement `char a[] = "123454321";` causes two memory storages: one for a and the other for "123454321". Is that right?

Answer 1: Yes, that's right. Note that the object on the right of …
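The answer is cut off mid-sentence; it presumably continues about the object on the right-hand side. A small sketch of the practical consequence of the two storages, assuming the common implementation where string literals are placed in read-only memory:

```c
#include <stdio.h>

int main(void)
{
    /* Two storages: the literal's own array, plus a, a writable copy
       initialized from it. */
    char a[] = "123454321";

    /* p points directly at a literal's storage; no copy is made. */
    const char *p = "123454321";

    a[0] = 'X';      /* fine: a is the program's own writable array    */
    /* p[0] = 'X';      undefined behaviour: literals may be read-only */

    printf("%s\n%s\n", a, p); /* prints X23454321 then 123454321 */
    return 0;
}
```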