
Signedness of enum in C/C99/C++/C++x/GNU C/GNU C99

Submitted on 2019-11-27 08:34:28

Question: Is the enum type signed or unsigned? Does the signedness of enums differ between C, C99, ANSI C, C++, C++0x, GNU C, and GNU C99? Thanks.

Answer: An enum is guaranteed to be represented by an integer type, but the actual type (and its signedness) is implementation-defined. You can force an enumeration to be represented by a signed type by giving one of the enumerators a negative value:

    enum SignedEnum { a = -1 };

In C++0x, the underlying type of an enumeration can be explicitly specified:

    enum ShortEnum : short { a };

(C++0x also adds support for scoped enumerations.) For completeness, I'll add that in The C Programming…

How do I convert hex string into signed integer?

Submitted on 2019-11-27 07:41:25

Question: I'm getting a hex string that needs to be converted to a signed 8-bit integer. Currently I'm converting using Int16/Int32, which will obviously not give me a negative value for an 8-bit integer. If I get the value 255 in hex, how do I convert that to -1 in decimal? I assume I want to use an sbyte, but I'm not sure how to get that value in there properly.

Answer 1: You can use Convert.ToSByte. For example:

    string x = "aa";
    sbyte v = Convert.ToSByte(x, 16); // result: v = 0xAA, or -86

You can also use…

Converting hexadecimal numbers in strings to negative numbers, in Perl

Submitted on 2019-11-27 07:38:21

Question: I have a bunch of numbers represented as hexadecimal strings in log files that are being parsed by a Perl script, and I'm relatively inexperienced with Perl. Some of these numbers are actually signed negative numbers, i.e. 0xFFFE == -2 when represented as a 16-bit signed integer. Can somebody please tell me the canonical way of getting the signed representation of this number from the string FFFE in Perl, or otherwise point me to a tutorial or other resource?

Answer 1: You can use the hex()…
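One plausible completion of the cut-off answer, as a sketch: hex() parses the string as unsigned, and folding values at or above 0x8000 back by 0x10000 performs the two's-complement adjustment for a 16-bit width.

```perl
use strict;
use warnings;

my $hex = "FFFE";
my $n = hex($hex);               # 65534, parsed as unsigned
$n -= 0x10000 if $n >= 0x8000;   # fold into signed 16-bit range: -2
print "$n\n";                    # prints -2

# Equivalent pack/unpack idiom: reinterpret the low 16 bits as signed.
print unpack('s', pack('S', hex($hex))), "\n";   # also -2
```

The pack('S', ...)/unpack('s', ...) pair stores the value as an unsigned 16-bit integer and reads the same bytes back as signed, which generalizes to other widths (L/l for 32 bits, Q/q for 64).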

How to convert signed to unsigned integer in python

Submitted on 2019-11-27 06:47:32

Question: Let's say I have this number i = -6884376. How do I refer to it as an unsigned variable? Something like (unsigned long)i in C.

Answer: Assuming:

1. you have 2's-complement representations in mind; and
2. by (unsigned long) you mean an unsigned 32-bit integer,

then you just need to add 2**32 (or 1 << 32) to the negative value. For example, apply this to -1:

    >>> -1
    -1
    >>> _ + 2**32
    4294967295L
    >>> bin(_)
    '0b11111111111111111111111111111111'

Assumption #1 means you want -1 to be viewed as a solid string of 1 bits, and assumption #2 means you want 32 of them. Nobody but you can say what your hidden…
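The addition the answer describes is equivalent to masking with a width-sized bit mask, which is often more convenient as a reusable helper (a sketch; the function name to_unsigned is mine):

```python
def to_unsigned(i, bits=32):
    """Reinterpret a signed integer as an unsigned value of the given bit width.

    Masking with (1 << bits) - 1 adds 2**bits to negative values, which is
    exactly the two's-complement reinterpretation described in the answer.
    """
    return i & ((1 << bits) - 1)

print(to_unsigned(-1))         # 4294967295
print(to_unsigned(-6884376))   # 4288082920
print(to_unsigned(-1, 16))     # 65535
```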

How to get the signed integer value of a long in python?

Submitted on 2019-11-27 05:43:21

Question: If lv stores a long value, and the machine is 32 bits, the following code:

    iv = int(lv & 0xffffffff)

results in an iv of type long, instead of the machine's int. How can I get the (signed) int value in this case?

Answer 1:

    import ctypes
    number = lv & 0xFFFFFFFF
    signed_number = ctypes.c_long(number).value

Answer 2: You're working in a high-level scripting language; by nature, the native data types of the system you're running on aren't visible. You can't cast to a native signed int with code like this. If…
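One caveat worth flagging about Answer 1: ctypes.c_long is 64 bits wide on most 64-bit Unix platforms, so the masked value would come back unchanged there; c_int32 pins the width explicitly. A sketch (the helper names are mine):

```python
import ctypes

def to_signed32(n):
    """Reinterpret the low 32 bits of n as a signed 32-bit integer."""
    return ctypes.c_int32(n & 0xFFFFFFFF).value

print(to_signed32(0xFFFFFFFF))   # -1
print(to_signed32(0x7FFFFFFF))   # 2147483647

def to_signed32_pure(n):
    """Pure-Python equivalent, with no ctypes dependency."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n
```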

Programmatically determining max value of a signed integer type

Submitted on 2019-11-27 02:22:18

Question: This related question is about determining the max value of a signed type at compile time: "C question: off_t (and other signed integer types) minimum and maximum values". However, I've since realized that determining the max value of a signed type (e.g. time_t or off_t) at runtime seems to be a very difficult task. The closest thing to a solution I can think of is:

    uintmax_t x = (uintmax_t)1 << (CHAR_BIT*sizeof(type) - 2);
    while ((type)x <= 0) x >>= 1;

This avoids any looping as long as type has no…

Can a pointer (address) ever be negative?

Submitted on 2019-11-27 01:31:51

Question: I have a function that I would like to be able to return special values for failure and uninitialized (it returns a pointer on success). Currently it returns NULL for failure and -1 for uninitialized, and this seems to work... but I could be cheating the system. IIRC, addresses are always positive, are they not? (Although since the compiler is allowing me to set an address to -1, this seems strange.) [Update] Another idea I had (in the event that -1 was risky) is to malloc a char at global scope and use that address as a sentinel.

Answer: No, addresses aren't always positive. On x86_64, pointers…

Unsigned hexadecimal constant in C?

Submitted on 2019-11-27 01:25:14

Question: Does C treat hexadecimal constants (e.g. 0x23FE) as signed or unsigned int?

Answer 1: The number itself is always interpreted as a non-negative number. Hexadecimal constants don't have a sign or any inherent way to express a negative number. The type of the constant is the first one of these which can represent its value:

    int
    unsigned int
    long int
    unsigned long int
    long long int
    unsigned long long int

Answer 2: It treats them as int literals (basically, as signed int!). To write an unsigned literal, just add u at the end: 0x23FEu.

Answer 3: According to cppreference, the type of a hexadecimal literal is the first type…

Verifying that C / C++ signed right shift is arithmetic for a particular compiler?

Submitted on 2019-11-27 01:21:32

Question: According to the C/C++ standards (see this link), the >> operator in C and C++ is not necessarily an arithmetic shift for signed numbers. It is up to the compiler implementation whether 0s (logical) or copies of the sign bit (arithmetic) are shifted in as bits are shifted to the right. Will this code function to ASSERT (fail) at compile time for compilers that implement a logical right shift for signed integers?

    #define COMPILE_TIME_ASSERT(EXP) \
        typedef int CompileTimeAssertType##__LINE__[(EXP) ? 1…

Signed vs. unsigned integers for lengths/counts

Submitted on 2019-11-26 22:50:40

Question: For representing a length or count variable, is it better to use signed or unsigned integers? It seems to me that the C++ STL tends to prefer unsigned types (std::size_t, as in std::vector::size()), while the C# BCL tends to prefer signed integers (as in ICollection.Count). Considering that a length or a count is a non-negative integer, my intuition would choose unsigned; but I fail to understand why the .NET designers chose signed integers. What is the best approach? What are the pros and cons of…