numeric-conversion

Is it more efficient to perform a range check by casting to uint instead of checking for negative values?

老子叫甜甜 submitted on 2019-11-27 11:50:27
Question: I stumbled upon this piece of code in .NET's List source code:

```csharp
// Following trick can reduce the range check by one
if ((uint) index >= (uint)_size) {
    ThrowHelper.ThrowArgumentOutOfRangeException();
}
```

Apparently this is more efficient (?) than:

```csharp
if (index < 0 || index >= _size)
```

I am curious about the rationale behind the trick. Is a single branch instruction really more expensive than two conversions to uint? Or is there some other optimization going on that will make this code faster than an
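The idea generalizes beyond C#. Below is a minimal C sketch of the same single-comparison bounds check, assuming the usual conversion of a negative index to a very large unsigned value; the names `get`, `buf`, and `size` are illustrative, not taken from the .NET source:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical bounds-checked accessor (illustrative names, not .NET code). */
static int get(const int *buf, unsigned size, int index)
{
    /* Casting a negative index to unsigned yields a huge value, so a single
       unsigned comparison covers both index < 0 and index >= size. */
    if ((unsigned)index >= size) {
        fprintf(stderr, "index out of range\n");
        exit(EXIT_FAILURE);
    }
    return buf[index];
}

int main(void)
{
    int data[] = { 10, 20, 30 };
    printf("%d\n", get(data, 3, 1));    /* prints 20 */
    printf("%d\n", get(data, 3, -1));   /* negative index caught by the one check */
    return 0;
}
```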

How does testing if a string is 'greater' than another work in Bash?

穿精又带淫゛_ submitted on 2019-11-27 07:34:14
Question: In Bash I can write the following test:

```bash
[[ "f" > "a" ]]
```

which returns 0, i.e. true. How does bash actually perform this string comparison? From my understanding, > does an integer comparison. Does it try to compare the ASCII values of the operands?

Answer 1: From help test:

    STRING1 > STRING2   True if STRING1 sorts after STRING2 lexicographically.

Internally, bash uses either strcoll() or strcmp() for that:

```c
else if ((op[0] == '>' || op[0] == '<') && op[1] == '\0')
  {
    if (shell
```
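The comparison itself can be reproduced outside bash. Here is a small C sketch (not bash's actual source) showing the strcmp()/strcoll() calls the answer refers to; in typical locales both return a positive value for ("f", "a"), which is why the test is true:

```c
#include <stdio.h>
#include <string.h>
#include <locale.h>

int main(void)
{
    /* Use the environment's collation rules, as strcoll() does in bash. */
    setlocale(LC_COLLATE, "");

    /* A positive result means "f" sorts after "a", so [[ "f" > "a" ]] is true. */
    printf("strcmp(\"f\", \"a\")  -> %d\n", strcmp("f", "a"));
    printf("strcoll(\"f\", \"a\") -> %d\n", strcoll("f", "a"));
    return 0;
}
```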

Why is 0 < -0x80000000?

≡放荡痞女 submitted on 2019-11-26 18:08:48
I have below a simple program:

```c
#include <stdio.h>

#define INT32_MIN (-0x80000000)

int main(void)
{
    long long bal = 0;
    if (bal < INT32_MIN) {
        printf("Failed!!!");
    } else {
        printf("Success!!!");
    }
    return 0;
}
```

The condition if (bal < INT32_MIN) is always true. How is it possible? It works fine if I change the macro to:

```c
#define INT32_MIN (-2147483648L)
```

Can anyone point out the issue?

Lundin: This is quite subtle. Every integer literal in your program has a type. Which type it has is regulated by a table in 6.4.4.1:

    Suffix    Decimal Constant    Octal or Hexadecimal Constant
    none      int                 int
              long int            unsigned
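Assuming a typical platform with 32-bit int, here is a short C sketch of what the answer describes: 0x80000000 does not fit in int, so the unsuffixed hex constant takes type unsigned int, and negating an unsigned int leaves the value 2147483648 unchanged, which is why the comparison is true:

```c
#include <stdio.h>

int main(void)
{
    /* Assuming 32-bit int: 0x80000000 does not fit in int, so the constant's
       type is unsigned int; negating it wraps around to the same value. */
    printf("sizeof 0x80000000 = %zu\n", sizeof 0x80000000);   /* 4 on such platforms */
    printf("-0x80000000 = %u\n", -0x80000000);                 /* still 2147483648 */

    printf("0 < -0x80000000   : %d\n", 0 < -0x80000000);       /* 1 (true)  */
    printf("0 < -2147483648L  : %d\n", 0 < -2147483648L);      /* 0 (false): the decimal,
                                                                   L-suffixed constant gets
                                                                   a signed type wide enough
                                                                   to hold -2147483648 */
    return 0;
}
```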
