uint64

Understanding “o^(o-2r)” formula for generating sliding piece moves using unsigned bitboards?

Submitted by 狂风中的少年 on 2021-01-28 20:14:32
Question: What I am trying to do: I am trying to perform some bitwise operations to create a chess engine. To make this engine, I need to be able to generate moves for pieces such as rooks. There is a handy formula for creating a bitboard of the squares available for a rook to move to: bitboardOfOccupiedSquares ^ (bitboardOfOccupiedSquares - 2 * bitboardOfPieceToMove). Consider the following chess board position: I am trying to generate all of the squares that the rook on h1 can move to. So this should be
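The question text is cut off, but the formula itself can be shown in isolation. A minimal sketch, assuming a square mapping where bit 0 is h1 and bit 7 is a1 (the question's board image is not reproduced here, so that mapping is an assumption), with a helper name positiveRayAttacks of our own choosing:

```cpp
#include <cstdint>
#include <cstdio>

// o ^ (o - 2r): o is the bitboard of all occupied squares, r is a bitboard
// with only the sliding piece's square set. The result marks every square
// the piece can slide to in the direction of increasing bit index, up to
// and including the first blocker (whether that blocker can actually be
// captured depends on its colour, which this trick does not know about).
uint64_t positiveRayAttacks(uint64_t occupied, uint64_t slider) {
    return occupied ^ (occupied - 2 * slider);
}

int main() {
    uint64_t rook     = 1ULL << 0;        // rook on h1 (assumed: bit 0 = h1)
    uint64_t blocker  = 1ULL << 7;        // some piece on a1 (bit 7)
    uint64_t occupied = rook | blocker;   // only these two pieces on rank 1

    uint64_t moves = positiveRayAttacks(occupied, rook);
    std::printf("0x%llx\n", (unsigned long long)moves);  // 0xfe: g1 through a1
}
```

In a full engine the occupancy is first masked to the rank, file, or diagonal of the slider; the subtraction trick itself only handles the positive-direction ray.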

Why uint64_t cannot show pow(2, 64) - 1 properly?

Submitted by 旧街凉风 on 2020-01-30 13:04:50
Question: I'm trying to understand why the uint64_t type cannot show pow(2,64)-1 properly. The cplusplus standard is 199711L. I checked the pow() function under the C++98 standard, which is double pow (double base, double exponent); float pow (float base, float exponent); long double pow (long double base, long double exponent); double pow (double base, int exponent); long double pow (long double base, int exponent); So I wrote the following snippet double max1 = (pow(2, 64) - 1); cout << max1 << endl;
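The excerpt is cut short, but the underlying problem is that pow works in double, and a double's 53-bit mantissa cannot hold 2^64 - 1 exactly. A minimal sketch (variable names are ours, not from the question):

```cpp
#include <cstdint>
#include <cstdio>
#include <cmath>

int main() {
    // pow returns a double, and a double has only 53 bits of mantissa, so
    // pow(2, 64) - 1 rounds back up to 2^64 and never equals 18446744073709551615.
    double viaPow = std::pow(2.0, 64.0) - 1.0;
    std::printf("%.0f\n", viaPow);                          // 18446744073709551616

    // Integer ways to obtain the true maximum value of uint64_t:
    std::uint64_t maxA = UINT64_MAX;                        // macro from <cstdint>
    std::uint64_t maxB = ~static_cast<std::uint64_t>(0);    // all 64 bits set
    std::printf("%llu\n%llu\n",
                (unsigned long long)maxA,
                (unsigned long long)maxB);                  // 18446744073709551615, twice
}
```

std::numeric_limits<std::uint64_t>::max() gives the same value and works on older toolchains that predate the UINT64_MAX macro.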

Go: convert uint64 to int64 without loss of information

Submitted by 会有一股神秘感。 on 2019-12-30 09:34:48
Question: The problem with the following code: var x uint64 = 18446744073709551615 var y int64 = int64(x) is that y is -1 . Without loss of information, is the only way to convert between these two number types to use an encoder and decoder? buff bytes.Buffer Encoder(buff).encode(x) Decoder(buff).decode(y) Note, I am not attempting a straight numeric conversion in the typical case. I am more concerned with maintaining the statistical properties of a random number generator. Answer 1: Seeing -1 would be
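The answer is cut off mid-sentence, but presumably it is explaining that -1 is just the signed reinterpretation of the same 64 bits, so the conversion loses nothing. A sketch of that idea, written in C++ rather than Go since it only illustrates the bit-level behaviour (this is not the answer's own code):

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    // Same situation as the Go snippet: all 64 bits set.
    std::uint64_t x = 18446744073709551615ULL;

    // Reinterpret the same 64 bits as a signed value (memcpy copies the
    // raw bit pattern, sidestepping any conversion rules).
    std::int64_t y;
    std::memcpy(&y, &x, sizeof y);
    std::printf("%lld\n", (long long)y);              // -1: same bits, signed view

    // Converting back recovers the original value exactly, so no bits -
    // and hence no statistical information from a PRNG - are lost.
    std::uint64_t back;
    std::memcpy(&back, &y, sizeof back);
    std::printf("%llu\n", (unsigned long long)back);  // 18446744073709551615
}
```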

How do you deal with numbers larger than UInt64 (C#)

Submitted by 瘦欲@ on 2019-12-28 05:59:26
Question: In C#, how can one store and calculate with numbers that significantly exceed UInt64's max value (18,446,744,073,709,551,615)? Answer 1: By using a BigInteger class; there's one in the J# libraries (definitely accessible from C#), another in F# (need to test this one), and there are freestanding implementations such as this one in pure C#. Answer 2: Can you use the .NET 4.0 beta? If so, you can use BigInteger. Otherwise, if you're sticking within 28 digits, you can use decimal - but be aware that

Swift - UInt behaviour

Submitted by 人走茶凉 on 2019-12-24 16:07:27
Question: Using my 64 bit Mac (MacBook Pro 2009), this code in an Xcode playground is acting weird: let var1 = UInt32.max // 4,294,967,295 let var2 = UInt64.max // -1 --> why? var var3: UInt = UInt.max // -1 --> why? var3 = -1 // generates an error. Setting var3 to -1 should generate an error, but in the declaration line it became equal to -1 . Answer 1: Apparently this is just a bug in the Swift playground, and according to @Anton, printing the variables shows the correct value. Source: https://stackoverflow.com

Multiplying __int64's

Submitted by 狂风中的少年 on 2019-12-22 09:12:05
Question: Can someone explain to me (in detail) how to multiply two __int64 objects and check if the result will fit in __int64. Note: Do not use any compiler- or processor-dependent routines. Answer 1: not assuming a and b are positive: __int64 a,b; //... __int64 tmp_result = abs(a) * abs(b) ; if ( ( a && b ) && ( ( tmp_result < abs(a) || tmp_result < abs(b) ) || ( tmp_result / abs(a) != abs(b)) || ( a == TYPE_MIN && b != 1) || ( b == TYPE_MIN && a != 1) ) ) std::cout << "overflow"; __int64 result = a * b;
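The answer's code is truncated by the excerpt and checks after multiplying; below is a sketch of an alternative that stays within the question's constraint (no compiler- or processor-dependent routines) by testing the bounds with division before multiplying, using the standard int64_t in place of the Microsoft-specific __int64. It is our illustration, not the answer's code:

```cpp
#include <cstdint>
#include <cstdio>
#include <limits>

// Overflow-checked 64-bit multiply using only standard arithmetic. Because the
// bounds are tested with division *before* the multiplication, no signed
// overflow (undefined behaviour) can occur. Returns true and writes the
// product when a * b fits in int64_t.
bool checkedMul(std::int64_t a, std::int64_t b, std::int64_t& out) {
    const std::int64_t MAX = std::numeric_limits<std::int64_t>::max();
    const std::int64_t MIN = std::numeric_limits<std::int64_t>::min();

    bool overflow;
    if (a == 0 || b == 0)      overflow = false;
    else if (a > 0 && b > 0)   overflow = a > MAX / b;
    else if (a > 0 && b < 0)   overflow = b < MIN / a;
    else if (a < 0 && b > 0)   overflow = a < MIN / b;
    else /* a < 0 && b < 0 */  overflow = a < MAX / b;

    if (overflow) return false;
    out = a * b;
    return true;
}

int main() {
    std::int64_t r;
    std::printf("%d\n", checkedMul(INT64_MAX, 2, r));     // 0: would overflow
    std::printf("%d\n", checkedMul(1234567, 891011, r));  // 1: fits
}
```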

Why does a uint64_t need more memory than 2 uint32_t's when used in a class? And how to prevent this?

Submitted by ↘锁芯ラ on 2019-12-22 05:33:22
Question: I have made the following code as an example. #include <iostream> struct class1 { uint8_t a; uint8_t b; uint16_t c; uint32_t d; uint32_t e; uint32_t f; uint32_t g; }; struct class2 { uint8_t a; uint8_t b; uint16_t c; uint32_t d; uint32_t e; uint64_t f; }; int main(){ std::cout << sizeof(class1) << std::endl; std::cout << sizeof(class2) << std::endl; std::cout << sizeof(uint64_t) << std::endl; std::cout << sizeof(uint32_t) << std::endl; } This prints 20 24 8 4. So it's fairly simple to see that one
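The excerpt stops mid-sentence, but the printed sizes are explained by alignment padding. A sketch of our own (reusing class2 from the question) that makes the padding visible and notes the usual, compiler-specific way to prevent it:

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdio>

// The compiler inserts padding before the uint64_t so that it sits on an
// 8-byte boundary, and the whole struct is rounded up to a multiple of its
// alignment, which is why class2 is 24 bytes instead of 20.
struct class2 {
    std::uint8_t  a;
    std::uint8_t  b;
    std::uint16_t c;
    std::uint32_t d;
    std::uint32_t e;
    std::uint64_t f;
};

int main() {
    std::printf("alignof(class2) = %zu\n", alignof(class2));     // typically 8
    std::printf("offsetof e      = %zu\n", offsetof(class2, e)); // 8
    std::printf("offsetof f      = %zu\n", offsetof(class2, f)); // 16, not 12: 4 padding bytes
    std::printf("sizeof(class2)  = %zu\n", sizeof(class2));      // 24

    // Compiler-specific packing (e.g. #pragma pack(1), or __attribute__((packed))
    // on GCC/Clang) can shrink this to 20 bytes, at the cost of slower or
    // potentially unsafe unaligned access to f.
    return 0;
}
```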

SQL bigint hash to match C# int64 hash [duplicate]

Submitted by ☆樱花仙子☆ on 2019-12-19 17:13:12
Question: This question already has an answer here: SQL Server varbinary bigint with BitConverter.ToInt64 values are different (1 answer). Closed 6 years ago. I am trying to create a universal hashing algorithm that hashes a string as a 64 bit int. I am able to hash the strings correctly: SQL: select convert ( varchar(64), HASHBYTES ( 'SHA1', 'google.com' ), 2 ) returns BAEA954B95731C68AE6E45BD1E252EB4560CDC45 C#: System.Security.Cryptography.SHA1 c = System.Security.Cryptography.SHA1.Create(); System

Swift converts C's uint64_t differently than it uses its own UInt64 type

Submitted by 依然范特西╮ on 2019-12-18 06:48:08
Question: I am in the process of porting an application from (Objective-)C to Swift but have to use a third-party framework written in C. There are a couple of incompatibilities, like typedefs that are interpreted as Int but have to be passed to the framework's functions as UInts or the like. So to avoid constant casting operations throughout the entire Swift application, I decided to transfer the C header files to Swift, having all the types as I need them to be in one place. I was able to transfer nearly