I wrote a 'simple' (it took me 30 minutes) program that converts a decimal number to binary. I am SURE that there's a much simpler way, so can you show me? Here's the code:
Okay.. I might be a bit new to C++, but I feel the above examples don't quite get the job done right.
Here's my take on this situation.
char* DecimalToBinary(unsigned __int64 value, int bit_precision)
{
    // Round bit_precision up to a whole number of bytes (a multiple of 8 bits).
    int length = (bit_precision + 7) >> 3 << 3;
    // Static buffer reused across calls; sized for the 64-bit worst case plus the
    // null terminator so later calls with a higher precision can't overflow it.
    static char* binary = new char[1 + 64];
    int begin = length - bit_precision;
    unsigned __int64 bit_value = 1;
    // Write '0'/'1' (ASCII 48/49) from the lowest bit up to bit_precision.
    for (int n = length; --n >= begin; )
    {
        binary[n] = 48 | ((value & bit_value) == bit_value);
        bit_value <<= 1;
    }
    // Pad the leading positions we skipped with '0'.
    for (int n = begin; --n >= 0; )
        binary[n] = 48;
    binary[length] = 0;
    return binary;
}
@value = The value we are checking.
@bit_precision = The highest (left-most) bit to check for.
@length = The maximum byte block size, expressed in bits (1 byte = 8 bits). E.g. a bit_precision of 7 rounds up to 1 byte (8 bits) and 9 rounds up to 2 bytes (16 bits).
@binary = just the name I gave the array of chars we fill in. We make this static so it isn't recreated with every call. For simply getting a result and displaying it this works fine, but if, say, you wanted to display multiple results on a UI at once, they would all show up as the last result. That can be fixed by removing static, but make sure you delete[] the result when you are done with it.
@begin = The lowest index that the value is written to. Everything before this index is skipped by the first loop and, as the second loop shows, set to '0'.
@first loop - Each char starts from 48 (ASCII '0') and we OR in a 0 or 1 based on the bool value of (value & bit_value) == bit_value: if it is true the char becomes 49 ('1'), otherwise it stays 48 ('0'). Then we shift bit_value left, which is just multiplying it by 2. (The length rounding and this bit test are both shown in the sketch after this list.)
@second loop - Here we set all the indices we skipped to 48, i.e. '0'.
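To make the two tricks from the notes above easier to see in isolation, here is a small standalone sketch of my own (it uses plain unsigned long long instead of the MSVC-specific unsigned __int64, but the idea is the same):

#include <iostream>

int main()
{
    // Round a precision of 9 bits up to a whole number of bytes: 16 bits.
    int bit_precision = 9;
    int length = (bit_precision + 7) >> 3 << 3;
    std::cout << "length = " << length << '\n';      // prints 16

    // 48 is ASCII '0'; OR-ing in the bool (0 or 1) gives '0' or '1'.
    unsigned long long value = 5, bit_value = 4;     // is bit 2 of 0b101 set?
    char digit = 48 | ((value & bit_value) == bit_value);
    std::cout << "digit = " << digit << '\n';        // prints 1
    return 0;
}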
SOME EXAMPLE OUTPUTS!!!
#include <iostream>

int main()
{
    int val = -1; // every bit set once converted to unsigned __int64
    std::cout << DecimalToBinary(val, 1) << '\n';
    std::cout << DecimalToBinary(val, 3) << '\n';
    std::cout << DecimalToBinary(val, 7) << '\n';
    std::cout << DecimalToBinary(val, 33) << '\n';
    std::cout << DecimalToBinary(val, 64) << '\n';
    std::cout << "\nPress any key to continue. . .";
    std::cin.ignore();
    return 0;
}
00000001 //Value = 2^1 - 1
00000111 //Value = 2^3 - 1.
01111111 //Value = 2^7 - 1.
0000000111111111111111111111111111111111 //Value = 2^33 - 1.
1111111111111111111111111111111111111111111111111111111111111111 //Value = 2^64 - 1.
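Side note (my own addition, not part of the code above): the reason passing -1 shows every bit set is that converting -1 to an unsigned 64-bit type wraps around to 2^64 - 1:

#include <iostream>

int main()
{
    int val = -1;
    unsigned long long u = val;              // wraps around to 2^64 - 1
    std::cout << u << '\n';                  // 18446744073709551615
    std::cout << (u == ~0ULL) << '\n';       // 1: every bit is set
    return 0;
}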
SPEED TESTS
Original Question's Answer: "Method: toBinary(int);"
Executions: 10,000, Total Time (ms): 4701.15, Average Time (ns): 470,114

My Version: "Method: DecimalToBinary(int, int);"
//Using 64-bit precision.
Executions: 10,000,000, Total Time (ms): 3386, Average Time (ns): 338
//Using 1-bit precision.
Executions: 10,000,000, Total Time (ms): 634, Average Time (ns): 63
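The exact timing harness isn't shown in this post; numbers like the ones above could be gathered with something along these lines using std::chrono. This is only a sketch of the idea (the loop count and the call being timed are assumptions), and it expects the DecimalToBinary definition from earlier to be pasted into the same file:

#include <chrono>
#include <iostream>

// Paste the DecimalToBinary definition from above here.

int main()
{
    const long long executions = 10000000;
    volatile char sink = 0;                  // keep the call from being optimized away

    auto start = std::chrono::steady_clock::now();
    for (long long i = 0; i < executions; ++i)
        sink = DecimalToBinary(i, 64)[0];    // 64-bit precision, as in the test above
    auto stop = std::chrono::steady_clock::now();

    auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::cout << "Total Time (ms): " << total_ns / 1000000.0 << '\n';
    std::cout << "Average Time (ns): " << total_ns / executions << '\n';
    return 0;
}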