I'm currently working on a simulation of the MIPS processor in C++ for a comp architecture class and having some problems converting from decimal numbers to binary (signed numbers).
And why can't you just cast the int to a uint? The cast just reinterprets the same two's-complement bit pattern, so generating the binary string is easy because you don't have to worry about the sign bit. The same goes for converting a binary string to an int: build it as a uint, then cast it to int:
#include <cstdint>
#include <iostream>
#include <string>

std::string DecimalToBinaryString(int a)
{
    // Reinterpret the signed value as an unsigned one; the
    // two's-complement bit pattern is unchanged.
    std::uint32_t b = static_cast<std::uint32_t>(a);
    std::string binary = "";
    // Test each bit from the most significant (bit 31) down to bit 0.
    std::uint32_t mask = 0x80000000u;
    while (mask > 0)
    {
        binary += ((b & mask) == 0) ? '0' : '1';
        mask >>= 1;
    }
    std::cout << binary << std::endl;
    return binary;
}
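For instance, a quick check (hypothetical driver code, not part of the original answer):

int main()
{
    DecimalToBinaryString(-1);  // prints 32 '1's: -1 is all ones in two's complement
    DecimalToBinaryString(5);   // prints 29 '0's followed by "101"
    return 0;
}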
And, of course, you can apply the usual optimizations, like pre-allocating the string buffer, etc.
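For example (a minimal sketch; reserve() is the standard std::string call for this):

std::string binary;
binary.reserve(32);  // one up-front allocation instead of regrowing as 32 chars are appended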
Going the other way:
int BinaryStringToDecimal(const std::string& a)
{
    std::uint32_t b = 0;
    // Walk the string from its first character (the most significant
    // bit) to its last (the least significant bit).
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        b <<= 1;
        if (a.at(i) == '1')
            b |= 1;
    }
    // Reinterpret the bit pattern as a signed int again.
    int num = static_cast<int>(b);
    return num;
}
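As a quick round-trip sanity check (hypothetical usage, assuming the two functions above):

int num = BinaryStringToDecimal(DecimalToBinaryString(-42));
// num is -42 again: both casts preserve the two's-complement bit pattern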