I need to check the value of the least significant bit (LSB) and most significant bit (MSB) of an integer in C/C++. How would I do this?
You can do something like this:
#include <iostream>

int main(int argc, char **argv)
{
    int a = 3;
    std::cout << (a & 1) << std::endl;
    return 0;
}
This way you AND your variable with 1 to mask out everything but the LSB, because

3: 011
1: 001

in 3-bit representation. Since the truth table of AND is:

AND
-----
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1

you will be able to tell whether the LSB is 1 or not.
Edit: finding the MSB.

First of all, read the Endianness article to agree on what MSB means. In the following lines we assume big-endian notation.

To find the MSB, the snippet below applies a right shift repeatedly until the MSB reaches the LSB position, where it can be ANDed with 1.
Consider the following code:
#include <iostream>
#include <climits>

int main(int argc, char **argv)
{
    unsigned int a = 128; // we want to find the MSB of this 32-bit unsigned int
    int MSB = 0;          // this variable will hold the MSB we're looking for

    // sizeof(unsigned int) = 4 (in bytes)
    // 1 byte = 8 bits
    // So 4 bytes are 4 * 8 = 32 bits.
    // We have to perform a right shift 32 times to have the
    // MSB in the LSB position.
    for (int i = sizeof(unsigned int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this contains the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }

    // this prints out '0', because the 32-bit representation of
    // unsigned int 128 is:
    // 00000000000000000000000010000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
If you print MSB outside of the cycle you will get 0.

If you change the value of a:

unsigned int a = UINT_MAX; // found in <climits>

then MSB will be 1, because its 32-bit representation is:

UINT_MAX: 11111111111111111111111111111111
However, if you do the same thing with a signed integer, things will be different.
#include <iostream>
#include <climits>

int main(int argc, char **argv)
{
    int a = -128; // we want to find the MSB of this 32-bit signed int
    int MSB = 0;  // this variable will hold the MSB we're looking for

    // sizeof(int) = 4 (in bytes)
    // 1 byte = 8 bits
    // So 4 bytes are 4 * 8 = 32 bits.
    // We have to perform a right shift 32 times to have the
    // MSB in the LSB position.
    for (int i = sizeof(int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this contains the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }

    // this prints out '1', because the 32-bit two's-complement
    // representation of int -128 is:
    // 11111111111111111111111110000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
As I said in the comment below, in two's-complement representation the MSB of a positive integer is always 0, while the MSB of a negative integer is always 1.

You can check the 32-bit representation of INT_MAX:

INT_MAX: 01111111111111111111111111111111
Now, why does the cycle use sizeof()?

If you simply write the cycle as I did in the comment (sorry for the missing = in the comment):

for (; a != 0; a >>= 1)
    MSB = a & 1;

you will always get 1 for any nonzero positive value, because the loop stops at the highest set bit: since the exit condition is a != 0, the 'zero-pad bits' above the highest 1 are never examined. For example, for 32-bit integers we have:

int 7 : 00000000000000000000000000000111
                                     ^ this will be your fake MSB
                                       without considering the full size
                                       of the variable.

int 16: 00000000000000000000000000010000
                                   ^ fake MSB