How can I convert an integer to its bit representation? I want to take an integer and return a vector that contains the 1's and 0's of the integer's bit representation.
It's not too hard to solve with a one-liner, but there is actually a standard-library solution.
#include <algorithm>
#include <bitset>
#include <climits>
#include <functional>
#include <string>
#include <vector>

std::vector< int > get_bits( unsigned long x ) {
    // Render the bits as a string of '0'/'1' characters...
    std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
        .to_string< char, std::char_traits<char>, std::allocator<char> >() );
    // ...then subtract '0' from each character to get integer 0s and 1s.
    std::transform( chars.begin(), chars.end(), chars.begin(),
        std::bind2nd( std::minus<char>(), '0' ) );
    return std::vector< int >( chars.begin(), chars.end() );
}
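A quick usage sketch, assuming the get_bits above is in scope (the bits come back most-significant first, zero-padded to the full width of long):

#include <cstddef>
#include <iostream>

int main() {
    std::vector< int > bits = get_bits( 10 );  // binary 1010
    for ( std::size_t i = 0; i < bits.size(); ++i )
        std::cout << bits[ i ];
    std::cout << '\n';  // prints 0...01010, one digit per bit of long
}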
C++0x even makes it easier!
#include <bitset>
#include <climits>
#include <string>
#include <vector>

std::vector< int > get_bits( unsigned long x ) {
    // C++0x to_string() takes the characters to use for zero and one,
    // so passing char(0) and char(1) yields raw 0/1 bytes directly.
    std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
        .to_string( char(0), char(1) ) );
    return std::vector< int >( chars.begin(), chars.end() );
}
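For comparison, a hand-rolled version along the lines of the one-liner mentioned at the top is just a shift-and-mask loop. A sketch (get_bits_manual is an illustrative name, not part of the answer above); note it yields the least-significant bit first and stops at the highest set bit rather than padding to the width of long:

#include <vector>

std::vector< int > get_bits_manual( unsigned long x ) {
    std::vector< int > bits;
    do {
        bits.push_back( static_cast< int >( x & 1 ) );  // extract the low bit
        x >>= 1;                                        // shift the next bit down
    } while ( x != 0 );
    return bits;
}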
This is one of the more bizarre corners of the library. Perhaps what they were really driving at was serialization.
std::cout << std::bitset< 8 >( x ) << std::endl; // print 8 low-order bits of x
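The stream operators work in both directions, so a bitset can round-trip through a stream; a minimal sketch using std::stringstream:

#include <bitset>
#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    ss << std::bitset< 8 >( 42 );        // serialize: writes "00101010"
    std::bitset< 8 > b;
    ss >> b;                             // deserialize back into a bitset
    std::cout << b.to_ulong() << '\n';   // prints 42
}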