How can a 32bpp image (ARGB) be converted to a 16bpp image (ARGB) using Java's libraries? And out of curiosity, what does this conversion do at the pixel level? If I have
Converting a 32-bit AARRGGBB value to a 16-bit ARGB (4444) value would look something like this:
int argb = ((aarrggbb & 0x000000F0) >>  4)   // top 4 bits of blue  -> bits 0-3
         | ((aarrggbb & 0x0000F000) >>  8)   // top 4 bits of green -> bits 4-7
         | ((aarrggbb & 0x00F00000) >> 12)   // top 4 bits of red   -> bits 8-11
         | ((aarrggbb & 0xF0000000) >>> 16); // top 4 bits of alpha -> bits 12-15
It packs everything into the lower 16 bits and leaves the upper 16 bits as 0. Note the unsigned shift >>> on the alpha nibble: a signed >> would sign-extend into those upper bits whenever the alpha is 0x80 or more.
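For example, the pixel 0x80FF40C0 (alpha 0x80, red 0xFF, green 0x40, blue 0xC0) packs down to 0x8F4C.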
For each channel you lose the lower 4 bits of colour information, the upper bits being the more important ones. Each colour is effectively quantized (truncated) to a 4-bit value, resulting in a visually unpleasant colour banding effect across the image.
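For instance, every 8-bit red value from 0xA0 through 0xAF collapses to the same 4-bit value 0xA, so a smooth 256-step gradient ends up as 16 flat bands.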
Incidentally, 16-bit colour does not normally include an alpha component. It usually (though not always) breaks down as 5 bits for red, 6 bits for green (since our eyes are most sensitive to green) and 5 bits for blue, the common 565 format.
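A rough sketch of that 565 packing (assuming aarrggbb is a fully opaque 32-bit source pixel; the variable names here are just for illustration):

int r = (aarrggbb >> 16) & 0xFF;   // 8-bit red
int g = (aarrggbb >>  8) & 0xFF;   // 8-bit green
int b =  aarrggbb        & 0xFF;   // 8-bit blue
int rgb565 = ((r >> 3) << 11)      // top 5 bits of red   -> bits 11-15
           | ((g >> 2) <<  5)      // top 6 bits of green -> bits 5-10
           |  (b >> 3);            // top 5 bits of blue  -> bits 0-4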
A 565 conversion loses only 3 bits of red, 2 bits of green and 3 bits of blue instead of 4 per channel, and it assumes the source pixel contains no alpha (the alpha byte is simply dropped).
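On the original question of using Java's libraries: as far as I know there is no predefined 4444 ARGB type among BufferedImage's standard TYPE_* constants, but the 565 layout above is built in as BufferedImage.TYPE_USHORT_565_RGB, and Java2D will do the per-pixel conversion when you draw one image into another. A minimal sketch, assuming src is your existing 32bpp BufferedImage:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// src is the existing 32bpp image, e.g. one created as BufferedImage.TYPE_INT_ARGB
BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
        BufferedImage.TYPE_USHORT_565_RGB);
Graphics2D g = dst.createGraphics();
g.drawImage(src, 0, 0, null);   // Java2D converts each pixel to 5-6-5 as it draws
g.dispose();

Any transparency in the source gets composited against the (initially black) destination rather than preserved, which is consistent with 565 having no alpha channel.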