What's the best resampling algorithm I can use to reduce an image to half its original size? Speed is of primary importance, but it shouldn't degrade quality too badly. I'm basically trying to generate an image pyramid.
I was originally planning to skip pixels. Is this the best way to go? From what I've read, the image produced by pixel skipping is too sharp. Could someone who has tried this comment? My images contain map data, sort of like this.
Skipping pixels will result in aliasing, where high frequency changes (such as alternating light/dark bands) will convert to low frequencies (such as constant light or dark).
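A toy illustration of this effect (my own example, not from the answer): a 1-D signal of alternating light/dark bands aliases to a constant when every other pixel is skipped, while pair averaging keeps the true mean brightness.

```python
# High-frequency signal: alternating white/black bands.
signal = [255, 0] * 4

skipped = signal[::2]  # keep every other pixel
averaged = [(signal[i] + signal[i + 1]) // 2
            for i in range(0, len(signal), 2)]

print(skipped)   # [255, 255, 255, 255] -- the banding aliases to constant white
print(averaged)  # [127, 127, 127, 127] -- mid-grey, the true average brightness
```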
The quickest way to downsize to half without aliasing is to average 2x2 pixels into a single pixel. Better results can be had with more sophisticated reduction kernels, but they will come at the expense of speed.
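A minimal sketch of 2x2 averaging using NumPy slicing (the function name `halve` and the even-dimension cropping are my own choices, not from the answer):

```python
import numpy as np

def halve(image):
    """Downscale a (H, W) or (H, W, C) array by averaging each 2x2 block.

    Odd trailing rows/columns are cropped off. Pixels are widened to
    uint16 before summing so four 8-bit values cannot overflow.
    """
    h = image.shape[0] // 2 * 2
    w = image.shape[1] // 2 * 2
    img = image[:h, :w].astype(np.uint16)
    return ((img[0::2, 0::2] + img[1::2, 0::2] +
             img[0::2, 1::2] + img[1::2, 1::2]) // 4).astype(image.dtype)
```

For example, a 2x2 checkerboard of 0 and 255 averages to a single pixel of 127.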
Edit: Here are some examples of the techniques discussed so far.
Skipping every other pixel - you can see that the results aren't very good by looking at the legend on the left side. It's almost unreadable:
Averaging every 2x2 grid - The text is now sharp and readable:
Gaussian blur, as suggested by R. - a little blurrier, but more readable up to a point. The amount of blur can be adjusted to give different results:
R. is also correct about the Gamma curve affecting the results, but this should only be visible in the most demanding applications. My examples were done without gamma correction.
For downscaling, area-averaging (see Mark's answer) is close to the best you'll get.
The main other contender is gaussian, with a slightly larger radius. This will increase blurring a little bit, which could be seen as a disadvantage, but would make the blurring more uniform rather than dependent on the alignment of pixels mod 2.
In case it's not immediately clear what I mean, consider the pixel patterns 0,0,2,2,0,0 and 0,0,0,2,2,0. With area-averaging, they'd downscale to 0,2,0 and 0,1,1, respectively - that is, one will be sharp and bright while the other will be blurred and dim. Using a longer filter, both will be blurred, but they'll appear more similar, which presumably matters to human observers.
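The arithmetic behind those two patterns can be checked with a one-line pair-averaging helper (my own sketch of the 1-D case):

```python
def area_average_pairs(row):
    # Average adjacent pairs of pixels (1-D analogue of 2x2 area averaging).
    return [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

print(area_average_pairs([0, 0, 2, 2, 0, 0]))  # [0, 2, 0] -- sharp and bright
print(area_average_pairs([0, 0, 0, 2, 2, 0]))  # [0, 1, 1] -- blurred and dim
```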
Another issue to consider is gamma. Unless gamma is linear, two pixels of intensity k will have much less total intensity than a single pixel of intensity 2*k. If your filter performs sufficient blurring, it might not matter so much, but with the plain area-average filter it can be a major issue. The only work-around I know is to apply and reverse the gamma curve before and after scaling...
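A sketch of that work-around (my own example, using a plain power-law gamma of 2.2 as an approximation; real sRGB uses a piecewise curve):

```python
def average_gamma_correct(p1, p2, gamma=2.2):
    """Average two 8-bit intensities in linear light.

    Decode through the gamma curve, average, then re-encode.
    """
    linear = ((p1 / 255) ** gamma + (p2 / 255) ** gamma) / 2
    return round(255 * linear ** (1 / gamma))

print(average_gamma_correct(0, 255))  # 186 -- a naive average would give 127
```

The gap between 186 and 127 is exactly the intensity loss described above: averaging gamma-encoded values darkens high-contrast regions.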
If speed is an issue, as mentioned, I recommend taking a 2x2 block and calculating the average as the resulting pixel. The quality is not the best achievable, but it is close. You can provoke this algorithm into showing its weaknesses, but on most images you won't see a difference that would justify the many-times-higher computation time of a more sophisticated kernel. You also don't have any memory overhead. If colour resolution can be lowered to 6 bits per channel, here is a pretty fast way that avoids decomposing the ARGB channels (here assuming 32-bit ARGB):
destPixel[x,y] = ((sourcePixel[2*x ,2*y ]>>2)&0x3f3f3f3f) +
((sourcePixel[2*x+1,2*y ]>>2)&0x3f3f3f3f) +
((sourcePixel[2*x ,2*y+1]>>2)&0x3f3f3f3f) +
((sourcePixel[2*x+1,2*y+1]>>2)&0x3f3f3f3f);
A side effect of this algorithm is that, if saved as PNG, the file size gets smaller. This is what it looks like:
I tried to generalise Thilo Köhler's solution (but in Python):
import itertools

STRIDE = 2
MASK = 0x3F3F3F3F
color = 0
for Δx, Δy in itertools.product(range(STRIDE), repeat=2):
    color += (get_pixel(x + Δx, y + Δy) >> STRIDE) & MASK
This works fine for scaling by 2 (quarter size result), but doesn't work for scaling by 3 or 4 or other int values. Is it possible to generalise this?
BTW for non-Pythonistas the for loop above is equivalent to this (except that the first version is scalable by changing the STRIDE):
for Δx, Δy in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    color += (get_pixel(x + Δx, y + Δy) >> STRIDE) & MASK
I'm using 32-bit ARGB values.
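One way to generalise it (my own sketch, not from the thread): the shift-and-mask trick works for 2x2 because dividing by 4 is a cheap 2-bit shift that frees exactly the bits the mask needs; for a stride of 3 you would have to divide by 9, which is not a shift. Falling back to per-channel arithmetic handles any stride, at the cost of the packed version's speed. The name `average_block` and the `get_pixel(x, y) -> 32-bit ARGB int` interface are assumptions for illustration:

```python
def average_block(get_pixel, x, y, stride):
    """Average a stride x stride block of 32-bit ARGB pixels.

    Splits each pixel into its four channels, sums them separately,
    and divides by the block's pixel count, so any integer stride works.
    """
    n = stride * stride
    sums = [0, 0, 0, 0]
    for dy in range(stride):
        for dx in range(stride):
            p = get_pixel(x + dx, y + dy)
            for i, shift in enumerate((24, 16, 8, 0)):
                sums[i] += (p >> shift) & 0xFF
    out = 0
    for i, shift in enumerate((24, 16, 8, 0)):
        out |= (sums[i] // n) << shift
    return out
```

Unlike the masked version, this keeps the full 8 bits per channel, since each channel is summed in its own integer.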
The NetPBM suite includes a utility called pamscale, which provides a few options for downsampling. It is open source, so you can try the various options and then copy the algorithm you like best (or just use libnetpbm).
Source: https://stackoverflow.com/questions/6133957/image-downsampling-algorithms