Algorithm to compress a JPEG to achieve a specified target image file size

Posted by 泄露秘密 on 2020-05-27 09:24:09

Question


I have a couple of JavaScript libraries (angular-ahdin, J-I-C) that I can use to compress an image the user uploaded before I submit it to the back end.

All the libraries I've seen take a quality parameter and use JPEG compression to shrink the file. There is no way to know, before compressing, what file size a given quality value will produce.

My idea is to use a binary-search-style algorithm that tries different quality percentages until it finally produces an image just under the target maximum file size.

It would start at 50% JPEG quality. If the compressed image is under the target file size, it would move to 75% quality; otherwise it would move to 25% quality, and so on. It would reach a granularity of 1% within seven iterations (since ceil(log2(100)) = 7), at which point I would stop.
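
A minimal sketch of that loop, assuming the image is already on a canvas (it uses the standard canvas.toBlob API, which accepts a 0..1 quality for image/jpeg; the helper names and iteration count are illustrative, not taken from either library):

function encodeJpeg(canvas, quality) {
  // Promisified wrapper around the standard canvas.toBlob API.
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', quality));
}

async function compressToTarget(canvas, maxBytes) {
  let lo = 0, hi = 1;  // quality search interval (toBlob expects 0..1)
  let best = null;     // best (highest-quality) blob that still fits
  for (let i = 0; i < 7; i++) {  // 7 halvings gives ~1% granularity
    const q = (lo + hi) / 2;
    const blob = await encodeJpeg(canvas, q);
    if (blob.size <= maxBytes) {
      best = blob;  // fits: try a higher quality
      lo = q;
    } else {
      hi = q;       // too big: try a lower quality
    }
  }
  return best;  // null if even the lowest quality tried was too large
}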

Assuming no library already has this feature, is there a better approach than binary search? Is there any image research suggesting a better seed value than 50%?


Answer 1:


Binary search could be good enough for your problem, but note that it only exploits the fact that compressed file size is a monotonic function of the quality parameter Q; it ignores the shape of the size-versus-Q curve, which is far from linear. There may therefore be better-performing options.

If you have a representative sample of the kind of images you will be dealing with, you may want to measure an average size-as-a-function-of-Q curve. From that you can see what an optimal starting point would be, and how fast the size changes as you vary Q.
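
A hypothetical calibration pass over such a sample might look like this (encodeJpeg is the promisified canvas.toBlob wrapper from the sketch above; the sampled quality values are arbitrary):

async function measureSizeCurve(sampleCanvases, qualities = [0.2, 0.4, 0.6, 0.8]) {
  const curve = {};  // maps quality -> average compressed size in bytes
  for (const q of qualities) {
    let totalBytes = 0;
    for (const canvas of sampleCanvases) {
      const blob = await encodeJpeg(canvas, q);
      totalBytes += blob.size;
    }
    curve[q] = totalBytes / sampleCanvases.length;
  }
  return curve;
}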

In any case, JPEG quantization tables are usually computed as a "scaled" version of the standard IJG table. The entries T[i] of the base table are typically scaled as a function of Q as

S      = Q < 50 ? 5000/Q : 200 - 2*Q
T_Q[i] = (S * T[i] + 50) / 100
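
A direct transcription of those two formulas (the integer arithmetic and the 1..255 clamp below follow what libjpeg does for baseline JPEG; treat this as a sketch rather than the exact library code):

function scaleQuantEntry(baseEntry, Q) {
  const S = Q < 50 ? Math.floor(5000 / Q) : 200 - 2 * Q;
  const scaled = Math.floor((baseEntry * S + 50) / 100);
  return Math.min(255, Math.max(1, scaled));  // baseline JPEG limits entries to 1..255
}
// At Q = 50, S = 100, so the base table is used unchanged;
// at Q = 75, S = 50, so every entry is roughly halved (finer quantization).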

Hence, if your library follows this approach, it could make sense to use an interpolating search in the region Q >= 50, where the scale factor S is linear in Q, and to adapt the search for the Q < 50 cases, where S varies as 1/Q.

Finally, if you can use a progressive compression algorithm such as JPEG 2000, you would avoid this problem entirely, since a target bitrate (equivalently, a compressed file size) can be given directly as a parameter.




Answer 2:


Due to the way the JPEG algorithm works, your "binary search" approach is the only viable one if the output size really is this critical. JPEG was simply not designed with this purpose in mind: the quality setting is used to discard information, not to work towards a specific size goal.

Because the compression ratio varies wildly depending on image content and complexity, there is also no better bet than starting at 50%. To make a better guess you would have to analyze the image, and at that point you might as well just compress it.

The only improvement I see over plain binary search is to turn it into an interpolation search. If 50% yields 40 KB and 75% yields 80 KB, and you're targeting something below 50 KB, it's a pretty safe bet to try 50% + floor(25 * 1/4) = 56% next instead of 62.5%. Since the JPEG compression ratio is not a linear function of the quality setting, however, I doubt this will be much more efficient in real-world scenarios.
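
That interpolated guess as a small helper (hypothetical names; sizes in bytes):

function interpolateQuality(qLow, sizeLow, qHigh, sizeHigh, targetSize) {
  const t = (targetSize - sizeLow) / (sizeHigh - sizeLow);  // target's position between the two sizes, 0..1
  return qLow + t * (qHigh - qLow);
}
// interpolateQuality(50, 40000, 75, 80000, 50000) === 56.25,
// which floors to the 56% suggested above.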



Source: https://stackoverflow.com/questions/34732104/algorithm-to-compress-jpeg-to-achieve-a-specified-target-image-file-size
