Deflate and fixed Huffman codes

Submitted by 早过忘川 on 2021-01-27 11:29:54

Question


I'm trying to implement a deflate compressor, and I have to decide whether to compress a block using the static Huffman code or to create a dynamic one.

What is the rationale behind the code lengths assigned by the static code?

(This is the table included in the RFC:)

  Lit Value    Bits
  ---------    ----
    0 - 143     8
  144 - 255     9
  256 - 279     7
  280 - 287     8

I thought the static code would be biased more towards plain ASCII text; instead it looks like it slightly favors compressing the RLE lengths.

What is a good heuristic to decide whether to use the static code?

I was thinking of building a probability distribution from a sample of the input data and computing a distance (maybe EMD?) from the distribution implied by the static code.


Answer 1:


I would guess that the creator of the code took a large sample of literals and lengths from compressed data, likely including executables along with text, and found typical code lengths over the large set. Those were then approximated with the table shown. However, the author passed away many years ago, so we'll never know for sure.

You don't need a heuristic. Once you have done the work to find matching strings, it is comparatively very fast to compute the number of bits in the block for both the dynamic and static representations. Then simply pick the smaller one, or the static one if they are equal (it decodes faster).
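A minimal sketch of that comparison, assuming the block's literal/length and distance symbol frequencies have already been tallied (all function names here are my own; the Huffman construction is unrestricted rather than length-limited, and header_guess is a crude stand-in for the real dynamic header cost):

import heapq
from itertools import count

def fixed_litlen_length(sym):
    # Code lengths of the fixed literal/length code from RFC 1951.
    if sym <= 143: return 8
    if sym <= 255: return 9
    if sym <= 279: return 7
    return 8                      # 280..287

def huffman_lengths(freqs):
    # Plain Huffman code lengths; real deflate must also enforce the
    # 15-bit maximum length, which this sketch ignores.
    tie = count()
    heap = [(f, next(tie), {s: 0}) for s, f in freqs.items() if f > 0]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {s: 1 for s in heap[0][2]}
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2] if heap else {}

def static_bits(litlen_freq, dist_freq):
    # Extra bits for lengths and distances are identical for both block
    # types, so they are left out of both estimates.
    bits = sum(f * fixed_litlen_length(s) for s, f in litlen_freq.items())
    return bits + 5 * sum(dist_freq.values())   # fixed distance codes are all 5 bits

def dynamic_bits(litlen_freq, dist_freq, header_guess=100):
    lit = huffman_lengths(litlen_freq)
    dst = huffman_lengths(dist_freq)
    bits = sum(litlen_freq[s] * l for s, l in lit.items())
    bits += sum(dist_freq[s] * l for s, l in dst.items())
    return bits + header_guess    # rough stand-in for the dynamic block header

def choose_block_type(litlen_freq, dist_freq):
    # litlen_freq should already include one count for the end-of-block symbol 256.
    s = static_bits(litlen_freq, dist_freq)
    d = dynamic_bits(litlen_freq, dist_freq)
    return "static" if s <= d else "dynamic"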




Answer 2:


I don't know about rationale, but there was a small amount of irrationale in choosing the static code lengths:

In the table in your question, the maximum static code number there is 287, but the DEFLATE specification only allows up to code 285, meaning code lengths have wastefully been assigned to two invalid codes. (And not even the longest ones either!) It's a similar story with the table for distance codes, with 32 codes having lengths assigned, but only 30 valid.

So there are some easy improvements that could have been made, but that said, without some prior knowledge of the data, it's not really possible to produce anything that's massively more efficient in general. The "flatness" of the table (no code longer than 9 bits) limits the worst-case expansion to 1 extra bit per byte of incompressible data.

I think the main rationale behind the groupings is that by keeping group sizes to a multiple of 8, it's possible to tell which group a code belongs to by looking at its 5 most significant bits, which also tells you its length, along with what value to add to get the symbol value itself:

00000 00   .. 00101 11     7 bits  + 256   -> (256..279)
00110 000  .. 10111 111    8 bits  -  48   -> (  0..143)
11000 000  .. 11000 111    8 bits  +  88   -> (280..287)
11001 0000 .. 11111 1111   9 bits  - 256   -> (144..255)

So in theory you could set up a lookup table with 32 entries to quickly read in the codes, but it's an uncommon case and probably not worth optimising for.
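A sketch of that 32-entry table, treating the next 9 bits of input as an MSB-first integer (the helper names are my own; note that deflate packs Huffman code bits MSB-first within a code but fills bytes starting from the least significant bit, which a real bit reader has to handle separately):

def build_fixed_litlen_table():
    # One entry per value of the top 5 bits: (code length, value adjustment).
    table = []
    for top5 in range(32):
        if top5 <= 0b00101:
            table.append((7, +256))   # 0000000..0010111     -> 256..279
        elif top5 <= 0b10111:
            table.append((8, -48))    # 00110000..10111111   -> 0..143
        elif top5 == 0b11000:
            table.append((8, +88))    # 11000000..11000111   -> 280..287
        else:
            table.append((9, -256))   # 110010000..111111111 -> 144..255
    return table

FIXED_LITLEN_TABLE = build_fixed_litlen_table()

def decode_fixed_litlen(peek9):
    # peek9: the next 9 bits of the code stream, MSB-first, as an integer 0..511.
    length, adjust = FIXED_LITLEN_TABLE[peek9 >> 4]   # top 5 bits pick the group
    code = peek9 >> (9 - length)                      # keep exactly `length` bits
    return code + adjust, length                      # (symbol value, bits consumed)

# Example: a run of seven zero bits is the end-of-block symbol.
assert decode_fixed_litlen(0b000000000) == (256, 7)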

There are only really two cases (with some overlap) where Fixed Huffman blocks are likely to be the most efficient:

  • Where the input size in bytes is very small, Fixed Huffman can be more efficient than an Uncompressed (stored) block, because a stored block uses a 32-bit header while Fixed Huffman needs only a 7-bit end-of-block code, plus a potential 1-bit overhead per byte (a rough worked comparison follows this list).

  • Where the output size is very small (i.e. small-ish, highly compressible data), Fixed Huffman can be more efficient than Dynamic Huffman, again because a dynamic block spends a certain amount of space on an additional header. (A practical minimum header size is difficult to calculate, but I'd say at least 64 bits, probably more.)
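A rough worked comparison for the first case, using my own worst-case assumptions (a 3-bit block header for each type, 9 bits per literal for fixed Huffman, and a fixed 5-bit alignment pad for the stored block, which in reality varies from 0 to 7 bits):

def stored_block_bits(n_bytes, align_pad=5):
    # 3-bit header, pad to a byte boundary, 16-bit LEN + 16-bit NLEN, raw bytes.
    return 3 + align_pad + 32 + 8 * n_bytes

def fixed_block_worst_case_bits(n_bytes):
    # 3-bit header, at most 9 bits per literal, 7-bit end-of-block code.
    return 3 + 9 * n_bytes + 7

for n in (4, 16, 32, 64):
    print(n, stored_block_bits(n), fixed_block_worst_case_bits(n))
# Even with no matches at all, fixed Huffman beats a stored block until
# roughly 30 bytes of completely incompressible input.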

That said, I've found Fixed Huffman blocks genuinely helpful from a developer's perspective, because it's very easy to implement a deflate-compatible compressor using only Fixed Huffman blocks, and then to improve it iteratively from there into more efficient algorithms.
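As an aside, zlib's Z_FIXED strategy forces exactly this behaviour, which makes it a convenient reference stream while developing your own static-block encoder. A small sketch using Python's zlib binding (the Z_FIXED constant may be missing from older Python builds, hence the fallback to its zlib.h value of 4):

import zlib

Z_FIXED = getattr(zlib, "Z_FIXED", 4)   # value of Z_FIXED in zlib.h

data = b"an example payload, an example payload, an example payload"

# Raw deflate stream (negative wbits) restricted to fixed Huffman codes.
fixed = zlib.compressobj(6, zlib.DEFLATED, -15, 8, Z_FIXED)
fixed_out = fixed.compress(data) + fixed.flush()

# Same data with the default strategy: zlib picks stored/fixed/dynamic per block.
default = zlib.compressobj(6, zlib.DEFLATED, -15, 8, zlib.Z_DEFAULT_STRATEGY)
default_out = default.compress(data) + default.flush()

print(len(fixed_out), "bytes with Z_FIXED,", len(default_out), "bytes with the default strategy")
assert zlib.decompress(fixed_out, -15) == data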



Source: https://stackoverflow.com/questions/46654777/deflate-and-fixed-huffman-codes
