At all times, text encoded in UTF-8 will never give us more than a +50% file size over the same text encoded in UTF-16. True / false?

Submitted by 泪湿孤枕 on 2019-12-20 14:43:55

Question


Somewhere I read (rephrased):

If we compare a UTF-8 encoded file with a UTF-16 encoded file, the UTF-8 file may at times be 50% to 100% larger.

Am I right to say that the article is wrong, because text encoded in UTF-8 will never be more than 50% larger than the same text encoded in UTF-16?


Answer 1:


In UTF-8, ASCII is just 1 byte per character, but in general most Western languages, including English, use a few characters here and there that require 2 bytes, so actual percentages vary. Languages written in the Greek and Cyrillic scripts require at least 2 bytes per character when encoded in UTF-8.

Characters of common Eastern languages require 3 bytes in UTF-8 but only 2 in UTF-16. Note, however, that “uncommon” Eastern characters require 4 bytes in UTF-8 and UTF-16 alike.

3 is indeed only 50% greater than 2. But that is for a single code point only. It does not apply to an entire file.

The actual percentage is impossible to state with precision, because you do not know whether the balance of code points falls down in the 1- or 2-byte UTF-8 range or up in the 3- or 4-byte range. If there is white space in the Asian text, then each space is only 1 byte of UTF-8, and yet a costly 2 bytes of UTF-16.

These things do vary. You can only get precise numbers on precise text, not on general text. Code points in Asian text take 1, 2, 3, or 4 bytes of UTF-8, while in UTF-16 they variously require 2 or 4 bytes apiece.
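To make that variation concrete, here is a minimal Python sketch (the sample sentences are my own, not drawn from the case study below) that measures comparable short sentences in both encodings:

    samples = {
        "English":  "Tokyo is the capital of Japan.",
        "Russian":  "Токио, столица Японии.",
        "Japanese": "東京は日本の首都です。",
    }

    for name, text in samples.items():
        utf8 = len(text.encode("utf-8"))
        # utf-16-le avoids counting the 2-byte BOM that plain "utf-16" adds.
        utf16 = len(text.encode("utf-16-le"))
        print(f"{name:9} UTF-8: {utf8:3} B  UTF-16: {utf16:3} B  8:16 = {utf8/utf16:.0%}")

Even these toy sentences land at roughly 50%, 91%, and 150% respectively, because of how many code points fall in each byte-length range.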

Case Study

Compare the various languages’ Wikipedia pages on Tokyo to see what I mean. Even in Eastern languages, there is still plenty of ASCII going on. This alone makes your figures fluctuate. Consider:

Paras Lines Words Graphs Chars  UTF16 UTF8   8:16 16:8  Language

 519  1525  6300  43120 43147  86296 44023   51% 196%  English
 343   728  1202   8623  8650  17302  9173   53% 189%  Welsh
 541  1722  9013  57377 57404 114810 59345   52% 193%  Spanish
 529  1712  9690  63871 63898 127798 67016   52% 191%  French
 321   837  2442  18999 19026  38054 21148   56% 180%  Hungarian

 202   464   976   7140  7167  14336 11848   83% 121%  Greek
 348   937  2938  21439 21467  42936 36585   85% 117%  Russian

 355   788   613   6439  6466  12934 13754  106%  94%  Chinese, simplified
 209   419   243   2163  2190   4382  3331   76% 132%  Chinese, traditional
 461  1127  1030  25341 25368  50738 65636  129%  77%  Japanese
 410   925  2955  13942 13969  27940 29561  106%  95%  Korean

Each of those is the Tokyo Wikipedia page saved as text, not as HTML. All text is in NFC, not in NFD. The meaning of each of the columns is as follows:

  1. Paras is the number of blank-line-separated text spans.
  2. Lines is the number of line-break-separated text spans.
  3. Words is the number of whitespace-separated text spans.
  4. Graphs is the number of Unicode extended grapheme clusters, sometimes called glyphs. These are user-visible characters.
  5. Chars is the number of Unicode code points. These are, or should be, programmer-visible characters.
  6. UTF16 is how many bytes that takes up when the file is stored as UTF-16.
  7. UTF8 is how many bytes that takes up when the file is stored as UTF-8.
  8. 8:16 is the ratio of UTF-8 size to UTF-16 size, expressed as a percentage.
  9. 16:8 is the ratio of UTF-16 size to UTF-8 size, expressed as a percentage.
  10. Language is which version of the Tokyo page we’re talking about here.
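For reference, here is a rough Python sketch of how the size and ratio columns could be reproduced; "tokyo.txt" is a hypothetical placeholder for a page saved as plain text, the Paras/Lines/Words counts use naive splitting, and the Graphs column is omitted because grapheme segmentation is not in the standard library:

    with open("tokyo.txt", encoding="utf-8") as f:
        text = f.read()

    paras = sum(1 for p in text.split("\n\n") if p.strip())  # blank-line spans
    lines = len(text.splitlines())                           # line-break spans
    words = len(text.split())                                # whitespace spans
    chars = len(text)                                        # code points
    utf8  = len(text.encode("utf-8"))                        # bytes as UTF-8
    utf16 = len(text.encode("utf-16-le"))                    # bytes as UTF-16, no BOM

    print(paras, lines, words, chars, utf16, utf8,
          f"{utf8/utf16:.0%}", f"{utf16/utf8:.0%}")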

I’ve grouped the languages into Western Latin, Western non-Latin, and Eastern. Observations:

  1. Western languages that use the Latin script suffer terribly upon conversion from UTF-8 to UTF-16, with English suffering the most by expanding by 96% and Hungarian the least by expanding by 80%. All are huge.

  2. Western languages that do not use the Latin script still suffer, but only by about 17-21%.

  3. Eastern languages DO NOT SUFFER in UTF-8 the way everyone claims that they do! Behold:

    • In Korean and in (simplified) Chinese, you get only 6% bigger in UTF-8 than in UTF-16.
    • In Japanese, you get only 29% bigger in UTF-8 than in UTF-16.
    • Traditional Chinese actually got smaller in UTF-8 than in UTF-16! In fact, it costs 32% more to use UTF-16 over UTF-8 for this sample. If you look at the Lines and Words columns, it looks as though this might be due to white-space usage.

I hope that answers your question. There is simply no +50% to +100% size increase for Eastern languages when encoded in UTF-8 compared to when these same texts are encoded in UTF-16. Only when taking individual code points do you ever see numbers like that, which is a completely unreasonable metric.




Answer 2:


Yes, you are correct. Code points in the range U+0800..U+FFFF give a +50% size in UTF-8, and that is the worst case:

                   UTF-8   UTF-16
U+0000..U+007F       1        2
U+0080..U+07FF       2        2
U+0800..U+FFFF       3        2
U+010000..U+10FFFF   4        4
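You can spot-check that table with one representative code point per row; a small Python sketch:

    # One representative code point per row: 'A', 'α', '中', and U+1F600.
    for cp in (0x0041, 0x03B1, 0x4E2D, 0x1F600):
        ch = chr(cp)
        print(f"U+{cp:04X}  UTF-8: {len(ch.encode('utf-8'))} bytes  "
              f"UTF-16: {len(ch.encode('utf-16-le'))} bytes")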



Answer 3:


In UTF-8, every code point from 0-127 is stored in a single byte. Only code points 128 and above are stored using 2, 3, or 4 bytes (the original design allowed up to 6, but UTF-8 is now capped at 4).

Though UTF-8 characters may use up to 4 bytes, those 4-byte sequences are not needed for the Basic Multilingual Plane, which includes "almost all modern languages".

Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode, which include less common CJK characters and various historic scripts.

So a 100% overhead is not actually possible at all: even "exotic" characters from the Supplementary Multilingual Plane take 4 bytes in UTF-8 and 4 bytes in UTF-16 alike, so the worst case remains the +50% of the 3-byte BMP range.

For HTML documents or mixed text, it may not be necessary to switch to UTF-16 to save space:

Characters U+0800 through U+FFFF use three bytes in UTF-8, but only two in UTF-16. As a result, text in (for example) Chinese, Japanese or Hindi could take more space in UTF-8 if there are more of these characters than there are ASCII characters. This happens for pure text, but rarely for HTML documents. For example, both the Japanese and the Hindi Unicode articles on Wikipedia take more space if saved as UTF-16 than the original UTF-8 version.
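A toy illustration of that effect (the strings here are made up for the example, not taken from the articles cited):

    pure = "東京都は日本の首都である。"  # CJK only: 3 bytes/char in UTF-8, 2 in UTF-16
    html = '<p class="intro"><a href="/wiki/Tokyo">' + pure + "</a></p>"

    for label, s in (("pure text", pure), ("with markup", html)):
        u8, u16 = len(s.encode("utf-8")), len(s.encode("utf-16-le"))
        print(f"{label:12} UTF-8: {u8:3} B  UTF-16: {u16:3} B")

The pure text is larger in UTF-8, but the ASCII-heavy markup costs 1 byte per character in UTF-8 versus 2 in UTF-16, so even a modest amount of it tips the balance back in UTF-8's favour.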

See the UTF-8 to UTF-16 comparison on Wikipedia.


Joel Spolsky wrote a great article about Unicode; I can really recommend it:

The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)




Answer 4:


If you have one byte for the character and add on a second byte, I'd call that a 100% increase, not 50%. I think that's what the author means.

If I write X characters with N bytes/character to a file, I'll have N·X bytes in that file. So you can see how doubling or tripling the number of bytes per character has a linear effect on the size of the file.
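For example, 10,000 BMP CJK characters cost 3 × 10,000 = 30,000 bytes in UTF-8 versus 2 × 10,000 = 20,000 bytes in UTF-16, so the +50% per-character ratio carries over unchanged to the whole file; conversely, 10,000 ASCII characters cost 10,000 bytes in UTF-8 versus 20,000 in UTF-16, a 100% increase in the other direction.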



Source: https://stackoverflow.com/questions/6883434/at-all-times-text-encoded-in-utf-8-will-never-give-us-more-than-a-50-file-size
