How does UTF-16 achieve self-synchronization?

Submitted by 梦想的初衷 on 2020-06-29 05:09:15

Question


I know that UTF-16 is a self-synchronizing encoding scheme. I also read the Wikipedia article below, but did not quite get it.

Self Synchronizing Code

Can you please explain it to me with a UTF-16 example?


Answer 1:


In UTF-16, characters outside the BMP are represented using a surrogate pair, in which the first code unit (CU) lies in the range 0xD800–0xDBFF and the second in 0xDC00–0xDFFF. Each CU carries 10 bits of the code point (after 0x10000 has been subtracted from it). A character in the BMP is encoded as a single CU equal to its code point.
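For instance, here is a minimal Java sketch of that 10-bit split (the code point U+1F600 is just an assumed example; any code point above U+FFFF works the same way):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        int codePoint = 0x1F600;                   // any code point above U+FFFF
        int v = codePoint - 0x10000;               // the 20-bit value to split
        char high = (char) (0xD800 + (v >> 10));   // top 10 bits -> high surrogate
        char low  = (char) (0xDC00 + (v & 0x3FF)); // bottom 10 bits -> low surrogate
        System.out.printf("U+%X -> 0x%X 0x%X%n", codePoint, (int) high, (int) low);
        // Prints: U+1F600 -> 0xD83D 0xDE00
        // The JDK's Character.highSurrogate/lowSurrogate compute the same values.
    }
}
```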

Now the synchronization is easy. Given the position of any arbitrary code unit (see the sketch after this list):

  • If the code unit is in the 0xD800–0xDBFF range, it's the first code unit of two; just read the next one and decode. Voilà, we have a full character outside the BMP
  • If the code unit is in the 0xDC00–0xDFFF range, it's the second code unit of two; go back one unit to read the first part, or advance to the next unit to skip the current character
  • If it's in neither of those ranges, it's a BMP character; we don't need to do anything more
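Here is a minimal Java sketch of those three cases, assuming we land at an arbitrary index in a char array (the helper name decodeAt is illustrative, not a standard API):

```java
public class Resync {
    static int decodeAt(char[] units, int i) {
        if (Character.isLowSurrogate(units[i])) {
            i--; // landed on the second CU of a pair: step back one unit
        }
        if (Character.isHighSurrogate(units[i])) {
            // combine the pair's two 10-bit halves into the full code point
            return Character.toCodePoint(units[i], units[i + 1]);
        }
        return units[i]; // a BMP character encodes itself
    }

    public static void main(String[] args) {
        char[] units = "a\uD83D\uDE00b".toCharArray(); // 'a', U+1F600, 'b'
        for (int i = 0; i < units.length; i++) {
            System.out.printf("index %d -> U+%X%n", i, decodeAt(units, i));
        }
        // Indices 1 and 2 both resolve to U+1F600: starting on either half
        // of the pair, we recover the same character without rescanning.
    }
}
```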

In UTF-16 the CU is the unit, i.e. the smallest element: we work at the CU level and read CUs one by one instead of byte by byte. Because of that, along with historical reasons, UTF-16 is only self-synchronizing at the CU level.

The point of self-synchronization is to know immediately whether we're in the middle of a character, instead of having to re-read from the start and check. UTF-16 allows us to do that.

Since the ranges for the high surrogates, low surrogates, and valid BMP characters are disjoint, it is not possible for a surrogate to match a BMP character, or for (parts of) two adjacent characters to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units. UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string (UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte).

https://en.wikipedia.org/wiki/UTF-16#Description

Of course, that means UTF-16 may not be suitable for working over a medium without error correction/detection, like a bare network environment. However, in a proper local environment it's a lot better than working without self-synchronization. For example, in DOS/V for Japanese, every time you press Backspace you must iterate from the start to know which character was deleted, because in the awful Shift-JIS encoding there's no way to know how long the character before the cursor is without a length map.
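As a rough illustration of that Backspace case in UTF-16, here is a sketch in Java (using StringBuilder as a stand-in for an editor buffer; that choice is an assumption for the demo, not how DOS/V actually worked):

```java
public class Backspace {
    // Delete the previous character by inspecting a single trailing code unit;
    // no rescan from the start of the buffer is needed.
    static void deleteLastChar(StringBuilder buf) {
        int end = buf.length();
        if (end == 0) return;
        // If the last unit is a low surrogate, the character occupies two units.
        int start = Character.isLowSurrogate(buf.charAt(end - 1)) ? end - 2 : end - 1;
        buf.delete(start, end);
    }

    public static void main(String[] args) {
        StringBuilder buf = new StringBuilder("abc\uD83D\uDE00"); // "abc" + U+1F600
        deleteLastChar(buf);
        System.out.println(buf); // "abc": the whole emoji was removed at once
    }
}
```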



Source: https://stackoverflow.com/questions/52226539/how-does-utf-16-achieve-self-synchronization
