Python “denormalize” unicode combining characters

Submitted by 强颜欢笑 on 2021-01-27 13:45:17

Question


I'm looking to standardize some unicode text in Python. I'm wondering if there's an easy way to get the "denormalized" form of a combining unicode character in Python? E.g. if I have the sequence u'o\xaf' (i.e. latin small letter o followed by combining macron), how would I get ō (latin small letter o with macron)? It's easy to go the other way:

import unicodedata

o = unicodedata.lookup("LATIN SMALL LETTER O WITH MACRON")  # u'\u014d', the precomposed form
o = unicodedata.normalize('NFD', o)                         # u'o\u0304', 'o' + combining macron

Answer 1:


As I have commented, U+00AF is not a combining macron. But you can convert it into U+0020 U+0304 with an NFKD transform.

>>> unicodedata.normalize('NFKD', u'o\u00af')
u'o \u0304'

Then you could remove the space and get ō with NFC.
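
A minimal sketch of that full pipeline (assuming the input really is u'o\u00af', i.e. 'o' followed by the spacing MACRON character):

import unicodedata

s = u'o\u00af'                                 # 'o' + U+00AF MACRON (a spacing character, not combining)
decomposed = unicodedata.normalize('NFKD', s)  # u'o \u0304' (space + combining macron)
stripped = decomposed.replace(u' ', u'')       # drop the space introduced by the compatibility mapping
composed = unicodedata.normalize('NFC', stripped)
print(composed)                                # ō, i.e. u'\u014d' LATIN SMALL LETTER O WITH MACRON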


(Note that NFKD decomposition is quite aggressive and can lose some semantics: anything that is merely "compatible" will be separated out, e.g.

  • '½' (U+00BD) ↦ '1' '⁄' (U+2044) '2';
  • '²' (U+00B2) ↦ '2';
  • '①' (U+2460) ↦ '1';

etc.)
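
For example, those mappings can be checked interactively (output shown with the Python 2 unicode repr used above):

>>> import unicodedata
>>> unicodedata.normalize('NFKD', u'\u00bd')   # ½
u'1\u20442'
>>> unicodedata.normalize('NFKD', u'\u00b2')   # ²
u'2'
>>> unicodedata.normalize('NFKD', u'\u2460')   # ①
u'1'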




Answer 2:


o = unicodedata.normalize('NFC', o)
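
This composes a decomposed sequence back into its precomposed form. A minimal round trip, starting from the question's own decomposition (variable names here are just illustrative):

import unicodedata

o = unicodedata.lookup("LATIN SMALL LETTER O WITH MACRON")  # u'\u014d'
decomposed = unicodedata.normalize('NFD', o)                # u'o\u0304'
recomposed = unicodedata.normalize('NFC', decomposed)       # back to u'\u014d'
assert recomposed == o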


Source: https://stackoverflow.com/questions/3126929/python-denormalize-unicode-combining-characters
