How to do a Python split() on languages (like Chinese) that don't use whitespace as word separator?

梦如初夏 2020-12-03 03:25

I want to split a sentence into a list of words.

For English and other European languages this is easy; just use split():

>>> "This is a sentence.".split()
['This', 'is', 'a', 'sentence.']
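
For a sentence written in Chinese, however, there is no whitespace between words, so the same call returns the whole sentence as a single item. A quick demonstration with the u"这是一个句子" example discussed in the answers below:

>>> u"这是一个句子".split()
[u'\u8fd9\u662f\u4e00\u4e2a\u53e5\u5b50']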


        
9 Answers
既然无缘 2020-12-03 04:17

    You can do this but not with standard library functions. And regular expressions won't help you either.

    The task you are describing is part of the field called Natural Language Processing (NLP). There has been quite a lot of work done already on splitting Chinese words at word boundaries. I'd suggest that you use one of these existing solutions rather than trying to roll your own.

    • Chinese NLP
    • chinese - The Stanford NLP (Natural Language Processing) Group
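
    As one concrete example, here is a minimal sketch using the third-party jieba package, a widely used open-source Chinese segmenter (not one of the links above); the exact segmentation it produces depends on its dictionary:

    >>> import jieba                       # third-party: pip install jieba
    >>> list(jieba.cut(u"这是一个句子"))     # cut() returns a generator of word strings
    [u'\u8fd9\u662f', u'\u4e00\u4e2a', u'\u53e5\u5b50']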

    Where does the ambiguity come from?

    What you have listed there are Chinese characters. These are roughly analogous to letters or syllables in English (but not quite the same, as NullUserException points out in a comment). There is no ambiguity about where the character boundaries are - this is very well defined. But you asked not for character boundaries but for word boundaries, and Chinese words can consist of more than one character.

    If all you want is to find the characters, then this is very simple and does not require an NLP library. Simply decode the message into a unicode string (if it is not one already), then convert that string to a list with the built-in function list(). This will give you a list of the characters in the string. For your specific example:

    >>> list(u"这是一个句子")
    [u'\u8fd9', u'\u662f', u'\u4e00', u'\u4e2a', u'\u53e5', u'\u5b50']
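
    For reference, the same call under Python 3, where str is already Unicode and no decode step is needed:

    >>> list("这是一个句子")
    ['这', '是', '一', '个', '句', '子']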
    
