How to tokenize natural English text in an input file in python?

不知归路 2021-01-03 05:26

I want to tokenize an input file in Python. Please suggest how; I am new to Python.

I have read a little about regular expressions, but I am still somewhat confused.

3 Answers
  •  渐次进展
    2021-01-03 06:03

    Using NLTK

    If your file is small:

    • Open the file with the context manager with open(...) as x,
    • then do a .read() and tokenize it with word_tokenize()

    [code]:

    from nltk.tokenize import word_tokenize

    # word_tokenize needs the 'punkt' models: run nltk.download('punkt') once
    with open('myfile.txt') as fin:
        tokens = word_tokenize(fin.read())
    

    If your file is larger:

    • Open the file with the context manager with open(...) as x,
    • read the file line by line with a for-loop
    • tokenize the line with word_tokenize()
    • output to your desired format (with the write flag set)

    [code]:

    from __future__ import print_function
    from nltk.tokenize import word_tokenize
    with open('myfile.txt') as fin, open('tokens.txt', 'w') as fout:
        for line in fin:
            tokens = word_tokenize(line)
            print(' '.join(tokens), end='\n', file=fout)
    

    Using SpaCy

    from __future__ import print_function
    from spacy.tokenizer import Tokenizer
    from spacy.lang.en import English

    nlp = English()
    tokenizer = Tokenizer(nlp.vocab)

    # Note the 'w' flag on the output file, and that the tokenizer is
    # called directly (it has no .tokenize() method); it returns a Doc
    # whose tokens carry the string in their .text attribute.
    with open('myfile.txt') as fin, open('tokens.txt', 'w') as fout:
        for line in fin:
            tokens = tokenizer(line)
            print(' '.join(tok.text for tok in tokens), end='\n', file=fout)
    
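    Using plain regular expressions

    Since the question mentions regular expressions: a minimal regex-only sketch, assuming a "token" is simply a run of word characters. This is much cruder than NLTK's or spaCy's rules (it silently drops all punctuation), but needs no external library:

```python
import re

def regex_tokenize(text):
    # \w+ matches runs of letters, digits and underscores;
    # punctuation and whitespace are discarded entirely
    return re.findall(r"\w+", text)

print(regex_tokenize("I want to tokenize an input file."))
# → ['I', 'want', 'to', 'tokenize', 'an', 'input', 'file']
```

    For contractions or hyphenated words ("don't", "state-of-the-art") this splits in ways you probably don't want, which is exactly where word_tokenize() above does better.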
