Reading sentences from a text file and appending into a list with Python 3 [closed]


Question


I'm having trouble figuring out how I would take a text file of a lengthy document and append each sentence within that text file to a list. Not all sentences will end in a period, so every sentence-ending character would have to be taken into consideration, but there could also be a '.' within a sentence, so I couldn't just cut off a sentence at every period. I'm assuming this could be fixed by also adding a condition that the period must be followed by a space, but I have no idea how to set this up so that each sentence from the text file ends up in the list as an element.

The program I'm writing is essentially going to allow for user input of a keyword search (key), and input for a number of sentences to be returned (value) before and after the sentence where the keyword is found. So it's more or less a research assistant so the user won't have to read a massive text file to find the information they want.

From what I've learned so far, putting the sentences into a list would be the easiest way to go about this, but I can't figure out the first part to it. If I could figure out this part, the rest should be easy to put together.

So I guess in short,

If I have a document of Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence. Sentence.

I need a list of the document contents in the form of:

sentence_list = [Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence, Sentence]
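For context, once I have that list, the lookup I have in mind would be roughly something like this (context_for, key, and value are just placeholder names here; value is the number of surrounding sentences to return):

def context_for(sentence_list, key, value):
    # Return, for each sentence containing the keyword, that sentence
    # together with `value` sentences before and after it.
    results = []
    for i, sentence in enumerate(sentence_list):
        if key.lower() in sentence.lower():
            start = max(0, i - value)
            results.append(sentence_list[start:i + value + 1])
    return results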

Answer 1:


That's a pretty hard problem, and it doesn't have an easy answer. You could try to write a regular expression that captures all of the known cases, but complex regular expressions tend to be hard to maintain and debug. There are a number of existing libraries that may help you with this. Most notable is the Natural Language Toolkit (NLTK), which has many tokenizers built in. You can install it with pip, e.g.

pip install nltk

Getting your sentences is then a fairly straightforward (although highly customizable) affair. Here's a simple example using the provided sentence tokenizer:

import nltk
# The Punkt sentence model must be available; if it isn't,
# run nltk.download('punkt') once first.

with open('text.txt', 'r') as in_file:
    text = in_file.read()
    sents = nltk.sent_tokenize(text)

I'm not entirely clear how your sentences are delimited if not by normal punctuation, but running the above code on your text I get:

[ "I'm having trouble figuring out how I would take a text file of a lengthy document, and append each sentence within that text file to a list.",

"Not all sentences will end in a period, so all end characters would have to be taken into consideration, but there could also be a '.'",

"within a sentence, so I couldn't just cutoff searching through a sentence at a period.",

"I'm assuming this could be fixed by also adding a condition where after the period it should be followed by a space, but I have no idea how to set this up so I get each sentence from the text file put into a list as an element.\n\n" ]

But it fails on inputs like: ["This is a sentence with.", "a period right in the middle."]

while passing on inputs like: ["This is a sentence wit.h a period right in the middle"]

I don't know if you're going to get much better than that right out of the box, though. From the nltk code:

A sentence tokenizer which uses an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences; and then uses that model to find sentence boundaries. This approach has been shown to work well for many European languages.

So the nltk solution is actually using machine learning to build a model of a sentence. Much better than a regular expression, but still not perfect. Damn natural languages. >:(

Hope this helps :)




Answer 2:


First read the text file into a string. Then use a regular expression to break the document into sentences. This is just a sample of how re.split() can be used to split the string:

import re

# Read the whole file into one string, then split on sentence-ending punctuation.
with open("test.txt", "r") as in_file:
    docstr = in_file.read()

# Note: this also splits on periods inside a sentence and drops the punctuation.
sentences = re.split(r'[.!?]', docstr)
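If you also want to require that the punctuation be followed by whitespace (the condition mentioned in the question), one possible refinement, just a sketch and still only a heuristic, is a lookbehind pattern that keeps the punctuation attached to each sentence:

import re

with open("test.txt", "r") as in_file:
    docstr = in_file.read()

# Split only where '.', '!' or '?' is immediately followed by whitespace;
# the lookbehind keeps the punctuation at the end of each sentence.
sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', docstr) if s.strip()]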


Source: https://stackoverflow.com/questions/27209278/reading-sentences-from-a-text-file-and-appending-into-a-list-with-python-3
