Question
I'm adding the text contained in the second column of a number of CSV files into one list, to later perform sentiment analysis on each item in the list. My code fully works for large CSV files at the moment, but the sentiment analysis I'm performing on the items in the list takes too long, which is why I want to read only the first 200 rows per CSV file. The code looks as follows:
import csv
import glob
import math
import string

import nltk
import numpy  # "lumpy" in the original is presumably a typo for numpy
from nltk.corpus import stopwords

import sentiment_mod as s

lijst = glob.glob('21cf/*.csv')
tweets1 = []
stopwords_set = set(stopwords.words("english"))

for item in lijst:
    with open(item, encoding='latin-1') as d:
        reader1 = csv.reader(d)
        next(reader1)  # skip the header row
        for row in reader1:
            tweets1.append(row[2])

words_cleaned = [" ".join(w for w in sentence.split()
                          if 'http' not in w and not w.startswith('@'))
                 for sentence in tweets1]
words_filtered = [e.lower() for e in words_cleaned]
words_without_stopwords = [word for word in words_filtered if word not in stopwords_set]
tweets1 = list(filter(None, words_without_stopwords))
How do I make sure to read only the first 200 rows per CSV file with the csv reader?
Answer 1:
The shortest and most idiomatic way is probably to use itertools.islice:
import itertools
...
for row in itertools.islice(reader1, 200):
    ...
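A minimal self-contained sketch of this approach, using an in-memory sample CSV (in place of the files in 21cf/) and the variable names from the question:

```python
import csv
import io
import itertools

# Hypothetical sample data standing in for one of the CSV files:
# a header row followed by 500 data rows.
sample = "id,user,text\n" + "\n".join(f"{i},u{i},tweet {i}" for i in range(500))

reader1 = csv.reader(io.StringIO(sample))
next(reader1)  # skip the header, as in the question

# islice stops after yielding at most 200 rows from the reader
rows = [row[2] for row in itertools.islice(reader1, 200)]
print(len(rows))   # 200
print(rows[0])     # tweet 0
```

Because `islice` consumes the reader lazily, the remaining rows of the file are never parsed, which is exactly what you want for large files.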
Answer 2:
You can just add a count and break when it reaches 200, or add a loop with a range of 200.
Define a variable right before your for loop for rows starts:
count = 0

Then inside your loop:

count = count + 1
if count == 200:
    break
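Put together as a runnable sketch (again with an in-memory sample CSV standing in for the question's files):

```python
import csv
import io

# Hypothetical sample data: a header row followed by 500 data rows
sample = "id,user,text\n" + "\n".join(f"{i},u{i},tweet {i}" for i in range(500))

reader1 = csv.reader(io.StringIO(sample))
next(reader1)  # skip the header

tweets1 = []
count = 0
for row in reader1:
    tweets1.append(row[2])
    count = count + 1
    if count == 200:   # stop after the first 200 data rows
        break

print(len(tweets1))  # 200
```

The `break` exits the loop as soon as 200 rows have been collected, so the rest of the file is left unread.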
Answer 3:
Pandas is a popular module for manipulating data such as CSVs. Using pandas, this is how you could limit the number of rows:
import pandas as pd
# If you only want to read the first 200 (non-header) rows:
pd.read_csv(..., nrows=200)
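A small sketch of the pandas route, assuming the same in-memory sample CSV used above; `nrows` counts only data rows, not the header, and `.iloc[:, 2]` picks out the same column the question reads as `row[2]`:

```python
import io

import pandas as pd

# Hypothetical sample data: a header row followed by 500 data rows
sample = "id,user,text\n" + "\n".join(f"{i},u{i},tweet {i}" for i in range(500))

# nrows=200 reads only the first 200 data rows of the file
df = pd.read_csv(io.StringIO(sample), nrows=200)
tweets = df.iloc[:, 2].tolist()

print(len(tweets))  # 200
```

For very wide files you can also pass `usecols` to read only the column you need, which avoids parsing the rest of each row.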
Source: https://stackoverflow.com/questions/50490257/only-reading-first-n-rows-of-csv-file-with-csv-reader-in-python