Question
I have a spreadsheet with 11,000 rows and 10 columns. I am trying to copy each row with selected columns, add additional information per line, and output to a txt file.
Unfortunately, I am having really bad performance issues: the run slows to a crawl after about 100 rows and pins my processor. Is there a way to speed this up or use a better methodology? I am already using read_only=True and data_only=True.
The most memory-intensive part is iterating through each cell:
for i in range(probeStart, lastRow + 1):
    dataRow = ""
    for j in range(1, col + 2):
        dataRow = dataRow + str(sheet.cell(row=i, column=j).value) + "\t"
    sigP = db.get(str(sheet.cell(row=i, column=1).value), "notfound")  # my additional information
    a = str(sheet.cell(row=i, column=max_column - 1).value) + "\t"
    b = str(sheet.cell(row=i, column=max_column).value) + "\t"
    string1 = dataRow + a + b + sigP + "\n"
    w.write(string1)
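For context, a minimal sketch of how the workbook and output file are presumably opened (the file names and the variable names wb, sheet, and w are assumptions; only the read_only/data_only flags are stated in the question):

from openpyxl import load_workbook

# read_only streams the sheet instead of loading it all into memory;
# data_only returns cached formula results rather than formula strings
wb = load_workbook("input.xlsx", read_only=True, data_only=True)  # hypothetical file name
sheet = wb.active

# plain-text output written line by line
w = open("output.txt", "w")  # hypothetical file name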
Answer 1:
Question: Is there a way to speed this up or use better methodology?
Try the following to see if it improves performance:
Note: I don't know the values of col and max_column!
My example uses 4 columns and skips Column C.
Data:
['A1', 'B1', 'C1', 'D1'],
['A2', 'B2', 'C2', 'D2']
from openpyxl.utils import range_boundaries

min_col, min_row, max_col, max_row = range_boundaries('A1:D2')

for row_cells in ws.iter_rows(min_col=min_col, min_row=min_row,
                              max_col=max_col, max_row=max_row):
    # Slice column values up to B
    data = [cell.value for cell in row_cells[:2]]
    # Extend the list with sliced column values from D to the end
    data.extend([cell.value for cell in row_cells[3:]])
    # Append db.get(Column A.value)
    data.append(db.get(row_cells[0].value, "notfound"))
    # Join all list values delimited with \t
    print('{}'.format('\t'.join(data)))
    # Write to the output file
    #w.write('\t'.join(data) + '\n')
Output:
A1 B1 D1 notfound
A2 B2 D2 notfound
Tested with Python: 3.4.2 - openpyxl: 2.4.1
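Applied back to the original loop, a hedged sketch (reusing the question's probeStart, lastRow, col, max_column, db, sheet, and w, with the same column layout assumed) might look like this; the key change is touching each row once via iter_rows instead of calling sheet.cell() repeatedly, which is expensive in read-only mode because the sheet is streamed rather than held in memory:

for row_cells in sheet.iter_rows(min_row=probeStart, max_row=lastRow,
                                 min_col=1, max_col=max_column):
    # first col+1 columns, as in the original inner loop
    values = [str(cell.value) for cell in row_cells[:col + 1]]
    # the two trailing columns the original code added as a and b
    values.append(str(row_cells[max_column - 2].value))
    values.append(str(row_cells[max_column - 1].value))
    # additional information keyed on column A
    values.append(db.get(str(row_cells[0].value), "notfound"))
    w.write("\t".join(values) + "\n")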
Source: https://stackoverflow.com/questions/45456300/handling-very-large-files-with-openpyxl-python