Handling very large files with openpyxl in Python
Question: I have a spreadsheet with 11,000 rows and 10 columns. I am trying to copy each row with selected columns, add additional information per line, and write the output to a txt file. Unfortunately, I am hitting serious performance issues: processing starts to slow down after about 100 rows and maxes out my processor. Is there a way to speed this up, or a better approach? I am already using read_only=True and data_only=True. The most memory-intensive part is iterating through each cell:

for i in range(probeStart, lastRow+1):
    dataRow
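One common cause of this slowdown is indexing individual cells inside a Python loop, which forces openpyxl to create a Cell object per access. A minimal sketch of the streaming alternative is below: it uses ws.iter_rows(values_only=True) in read-only mode, which yields plain value tuples row by row. The file name, the chosen columns, and the "extra-info" field are placeholders for illustration, not taken from the question; the tiny sample workbook stands in for the real 11,000-row file.

```python
import csv
from openpyxl import Workbook, load_workbook

# Build a tiny sample workbook (stand-in for the real 11,000-row file).
wb = Workbook()
ws = wb.active
ws.append(["id", "name", "value"])  # header row
ws.append([1, "alpha", 10])
ws.append([2, "beta", 20])
wb.save("sample.xlsx")
wb.close()

# Stream rows instead of indexing cells one by one: iter_rows with
# values_only=True yields plain tuples and avoids building Cell objects,
# which keeps memory flat in read_only mode.
wb = load_workbook("sample.xlsx", read_only=True, data_only=True)
ws = wb.active
with open("output.txt", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    for row in ws.iter_rows(min_row=2, values_only=True):  # skip header
        # Keep only selected columns (here: id and value, as an example)
        # and append the per-line extra information.
        writer.writerow([row[0], row[2], "extra-info"])
wb.close()
```

Because the worksheet is consumed as a stream, runtime should scale linearly with row count rather than degrading after the first few hundred rows.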