How to speed up bulk insert to MS SQL Server from CSV using pyodbc
Question: Below is my code that I'd like some help with. I have to run it over 1,300,000 rows, meaning it takes up to 40 minutes to insert ~300,000 rows. I figure bulk insert is the route to go to speed it up? Or is it because I'm iterating over the rows via the `for data in reader:` portion?

```python
# Opens the prepped csv file
with open(os.path.join(newpath, outfile), 'r') as f:
    # hooks csv reader to file
    reader = csv.reader(f)
    # pulls out the columns (which match the SQL table)
    columns = next(reader)
    # trims
```
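One commonly suggested speed-up for this pattern is pyodbc's `fast_executemany` cursor attribute, which sends parameter arrays to the driver instead of issuing one round-trip per row. The sketch below assumes a hypothetical table name and connection object, and takes the first CSV row as the column list, matching the question's code; it is an illustration of the technique, not the asker's exact script.

```python
import csv
import itertools


def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch


def bulk_load(csv_path, conn, table, batch_size=10_000):
    """Insert CSV rows via pyodbc's fast_executemany in batches.

    `table` and the connection `conn` are assumptions for illustration;
    the first CSV row is treated as the column list, as in the question.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        columns = next(reader)  # header row names the target columns
        placeholders = ", ".join("?" for _ in columns)
        sql = (
            f"INSERT INTO {table} ({', '.join(columns)}) "
            f"VALUES ({placeholders})"
        )
        cursor = conn.cursor()
        cursor.fast_executemany = True  # batch parameters instead of row-by-row
        for batch in chunked(reader, batch_size):
            cursor.executemany(sql, batch)
        conn.commit()
```

Batching with `executemany` also keeps memory bounded for a 1.3M-row file, since the whole CSV is never held in memory at once.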