I have some rather large pandas DataFrames and I'd like to use the new bulk SQL mappings to upload them to a Microsoft SQL Server via SQLAlchemy. The pandas.to_sql method, while convenient, is slow.
As of pandas 0.24, to_sql accepts a method parameter that enables multi-row inserts, so it's no longer necessary to work around this issue with hand-rolled SQLAlchemy.
Set method='multi' when calling pandas.DataFrame.to_sql.
In this example, assuming an existing SQLAlchemy engine e and a target schema, the call would be
df.to_sql(table, schema=schema, con=e, index=False, if_exists='replace', method='multi')
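Since the question targets Microsoft SQL Server, here is a minimal self-contained sketch of the same call (the connection string, table name, and schema are placeholders for your environment). One caveat: SQL Server limits a single statement to 2100 bound parameters, and with method='multi' every cell in a chunk becomes a parameter, so choose a chunksize that keeps rows times columns under that limit:

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string -- swap in your own server, database, and driver
    engine = create_engine(
        "mssql+pyodbc://user:password@server/database"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

    # method='multi' packs many rows into a single INSERT statement.
    # SQL Server allows at most 2100 bound parameters per statement, so
    # keep chunksize * number_of_columns safely below that limit.
    df.to_sql(
        "my_table",            # placeholder table name
        con=engine,
        schema="dbo",
        index=False,
        if_exists="replace",
        method="multi",
        chunksize=1000,        # 1000 rows * 2 columns = 2000 parameters < 2100
    )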
Answer sourced from the pandas.DataFrame.to_sql documentation.
Worth noting that I've only tested this with Redshift. Please let me know how it goes on other databases so I can update this answer.