Bulk Insert A Pandas DataFrame Using SQLAlchemy

Asked by 死守一世寂寞 on 2020-11-28 22:07

I have some rather large pandas DataFrames and I'd like to use the new bulk SQL mappings to upload them to a Microsoft SQL Server via SQLAlchemy. The pandas.to_sql method,
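For context, a minimal sketch of the kind of bulk upload being asked about, using pandas' own `to_sql`. An in-memory SQLite database stands in here for the SQL Server connection; for SQL Server with pyodbc the engine URL would be along the lines of `mssql+pyodbc://...`, and `fast_executemany=True` can be passed to `create_engine` to speed up the batched INSERTs:

```python
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite stands in for the real SQL Server connection here.
# For SQL Server (with pyodbc) the engine might look like:
#   create_engine("mssql+pyodbc://user:password@my_dsn", fast_executemany=True)
engine = create_engine("sqlite://")

# A small stand-in for the "rather large" DataFrame
df = pd.DataFrame({
    "Date": ["2020-11-28", "2020-11-29"],
    "Type": ["A", "B"],
    "Value": [1.5, 2.5],
})

# chunksize batches rows per round trip; method="multi" packs
# multiple rows into a single INSERT statement
df.to_sql("tableName", engine, if_exists="replace", index=False,
          chunksize=1000, method="multi")
```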

10 Answers
  •  醉酒成梦
    2020-11-28 22:19

    This may have been answered already, but I found a solution by collating different answers on this site and aligning it with SQLAlchemy's documentation.

    1. The table needs to already exist in db1, with an auto-incrementing primary-key index.
    2. The class Current needs to align with the DataFrame imported from the CSV and with the table in db1.

    Hope this helps whoever comes here and wants to mix pandas and SQLAlchemy in a quick way.

    # Python 3: quote_plus lives in urllib.parse
    from urllib.parse import quote_plus as urlquote
    import sqlalchemy
    from sqlalchemy import create_engine
    from sqlalchemy.orm import declarative_base
    from sqlalchemy import Column, Integer, String, Numeric
    from sqlalchemy.orm import sessionmaker
    import pandas as pd
    
    
    # Set up of the engine to connect to the database
    # the urlquote is used for passing the password which might contain special characters such as "/"
    engine = create_engine('mysql://root:%s@localhost/db1' % urlquote('weirdPassword*withsp€cialcharacters'), echo=False)
    conn = engine.connect()
    Base = declarative_base()
    
    # Declaration of the class used to write to the database. This structure is standard and should align with SQLAlchemy's documentation.
    class Current(Base):
        __tablename__ = 'tableName'
    
        id = Column(Integer, primary_key=True)
        Date = Column(String(500))
        Type = Column(String(500))
        Value = Column(Numeric())
    
        def __repr__(self):
            return "(id='%s', Date='%s', Type='%s', Value='%s')" % (self.id, self.Date, self.Type, self.Value)
    
    # Set up of the table in db and the file to import
    fileToRead = 'file.csv'
    tableToWriteTo = 'tableName'
    
    # Read the CSV into a pandas DataFrame
    df_to_be_written = pd.read_csv(fileToRead)
    # orient='records' is the key here: it produces the list-of-dicts format the docs describe for bulk inserts
    listToWrite = df_to_be_written.to_dict(orient='records')
    
    # Reflect the existing table (MetaData(bind=..., reflect=True) and autoload=True are deprecated)
    metadata = sqlalchemy.MetaData()
    table = sqlalchemy.Table(tableToWriteTo, metadata, autoload_with=engine)
    
    # Open the session
    Session = sessionmaker(bind=engine)
    session = Session()
    
    # Insert the dataframe into the database in one bulk operation
    # (executed through the session so that session.commit() actually commits it)
    session.execute(table.insert(), listToWrite)
    
    # Commit the changes
    session.commit()
    
    # Close the session
    session.close()
    
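Since the question mentions the ORM's bulk mappings specifically, the same insert can also be done with `Session.bulk_insert_mappings`, which takes the `to_dict(orient='records')` list directly. A minimal self-contained sketch, with an in-memory SQLite database standing in for the real server:

```python
import pandas as pd
from sqlalchemy import create_engine, Column, Integer, String, Numeric
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Current(Base):
    __tablename__ = "tableName"

    id = Column(Integer, primary_key=True)
    Date = Column(String(500))
    Type = Column(String(500))
    Value = Column(Numeric())

# In-memory SQLite stands in for the real database connection
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

df = pd.DataFrame({"Date": ["2020-11-28"], "Type": ["A"], "Value": [1.0]})
listToWrite = df.to_dict(orient="records")

Session = sessionmaker(bind=engine)
session = Session()

# One bulk INSERT from the list of dicts; the auto-increment id is filled in by the db
session.bulk_insert_mappings(Current, listToWrite)
session.commit()

rowcount = session.query(Current).count()
session.close()
```

This skips per-object ORM overhead, which is what makes it faster than adding mapped instances one by one.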
