pandas-to-sql

pyodbc to sqlalchemy connection

心已入冬 submitted on 2021-02-19 07:16:26

Question: I am trying to switch a pyodbc connection to SQLAlchemy. The working pyodbc connection is:

```python
import pyodbc

con = 'DRIVER={ODBC Driver 11 for SQL Server};SERVER=server.com\pro;DATABASE=DBase;Trusted_Connection=yes'
cnxn = pyodbc.connect(con)
cursor = cnxn.cursor()
query = "Select * from table"
cursor.execute(query)
```

I tried:

```python
from sqlalchemy import create_engine

dns = 'mssql+pyodbc://server.com\pro/DBase?driver=SQL+Server'
engine = create_engine(dns)
engine.execute('Select * from table').fetchall()
```
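A minimal sketch of one way to carry the working pyodbc settings over to SQLAlchemy: rather than rebuilding the `server\instance` name and driver inside URL syntax, pass the already-working ODBC string through the `odbc_connect` query parameter. The server/database names below are the ones from the question; treat the exact URL shape as an assumption to verify against your SQLAlchemy version.

```python
from urllib.parse import quote_plus

# The same attributes as the working pyodbc string; the r-prefix keeps
# the backslash in the server\instance name intact.
odbc_str = (
    "DRIVER={ODBC Driver 11 for SQL Server};"
    r"SERVER=server.com\pro;"
    "DATABASE=DBase;"
    "Trusted_Connection=yes"
)
# Tunnel the raw ODBC string through odbc_connect so nothing in it is
# misread as URL syntax (the backslash and braces would be otherwise).
url = "mssql+pyodbc:///?odbc_connect=" + quote_plus(odbc_str)
# from sqlalchemy import create_engine
# engine = create_engine(url)
```

This also sidesteps the driver mismatch: the URL in the question asked for `driver=SQL+Server` while the working connection used "ODBC Driver 11 for SQL Server".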

I Get TypeError: cannot use a string pattern on a bytes-like object when using to_sql on dataframe python 3

可紊 submitted on 2021-02-10 22:19:09

Question: Hi, I am trying to write a dataframe to my SQL database using df.to_sql, but I am getting the error message: TypeError: cannot use a string pattern on a bytes-like object. I am using Python 3. I am using a path on my drive which I unfortunately cannot share. But it works fine when I just want to open the csv file using:

```python
df = pd.read_csv(path, delimiter=';', engine='python', low_memory=True, encoding='utf-8-sig')
```

I am using the encoding item because otherwise there is a strange object at my
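That TypeError typically appears when something reaching the SQL layer is `bytes` rather than `str`. A minimal sketch of the decode-before-write idea, using a hypothetical frame with a bytes column and SQLite standing in for the real database:

```python
import sqlite3

import pandas as pd

# Hypothetical frame standing in for whatever the CSV produced;
# the "name" column holds bytes, which trips up to_sql.
df = pd.DataFrame({"name": [b"alice", b"bob"], "age": [30, 40]})

# Decode any bytes values to str so pandas' SQL layer only sees text.
for col in df.columns:
    if df[col].map(lambda v: isinstance(v, bytes)).any():
        df[col] = df[col].str.decode("utf-8-sig")

with sqlite3.connect(":memory:") as con:
    df.to_sql("people", con, index=False)
    out = pd.read_sql("SELECT * FROM people", con)
```

If the error persists with all-text data, the other usual suspect is a bytes connection string or URL passed to the engine rather than the data itself.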

How to write pandas dataframe to oracle database using to_sql?

て烟熏妆下的殇ゞ submitted on 2020-02-26 18:36:11

Question: I'm a new Oracle learner. I'm trying to write a pandas dataframe into an Oracle table. After researching online, I found the code itself is very simple, but I don't know why my code doesn't work. I have read the pandas dataframe from my local file:

```python
import cx_Oracle
import pandas as pd
import os

dir_path = os.path.dirname(os.path.realpath("__file__"))
df = pd.read_csv(dir_path + "/sample.csv")
```

Now, printing df, the dataframe should look like this: DATE YEAR MONTH SOURCE DESTINATION 0 11/1
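The usual stumbling block here is that `to_sql` expects a SQLAlchemy connectable for Oracle; a raw cx_Oracle connection is only supported for SQLite. A sketch of the call shape (the Oracle credentials are hypothetical; SQLite is used for the runnable part so the snippet executes anywhere):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"DATE": ["11/1"], "YEAR": [2019], "MONTH": [11]})

# For Oracle, build a SQLAlchemy engine and hand that to to_sql
# (hypothetical credentials/DSN shown):
#   from sqlalchemy import create_engine
#   engine = create_engine("oracle+cx_oracle://user:pwd@host:1521/?service_name=orcl")
#   df.to_sql("sample_table", engine, if_exists="replace", index=False)

# The same call shape, demonstrated against SQLite:
with sqlite3.connect(":memory:") as con:
    df.to_sql("sample_table", con, if_exists="replace", index=False)
    n = pd.read_sql("SELECT COUNT(*) AS n FROM sample_table", con)["n"][0]
```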

append the data to already existing table in pandas using to_sql

不打扰是莪最后的温柔 submitted on 2020-01-11 05:44:05

Question: I have the following data frame:

```
ipdb> csv_data
  country    sale        date  trans_factor
0   India  403171  12/01/2012             1
1  Bhutan  394096  12/01/2012             2
2   Nepal   super  12/01/2012             3
3  madhya  355883  12/01/2012             4
4   sudan     man  12/01/2012             5
```

As of now I am using the code below to insert data into the table: if the table already exists, drop it and create a new table.

```python
csv_file_path = data_mapping_record.csv_file_path
original_csv_header = pandas.read_csv(csv_file_path).columns.tolist()
csv_data = pandas.read_csv(csv_file_path,
```
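The drop-and-recreate behavior is `to_sql`'s default (`if_exists='fail'`/`'replace'` semantics); switching to `if_exists='append'` keeps the existing table and adds rows. A small runnable sketch, with SQLite standing in for the real database and a trimmed-down version of the frame above:

```python
import sqlite3

import pandas as pd

csv_data = pd.DataFrame({
    "country": ["India", "Bhutan"],
    "sale": [403171, 394096],
})

with sqlite3.connect(":memory:") as con:
    # The first call creates the table; with if_exists="append", later
    # loads add rows instead of dropping and recreating it.
    csv_data.to_sql("sales", con, if_exists="append", index=False)
    csv_data.to_sql("sales", con, if_exists="append", index=False)
    total = pd.read_sql("SELECT COUNT(*) AS n FROM sales", con)["n"][0]
```

Note that `append` assumes the incoming columns match the existing table's schema; mixed values like `'super'` in a numeric `sale` column will force the column to text.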

Pandas to_sql returning 'relation already exists' error when using if_exists='append'

烈酒焚心 submitted on 2019-12-25 02:54:02

Question: I am trying to insert a data frame daily into a table in Redshift. The to_sql command works to create the table, but returns an error when I try to append to the existing table, even when using the if_exists='append' argument.

Versions: pandas 0.23.4, sqlalchemy 1.2.15, psycopg2 2.7.6.1, Python 3.6.7.

I am also using the monkey patch to speed up inserts outlined here: https://github.com/pandas-dev/pandas/issues/8953; without this patch the insert takes prohibitively long (several hours).
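One commonly reported cause of "relation already exists" with `if_exists='append'` is that pandas' table-existence check and the actual CREATE/INSERT end up pointing at different schemas, so the CREATE is re-attempted. A sketch of the call with the schema named explicitly; the table and schema names are hypothetical, and `engine` is assumed to be an existing SQLAlchemy engine for Redshift:

```python
# Sketch of the call only; nothing here connects to a database.
to_sql_kwargs = dict(
    name="daily_table",   # hypothetical table name
    schema="public",      # make the existence check and the INSERT agree
    if_exists="append",
    index=False,
)
# df.to_sql(con=engine, **to_sql_kwargs)
```

If a monkey patch is in play, it is also worth confirming the error reproduces without it, since the patch replaces the very code path that decides whether to CREATE.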

Pandas 0.20.2 to_sql() using MySQL

自作多情 submitted on 2019-12-24 21:15:33

Question: I'm trying to write a dataframe to a MySQL table but am getting a (111 Connection refused) error. I followed the accepted answer here: Writing to MySQL database with pandas using SQLAlchemy, to_sql. The answer's code:

```python
import pandas as pd
import mysql.connector
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False)
data.to_sql(name='sample_table2', con=engine, if_exists='append', index=False)
```

...and the create_engine(
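"(111 Connection refused)" means nothing is listening at the host:port baked into the URL, so it is worth spelling the parts out rather than leaving bracketed placeholders in place. A sketch with hypothetical credentials:

```python
# Hypothetical values; replace with your actual MySQL host and account.
user, password, host, port, schema = "appuser", "secret", "127.0.0.1", 3306, "mydb"
url = f"mysql+mysqlconnector://{user}:{password}@{host}:{port}/{schema}"
# from sqlalchemy import create_engine
# engine = create_engine(url, echo=False)
# data.to_sql(name="sample_table2", con=engine, if_exists="append", index=False)
```

If the URL is correct and the error persists, check that MySQL is actually listening on that port (e.g. bound to `0.0.0.0` rather than a Unix socket only) before suspecting pandas.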

Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC

孤人 submitted on 2019-12-17 02:41:48

Question: I would like to send a large pandas.DataFrame to a remote server running MS SQL. The way I do it now is by converting a data_frame object to a list of tuples and then sending it off with pyODBC's executemany() function. It goes something like this:

```python
import pyodbc as pdb

list_of_tuples = convert_df(data_frame)

connection = pdb.connect(cnxn_str)
cursor = connection.cursor()
cursor.fast_executemany = True

cursor.executemany(sql_statement, list_of_tuples)
connection.commit()
cursor.close()
```
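The hand-rolled conversion can usually be dropped: register a `before_cursor_execute` listener on the engine that flips `fast_executemany` on, and let `DataFrame.to_sql` do the batching. A sketch; SQLite stands in for the real `mssql+pyodbc` engine here only so the snippet runs without a SQL Server instance (the event hook is registered the same way):

```python
from sqlalchemy import create_engine, event

# For the real job this would be an mssql+pyodbc engine built from your
# connection string; SQLite lets the registration itself be demonstrated.
engine = create_engine("sqlite://")

@event.listens_for(engine, "before_cursor_execute")
def set_fast_executemany(conn, cursor, statement, params, context, executemany):
    # On a pyODBC cursor this flips on the fast executemany path, which
    # to_sql's batched INSERTs then use automatically.
    if executemany:
        cursor.fast_executemany = True

# data_frame.to_sql("target_table", engine, if_exists="append",
#                   index=False, chunksize=1000)
```

Newer SQLAlchemy versions also accept `create_engine(..., fast_executemany=True)` directly for the `mssql+pyodbc` dialect, which makes the listener unnecessary.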

Pandas to_sql 'append' to an existing table causes Python crash

杀马特。学长 韩版系。学妹 submitted on 2019-12-13 03:34:45

Question: My problem is essentially this: when I try to use to_sql with if_exists='append' and name set to a table on my SQL Server that already exists, Python crashes. This is my code:

```python
@event.listens_for(engine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    if executemany:
        cursor.fast_executemany = True

df.to_sql(name='existingSQLTable', con=engine, if_exists='append',
          index=False, chunksize=10000, dtype=dataTypes)
```

I
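One frequently reported cause of hard crashes with `fast_executemany` is memory: pyODBC allocates the whole parameter buffer for each batch up front, so a large `chunksize` on a wide frame can exhaust memory and kill the interpreter. A sketch of sizing the chunk down; the column count below is hypothetical, and the 2100 figure is SQL Server's per-statement parameter limit, which matters if you instead switch to `method="multi"`:

```python
import math

# Hypothetical width of the frame being written.
n_cols = 15

# Keep each batch small; this bound also respects SQL Server's
# 2100-parameter limit per statement when method="multi" is used.
safe_chunksize = math.floor(2100 / n_cols) - 1

# df.to_sql(name="existingSQLTable", con=engine, if_exists="append",
#           index=False, chunksize=safe_chunksize, dtype=dataTypes)
```

If a much smaller chunksize still crashes, try the same insert once with `fast_executemany` disabled to separate the pyODBC fast path from schema or dtype mismatches against the existing table.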