pyodbc

Connecting SQLAlchemy to MS Access

Submitted by 你说的曾经没有我的故事 on 2019-11-26 17:15:40
Question: How can I connect to MS Access with SQLAlchemy? Their website says the connection string is access+pyodbc. Does that mean I need pyodbc for the connection? Since I am a newbie, please be gentle.

Answer 1: In theory this would be via create_engine("access:///some_odbc_dsn"), but the Access backend hasn't been maintained at all since SQLAlchemy 0.5, and it's not clear how well it worked even back then (this is why it's noted as "development" at http://docs.sqlalchemy.org/en…
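
A third-party sqlalchemy-access dialect (separate from SQLAlchemy itself) does register the access+pyodbc name and rides on pyodbc. A minimal sketch of building its connection URL; the driver name is the standard Windows Access driver and the file path is a placeholder:

```python
from urllib.parse import quote_plus

# ODBC connection string for the Access driver (Windows driver name assumed)
odbc_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\mydb.mdb;"
)

# sqlalchemy-access registers the "access+pyodbc" dialect name
engine_url = "access+pyodbc:///?odbc_connect=" + quote_plus(odbc_str)

# With sqlalchemy and sqlalchemy-access installed (not run here):
# from sqlalchemy import create_engine
# engine = create_engine(engine_url)
```

The quote_plus() step is needed because the raw ODBC string contains characters (`;`, `=`, `{}`) that would otherwise break URL parsing.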

Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC

Submitted by 谁都会走 on 2019-11-26 15:50:34
I would like to send a large pandas.DataFrame to a remote server running MS SQL. The way I do it now is by converting a data_frame object to a list of tuples and then sending it off with pyODBC's executemany() function. It goes something like this:

    import pyodbc as pdb

    list_of_tuples = convert_df(data_frame)

    connection = pdb.connect(cnxn_str)
    cursor = connection.cursor()
    cursor.fast_executemany = True
    cursor.executemany(sql_statement, list_of_tuples)
    connection.commit()

    cursor.close()
    connection.close()

I then started to wonder whether things could be sped up (or at least made more readable) by using data…
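
Since SQLAlchemy 1.3, mssql+pyodbc engines accept a fast_executemany=True flag, which lets DataFrame.to_sql use pyodbc's fast path directly without the hand-rolled tuple conversion. A sketch; the driver name, server, credentials, and table name are placeholders, and the engine calls are commented out because they need a live server:

```python
from urllib.parse import quote_plus

odbc_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
engine_url = "mssql+pyodbc:///?odbc_connect=" + quote_plus(odbc_str)

# With SQLAlchemy >= 1.3 and a reachable server (not run here):
# from sqlalchemy import create_engine
# engine = create_engine(engine_url, fast_executemany=True)
# data_frame.to_sql("my_table", engine, index=False, if_exists="append")
```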

Writing a csv file into SQL Server database using python

Submitted by 半城伤御伤魂 on 2019-11-26 15:33:48
Question: Hi, I am trying to write a csv file into a table in a SQL Server database using Python. I get errors when I pass the parameters, but no error when I do it manually. Here is the code I am executing:

    cur = cnxn.cursor()                            # Get the cursor
    csv_data = csv.reader(open('Samplefile.csv'))  # Read the csv
    for rows in csv_data:                          # Iterate through csv
        cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", rows)
    cnxn.commit()

    Error: pyodbc.DataError: ('22001', '[22001]…
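
SQLSTATE 22001 means "string data, right truncation": some CSV value is longer than the target column allows, so the fix is to widen the column or trim the values. Independent of the error, batching with executemany() and a single commit is also much faster than a per-row execute(). A runnable sketch of the same pattern, using the stdlib sqlite3 module as a stand-in for the pyodbc connection (it shares the qmark paramstyle; the sample data is invented):

```python
import csv
import io
import sqlite3

csv_text = "1,a,b,c\n2,d,e,f\n"
rows = list(csv.reader(io.StringIO(csv_text)))

conn = sqlite3.connect(":memory:")  # stand-in for the pyodbc connection
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (Col1, Col2, Col3, Col4)")
cur.executemany(
    "INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", rows
)
conn.commit()  # one commit for the whole batch
count = cur.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(count)
```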

Can't open lib 'ODBC Driver 13 for SQL Server'? Sym linking issue?

Submitted by 不打扰是莪最后的温柔 on 2019-11-26 15:27:47
Question: When I try to connect to a SQL Server database with pyodbc (on Mac):

    import pyodbc

    server = '####'
    database = '####'
    username = '####@####'
    password = '#####'
    driver = '{ODBC Driver 13 for SQL Server}'

    pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';PORT=1443;DATABASE=' +
                   database + ';UID=' + username + ';PWD=' + password)

I get the following error:

    Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' : file not found (0) (SQLDriverConnect)")

When I path…
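
On macOS/Linux this error usually means unixODBC cannot resolve the name inside DRIVER={...} to a driver shared library. The odbcinst tool shows what is actually registered; the library path in the example entry below is an assumption and must point at wherever the driver was really installed:

```shell
# Show where unixODBC looks for its config files (odbcinst.ini, odbc.ini)
odbcinst -j

# List the drivers it currently knows about;
# the name inside DRIVER={...} must match one of these section names exactly
odbcinst -q -d

# If the driver is missing, register it in odbcinst.ini, e.g.
# (the .dylib path here is a placeholder):
#   [ODBC Driver 13 for SQL Server]
#   Driver = /usr/local/lib/libmsodbcsql.13.dylib
```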

Output pyodbc cursor results as python dictionary

Submitted by ⅰ亾dé卋堺 on 2019-11-26 15:17:17
Question: How do I serialize pyodbc cursor output (from .fetchone, .fetchmany or .fetchall) as a Python dictionary? I'm using bottlepy and need to return a dict so it can be returned as JSON.

Answer 1: If you don't know the columns ahead of time, use cursor.description to build a list of column names and zip it with each row to produce a list of dictionaries. The example assumes the connection and query are already built:

    >>> cursor = connection.cursor().execute(sql)
    >>> columns = [column[0] for column in cursor.description]
    >>> …
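
The excerpt above is cut off mid-answer; the complete zip pattern can be demonstrated runnably with the stdlib sqlite3 module standing in for the pyodbc connection, since its cursors expose .description the same way (the one-row query is invented for illustration):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
cursor = connection.execute("SELECT 1 AS id, 'alice' AS name")

# column[0] of each cursor.description entry is the column name
columns = [column[0] for column in cursor.description]
results = [dict(zip(columns, row)) for row in cursor.fetchall()]
print(results)
```

The resulting list of plain dicts is directly serializable with json.dumps, which is what a bottlepy route needs.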

How to connect pyodbc to an Access (.mdb) Database file

Submitted by 被刻印的时光 ゝ on 2019-11-26 14:47:45
Question: Here's what I've tried:

- find Vista's ODBC Data Source Manager through search,
- add a new File Data Source, selecting "Driver for Microsoft Access (*.mdb)" and pointing it at my .mdb file of interest,
- import pyodbc from a Python shell and try pyodbc.connect("DSN=<that Data Source I just created>").

I get the following error message (in Portuguese):

    Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Nome da fonte de dados não encontrado e nenhum driver padrão especificado (0)…

(in English: "Data source name not found and no default driver specified")
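
An alternative to fighting the DSN manager is a DSN-less connection string that names the driver and file directly. A sketch; the driver name is the standard Access driver name on Windows and the path is a placeholder, so the connect call is commented out:

```python
# DSN-less connection string for the Microsoft Access ODBC driver (Windows)
conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\full\path\to\mydb.mdb;"
)

# With the Access ODBC driver installed (not run here):
# import pyodbc
# conn = pyodbc.connect(conn_str)
```

Note the driver must match the Python interpreter's bitness: a 32-bit Access driver is invisible to a 64-bit Python.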

How to speed up bulk insert to MS SQL Server from CSV using pyodbc

Submitted by 旧巷老猫 on 2019-11-26 13:05:43
Below is my code that I'd like some help with. I have to run it over 1,300,000 rows, which means it takes up to 40 minutes to insert ~300,000 rows. I figure a bulk insert is the route to go to speed it up? Or is it slow because I'm iterating over the rows via the for data in reader: portion?

    # Opens the prepped csv file
    with open(os.path.join(newpath, outfile), 'r') as f:
        # hooks csv reader to file
        reader = csv.reader(f)
        # pulls out the columns (which match the SQL table)
        columns = next(reader)
        # trims any extra spaces
        columns = [x.strip(' ') for x in columns]
        # starts SQL statement
        query = 'bulk insert into…
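
BULK INSERT is a single T-SQL statement executed server-side rather than a Python loop, so the CSV path must be readable by the SQL Server machine itself, not the client. A sketch of issuing it through pyodbc; the table name and path are placeholders, and the FORMAT = 'CSV' option assumes SQL Server 2017 or later:

```python
server_side_csv = r"C:\path\on\the\server\data.csv"  # must be visible to SQL Server

sql = (
    f"BULK INSERT dbo.MyTable FROM '{server_side_csv}' "
    "WITH (FORMAT = 'CSV', FIRSTROW = 2, TABLOCK)"
)

# With an open pyodbc connection (not run here):
# cursor.execute(sql)
# connection.commit()
print(sql)
```

FIRSTROW = 2 skips the header row, matching the next(reader) call in the original loop.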

pyodbc - How to perform a select statement using a variable for a parameter

Submitted by 扶醉桌前 on 2019-11-26 11:24:00
Question: I'm trying to iterate through all the rows in a table named Throughput, but only for a specific DeviceName (which I have stored in data['DeviceName']). I've tried the following, but it doesn't work:

    for row in cursor.execute("select * from Throughput where DeviceName=%s"), %(data['DeviceName']):

EDIT: I also tried this, but it doesn't work either:

    for row in cursor.execute("select * from Throughput where(DeviceName), values(?)", (data['DeviceName']) ):

EDIT2: A snippet of my final working code: …
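
pyodbc uses qmark (?) placeholders, and the parameters go inside the execute() call as a sequence, not concatenated after it. The stdlib sqlite3 module shares this paramstyle, so the corrected pattern can be shown runnably (the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Throughput (DeviceName TEXT, Mbps REAL)")
conn.executemany(
    "INSERT INTO Throughput VALUES (?, ?)",
    [("router1", 95.2), ("router2", 88.1)],
)

data = {"DeviceName": "router1"}
rows = conn.execute(
    "SELECT * FROM Throughput WHERE DeviceName = ?",
    (data["DeviceName"],),  # trailing comma: parameters must be a sequence
).fetchall()
print(rows)
```

The one-element tuple matters: passing the string bare would make each character a separate parameter.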

Pyodbc - “Data source name not found, and no default driver specified”

Submitted by 半世苍凉 on 2019-11-26 09:28:19
Question: I'm having trouble getting pyodbc to work. I have the unixodbc, unixodbc-dev, odbc-postgresql, and pyodbc packages installed on my Linux Mint 14. I'm losing hope of finding a solution on my own; any help is appreciated. Details below.

Running:

    >>> import pyodbc
    >>> conn = pyodbc.connect("DRIVER={PostgreSQL};SERVER=localhost;DATABASE=test;USER=openerp;OPTION=3;")

gives me:

    pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0)…
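
DRIVER={PostgreSQL} only works if a driver section literally named PostgreSQL exists in odbcinst.ini; the odbc-postgresql package typically registers names like "PostgreSQL ANSI" and "PostgreSQL Unicode" instead (those exact names are typical, not guaranteed). Two quick diagnostics:

```shell
# Show which driver names unixODBC has registered
odbcinst -q -d

# Show where its config files live, in case the package wrote to a different one
odbcinst -j

# Then use one of the listed names verbatim, e.g.
#   DRIVER={PostgreSQL Unicode};SERVER=localhost;DATABASE=test;...
```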

python pip specify a library directory and an include directory

Submitted by 邮差的信 on 2019-11-26 06:19:32
Question: I am using pip and trying to install a Python module called pyodbc, which depends on non-Python libraries like unixodbc-dev, unixodbc-bin, and unixodbc. I cannot install these dependencies system-wide at the moment, as I am only experimenting, so I have installed them in a non-standard location. How do I tell pip where to look for these dependencies? More precisely, how do I pass the include dirs (gcc -I) and library dirs (gcc -L, -l) through pip so they are used when building the…
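
Two common approaches: pass -I/-L to the build_ext step through pip, or export CFLAGS/LDFLAGS so the compiler invoked by setup.py picks them up. The /opt/unixodbc prefix below is a placeholder for wherever the libraries were actually installed, and note that --global-option is deprecated in recent pip releases, so the environment-variable route is the more durable one:

```shell
# Option 1: extra compiler/linker search paths via build_ext (older pip)
pip install pyodbc \
    --global-option=build_ext \
    --global-option="-I/opt/unixodbc/include" \
    --global-option="-L/opt/unixodbc/lib"

# Option 2: let the compiler environment carry the paths
CFLAGS="-I/opt/unixodbc/include" LDFLAGS="-L/opt/unixodbc/lib" pip install pyodbc
```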