pyodbc

Connect SQL Server to Python 3 with pyodbc

Submitted by 此生再无相见时 on 2019-12-11 11:56:42

Question: I am trying to use Python 3 with the pyodbc module to connect to SQL Server Express:

import pyodbc
cnxn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=LENOVO-PCN;DATABASE=testing;')
cursor = cnxn.cursor()
cursor.execute("select Sales from Store_Inf")
row = cursor.fetchone()
if row:
    print(row)

My code gives this error:

('08001', '[08001] [Microsoft][SQL Server Native Client 11.0]Named Pipes Provider: Could not open a connection to SQL Server [2]. (2) (SQLDriverConnect)')

Any ideas?
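One thing worth checking (a hedged sketch, not a confirmed fix for this machine): SQL Server Express usually installs as a named instance, so the SERVER value often needs the instance name, and TCP/IP has to be enabled in SQL Server Configuration Manager so the connection does not fall back to the Named Pipes provider mentioned in the error. The SQLEXPRESS instance name and the use of Windows authentication below are assumptions:

import pyodbc

# Assumption: the Express install uses the default named instance "SQLEXPRESS"
# and Windows authentication; the tcp: prefix forces TCP instead of Named Pipes.
conn_str = (
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=tcp:LENOVO-PCN\\SQLEXPRESS;"
    "DATABASE=testing;"
    "Trusted_Connection=yes;"
)
cnxn = pyodbc.connect(conn_str, timeout=5)
cursor = cnxn.cursor()
cursor.execute("select Sales from Store_Inf")
print(cursor.fetchone())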

Create SQL Server temporary tables inside Python script

Submitted by 非 Y 不嫁゛ on 2019-12-11 10:14:52

Question: I'm using pypyodbc to connect to SQL Server. I want to store my result set in a temporary table, the way I would in SQL, but every time I try I get this error message:

pypyodbc.ProgrammingError: ('24000', '[24000] [Microsoft][ODBC SQL Server Driver]Invalid cursor state')

This is what I'm trying to run:

querytest = "SELECT id into #temp from Team"
cursor1.execute(querytest)
var = cursor1.fetchall()
print(var[:10])

Answer 1: The query SELECT id into #temp from Team does not return a result set, so there is nothing for fetchall() to fetch.
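A minimal sketch of the usual workaround (the connection details here are hypothetical): run the SELECT ... INTO on its own, which creates and fills the temp table without returning rows, and then read the temp table back with a second SELECT on the same connection.

import pypyodbc

# Hypothetical connection string; substitute your own server and database.
conn = pypyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor1 = conn.cursor()

# SELECT ... INTO creates and fills #temp but returns no rows, which is why
# fetching straight after it raises "Invalid cursor state".
cursor1.execute("SELECT id INTO #temp FROM Team")

# The temp table lives as long as this connection does, so a second SELECT
# on the same connection can read it back.
cursor1.execute("SELECT id FROM #temp")
var = cursor1.fetchall()
print(var[:10])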

Querying from Microsoft SQL to a Pandas Dataframe

Submitted by ∥☆過路亽.° on 2019-12-11 09:26:09

Question: I am trying to write a program in Python 3 that runs a query against a table in Microsoft SQL Server and puts the results into a Pandas DataFrame. My first attempt is the code below, but for reasons I don't understand the columns do not come back in the order I listed them in the query, and both their order and their labels change from run to run, breaking the rest of my program:

import pandas as pd, pyodbc
result_port_mapl = []
# Use pyodbc to connect to SQL Database
con_string =
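The usual way to keep the column order stable (a sketch with hypothetical connection details, table and column names): let pandas read directly from the query, or build the DataFrame from cursor.description, which lists the columns in the order the query returned them. Collecting the rows into a dict first is what typically scrambles the order.

import pandas as pd
import pyodbc

# Hypothetical connection string; replace with your own.
con_string = "DRIVER={SQL Server Native Client 11.0};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
cnxn = pyodbc.connect(con_string)

query = "SELECT PortNumber, PortName, SiteId FROM PortMapping"  # hypothetical query

# pd.read_sql keeps the columns in the order the query returns them.
df = pd.read_sql(query, cnxn)

# Equivalent manual route: cursor.description holds the column names in query order.
cursor = cnxn.cursor()
cursor.execute(query)
columns = [col[0] for col in cursor.description]
rows = [tuple(r) for r in cursor.fetchall()]
df_manual = pd.DataFrame.from_records(rows, columns=columns)

print(df.columns.tolist())
print(df_manual.columns.tolist())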

Stored Procedure Multiple Tables - PYODBC - Python

Submitted by 大城市里の小女人 on 2019-12-11 09:03:19

Question: I am trying to execute a stored procedure that produces 20 different table outputs. These outputs range from 3-6 columns and 10-100 rows. If not with pyodbc, how else can I iterate through all of these tables when they do not share the same structure?

connection = pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=dsdrsossql2;DATABASE=TableauDev;Trusted_Connection=yes;')
sql = "{call dbo.DGGrading}"
cur = connection.cursor()
rows = cur.execute(sql).fetchall()
columns = [column[0] for column in cur.description]
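pyodbc can do this with cursor.nextset(), which advances to the next result set the procedure returned. A sketch along those lines, reusing the connection string and procedure call from the question (the pandas step is just one convenient way to hold result sets of different shapes):

import pandas as pd
import pyodbc

connection = pyodbc.connect(
    r"DRIVER={SQL Server Native Client 11.0};SERVER=dsdrsossql2;DATABASE=TableauDev;Trusted_Connection=yes;"
)
cur = connection.cursor()
cur.execute("{call dbo.DGGrading}")

frames = []
while True:
    # description describes only the result set currently being read, so each
    # table keeps its own column list even though the shapes differ.
    if cur.description is not None:
        columns = [col[0] for col in cur.description]
        rows = [tuple(r) for r in cur.fetchall()]
        frames.append(pd.DataFrame.from_records(rows, columns=columns))
    # nextset() returns a falsy value once there are no more result sets.
    if not cur.nextset():
        break

print(len(frames))  # expected: one DataFrame per table the procedure returns

If the procedure also emits row-count messages, adding SET NOCOUNT ON inside it often keeps those from getting in the way of the result sets.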

pyodbc return multiple cursors from stored procedure with DB2

Submitted by 穿精又带淫゛_ on 2019-12-11 07:43:41

Question: I have a Python program that calls a stored procedure in a DB2 database. I am using results = cursor.fetchall() to process the results of my stored procedure. However, my stored procedure returns two cursors, and results only contains the first one. I need a way to loop through as many cursors as I want. I was hoping fetchmany() would be my answer, but it is not. I need to be able to handle multiple result sets, as the program I am writing can only call one stored procedure. It would take a lot to go
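The same nextset() loop applies here too, under the assumption that the DB2 ODBC/CLI driver exposes the procedure's open cursors as successive result sets (the DSN and procedure name below are hypothetical):

import pyodbc

conn = pyodbc.connect("DSN=MYDB2;UID=user;PWD=password")  # hypothetical DSN
cursor = conn.cursor()
cursor.execute("CALL MYSCHEMA.MY_PROC()")                 # hypothetical procedure

result_sets = []
while True:
    result_sets.append(cursor.fetchall())  # rows from the cursor currently open
    if not cursor.nextset():               # move on to the next returned cursor
        break

print(len(result_sets))  # one entry per cursor the procedure returned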

pandas.read_sql() is MUCH slower when using SQLAlchemy than pyodbc

Submitted by 谁说我不能喝 on 2019-12-11 07:26:04

Question: I am trying to read a small table from SQL Server, and I am looking into switching from pyodbc to SQLAlchemy so that I can use pd.to_sql(). When I compare the two, the SQLAlchemy version is much slower.

s_py = """\
import pandas as pd
import pyodbc
cxn = pyodbc.connect('DRIVER={SQL SERVER};SERVER=.\;DATABASE=PPIS;UID=sa;PWD=pwd')
"""
s_alch = """\
import pandas as pd
import sqlalchemy
cxn = sqlalchemy.create_engine("mssql+pyodbc://sa:pwd@./PPIS?driver=SQL+Server")
"""
timeit.timeit('pd.read_sql("SELECT *
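One way to narrow down where the time goes (a sketch, not a diagnosis of this specific setup; the table name is hypothetical and the credentials are the placeholders from the question): time the same query against the plain pyodbc connection, the SQLAlchemy engine, and the engine's underlying raw DBAPI connection. If the raw connection is as fast as plain pyodbc, the extra time is being spent in SQLAlchemy's result-processing layer rather than in the driver.

import timeit

setup_pyodbc = """\
import pandas as pd
import pyodbc
cxn = pyodbc.connect('DRIVER={SQL Server};SERVER=.;DATABASE=PPIS;UID=sa;PWD=pwd')
"""
setup_alch = """\
import pandas as pd
import sqlalchemy
engine = sqlalchemy.create_engine("mssql+pyodbc://sa:pwd@./PPIS?driver=SQL+Server")
raw = engine.raw_connection()  # the pyodbc connection SQLAlchemy wraps
"""
stmt = 'pd.read_sql("SELECT * FROM SomeSmallTable", {})'  # hypothetical table

print(timeit.timeit(stmt.format("cxn"), setup=setup_pyodbc, number=10))
print(timeit.timeit(stmt.format("engine"), setup=setup_alch, number=10))
print(timeit.timeit(stmt.format("raw"), setup=setup_alch, number=10))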

Speed up insert to SQL Server from CSV file without using BULK INSERT or pandas to_sql

Submitted by 那年仲夏 on 2019-12-11 07:25:19

Question: I want to put a Pandas DataFrame as a whole into a table in an MS SQL Server database. BULK INSERT is not allowed for ordinary users like myself. I am using pyodbc to connect to my database, with Pandas 0.13.1. I read somewhere that the to_sql method is only available from version 0.14 onwards, so it is unavailable for my DataFrame. Therefore I used an iterator. My DataFrame has two columns, Col1 and Col2, and my working code looks like this:

from pyodbc import connect
import pandas as pd
df =
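A sketch of the usual way to speed this up without BULK INSERT or to_sql: build one parameterized INSERT and feed all the rows to executemany, committing once at the end, rather than executing and committing each row separately. The connection string, target table name, and sample data below are placeholders.

from pyodbc import connect
import pandas as pd

df = pd.DataFrame({"Col1": [1, 2, 3], "Col2": ["a", "b", "c"]})  # placeholder data

cnxn = connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;")
cursor = cnxn.cursor()

# One prepared statement, many parameter rows: far fewer round trips than a
# separate execute/commit per row of the DataFrame.
params = [tuple(row) for row in df.values]
cursor.executemany("INSERT INTO TargetTable (Col1, Col2) VALUES (?, ?)", params)
cnxn.commit()

On pyodbc 4.0.19 and later, setting cursor.fast_executemany = True before the executemany call speeds this up considerably, although that option did not exist when the question was asked.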

'ascii' codec can't encode character error

Submitted by ﹥>﹥吖頭↗ on 2019-12-11 06:42:55

Question: I would appreciate help tackling an error. I am trying to save MS Access database tables as CSV files using Python, and I keep running into an error I do not know how to fix. I have looked through various posts on Stack Overflow and tried them, but nothing has worked.

import pyodbc
import csv
conn_string = ("DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Access\\permissions.accdb")
conn = pyodbc.connect(conn_string)
cursor = conn.cursor()
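The 'ascii' codec error normally means the CSV file was opened without an encoding that can represent the non-ASCII values coming out of Access. A sketch of the export assuming Python 3 (the table name is hypothetical; on Python 2 the fix would instead involve io.open or encoding each field by hand):

import pyodbc
import csv

conn_string = "DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Access\\permissions.accdb"
conn = pyodbc.connect(conn_string)
cursor = conn.cursor()

cursor.execute("SELECT * FROM Permissions")  # hypothetical table name

# Opening the file with an explicit UTF-8 encoding is what avoids the
# 'ascii' codec error when a value contains non-ASCII characters.
with open("C:\\Access\\Permissions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor.fetchall())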

How do you use pyodbc in Azure Machine Learning Workbench

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-12-11 06:09:18

Question: I'm trying to use pyodbc to import a DataFrame in Azure ML Workbench. This works in local runs, but not in Docker: it fails when trying to establish a connection to the SQL Server because the driver is not present.

cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+password)

Error message:

pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' :
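Since the driver manager cannot even open the driver library, a quick first step is to check what is actually registered inside the Docker image and connect with a driver that exists there. This is a sketch; the fallback logic and the server/database values are assumptions, and if no SQL Server driver is listed at all, the msodbcsql package has to be installed into the image.

import pyodbc

available = pyodbc.drivers()  # driver names unixODBC knows about in this image
print(available)

driver = "ODBC Driver 13 for SQL Server"
if driver not in available:
    # Assumption: fall back to any other SQL Server driver the image ships.
    candidates = [d for d in available if "SQL Server" in d]
    if candidates:
        driver = candidates[0]

conn_str = (
    "DRIVER={" + driver + "};"
    "SERVER=myserver.database.windows.net;"  # hypothetical server
    "PORT=1433;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cnxn = pyodbc.connect(conn_str)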

single database connection throughout the python application (following singleton pattern)

Submitted by 时光怂恿深爱的人放手 on 2019-12-11 05:57:57

Question: What is the best way to maintain a single database connection across the entire application? Using the singleton pattern? How? The conditions that need to be handled are:
- For multiple requests, the same connection should be reused.
- If the connection is closed, a new connection should be created.
- If the connection has timed out, a new request should create a new connection.
The driver for my database is not supported by the Django ORM, and due to the same driver-related issues, I
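A minimal sketch of that idea, assuming pyodbc and a hypothetical connection string: keep one connection at class level, check that it is still alive before handing it out, and reconnect whenever the check fails because the connection was closed or timed out.

import pyodbc

class Database:
    """Singleton-style holder for one shared pyodbc connection."""

    _conn = None
    # Hypothetical connection string; replace with the real driver and DSN.
    _conn_string = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=password")

    @classmethod
    def connection(cls):
        # Reuse the existing connection while it is still usable; otherwise
        # (closed, timed out, or never created) open a fresh one.
        if cls._conn is not None:
            try:
                cls._conn.cursor().execute("SELECT 1")  # cheap liveness probe
                return cls._conn
            except pyodbc.Error:
                cls._conn = None
        cls._conn = pyodbc.connect(cls._conn_string, timeout=5)
        return cls._conn

# Every request asks the class for the connection instead of opening its own.
cursor = Database.connection().cursor()

One shared connection is not safe to use from multiple threads at once without extra locking, so for a multi-threaded service a small connection pool is usually the better trade-off.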