pyodbc

Passing Parameters to Stored Procedures using PyODBC

Submitted by 做~自己de王妃 on 2019-12-01 11:03:35
I'm using pyODBC to connect to a SQL Server 2005 Express database. I've created a stored procedure in SQL Server Express that takes two string parameters, e.g. stored_proc(input1, input2); in the database these parameters are of type datetime. I have tested the stored proc using Management Studio and it returns an appropriate result. However, when I try to call the stored procedure from Python (I'm using Eclipse) I get the error:

    pyodbc.DataError: ('22018', '[22018] [Microsoft][SQL Native Client]Invalid character value for cast specification (0) (SQLExecDirectW)')

2/9/2011 12:00:03 2/9/2011 12:20:03 The function I
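A common cause of that '22018' error is passing date/time values as strings that the driver then has to cast. Below is a minimal sketch of calling the procedure with real datetime objects; stored_proc and connstr are placeholders taken from the question, and this is an illustration rather than a confirmed fix.

    import pyodbc
    from datetime import datetime

    conn = pyodbc.connect(connstr)   # connstr: placeholder connection string
    cursor = conn.cursor()

    # Pass datetime objects instead of strings such as "2/9/2011 12:00:03",
    # so the driver does not need to cast character data to datetime.
    start = datetime(2011, 9, 2, 12, 0, 3)
    end = datetime(2011, 9, 2, 12, 20, 3)
    cursor.execute("{CALL stored_proc (?, ?)}", (start, end))
    rows = cursor.fetchall()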

Parameterized query with pyodbc and mysql8 returns 0 for columns with int data types

Submitted by 大兔子大兔子 on 2019-12-01 07:35:16
Question: Python 2.7.12; pyodbc 4.0.24; OS Ubuntu 16.04; DB MySQL 8; driver MySQL 8. Expected behaviour: the result set should have numbers in columns with datatype int. Actual behaviour: all of the columns with int data type contain 0s (if a parameterised query is used). Here are the queries:

    1. cursor.execute("SELECT * FROM TABLE where id =7")

Result set:

    [(7, 1, None, 1, u'An', u'Zed', None, u'Ms', datetime.datetime(2016, 12, 20, 0, 0), u'F', u'Not To Be Disclosed', None, None, u'SPRING', None, u'4000', datetime
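For contrast, the parameterised form of the same lookup is sketched below; the table and column names come from the excerpt above, and whether the zeros are caused by the MySQL 8 ODBC driver's parameter binding is not confirmed here.

    # Literal query, which returns the expected integers in the excerpt above:
    cursor.execute("SELECT * FROM TABLE where id = 7")
    print(cursor.fetchall())

    # Equivalent parameterised query, the form reported to return 0 in the int columns:
    cursor.execute("SELECT * FROM TABLE where id = ?", (7,))
    print(cursor.fetchall())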

Install unixODBC >= 2.3.1 on Linux Redhat/CentOS for msodbcsql17

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-01 06:50:26
I am trying to install msodbcsql17 on AWS EC2 with CentOS/RedHat (Linux). These are the steps I have followed, from Microsoft (LINK):

    sudo su
    # Download the appropriate package for the OS version.
    # Choose only ONE of the following, corresponding to your OS version.
    # RedHat Enterprise Server 6
    curl https://packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
    # RedHat Enterprise Server 7
    curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/mssql-release.repo
    exit
    sudo yum remove unixODBC-utf16 unixODBC-utf16-devel  # to avoid conflicts
    sudo ACCEPT
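Once the driver package is in place, a quick way to confirm that the ODBC layer can see it is to ask pyodbc for the registered drivers; this assumes pyodbc itself is already installed, and the exact driver name depends on the installed version.

    import pyodbc

    # Lists the ODBC drivers registered with unixODBC; after a successful install
    # this should include an entry such as 'ODBC Driver 17 for SQL Server'.
    print(pyodbc.drivers())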

Inserting pyodbc.Binary data (BLOB) into SQL Server image column

Submitted by 百般思念 on 2019-12-01 04:34:01
Question: I am trying to insert binary data into a column of image datatype in a SQL Server database. I know varbinary(max) is the preferred data type, but I don't have rights to alter the schema. Anyhow, I am reading the contents of a file and wrapping it in pyodbc.Binary() as below:

    f = open('Test.ics', 'rb')
    ablob = f.read().encode('hex')
    ablob = pyodbc.Binary(ablob)

When I print repr(ablob) I see the correct value bytearray(b'424547494e3a5 . . . (ellipsis added). However, after inserting insertSQL
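Note that .encode('hex') turns the file contents into a hex text string, so what gets stored is the hex text rather than the original bytes. A minimal sketch of passing the raw bytes instead is below; the table and column names are hypothetical, not from the question, and an existing cursor/cnxn is assumed.

    import pyodbc

    with open('Test.ics', 'rb') as f:
        data = f.read()                  # raw bytes, no hex encoding

    # Wrap the raw bytes and let the driver send them as binary data.
    # Table and column names here are invented for illustration only.
    cursor.execute(
        "INSERT INTO Attachments (FileName, FileData) VALUES (?, ?)",
        ('Test.ics', pyodbc.Binary(data)),
    )
    cnxn.commit()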

Python is slow when iterating over a large list

Submitted by 眉间皱痕 on 2019-12-01 03:16:32
I am currently selecting a large list of rows from a database using pyodbc. The result is then copied to a large list, and then I try to iterate over that list. Before I abandon Python and try to create this in C#, I wanted to know whether there was something I was doing wrong.

    clientItemsCursor.execute("Select ids from largetable where year =?", year)
    allIDRows = clientItemsCursor.fetchall()  # takes maybe 8 seconds
    for clientItemRow in allIDRows:
        aID = str(clientItemRow[0])
        # Do something with str -- removed because I was trying to determine what was slow
        count = count + 1

Some more information:
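A minimal sketch of the usual suggestion is below: iterate the cursor directly instead of materialising everything with fetchall(), and keep per-row work to a minimum. Variable names follow the question; the actual bottleneck in the asker's setup is not confirmed here.

    clientItemsCursor.execute("Select ids from largetable where year = ?", year)

    count = 0
    for row in clientItemsCursor:   # streams rows instead of building one huge list first
        aID = str(row[0])
        count += 1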

Error 28000: Login failed for user DOMAIN\\user with pyodbc

Submitted by 可紊 on 2019-12-01 03:00:54
I am trying to use Python to connect to a SQL database using Windows authentication. I looked at some of the posts here (e.g., here), but the suggested methods didn't seem to work. For example, I used the following code:

    cnxn = pyodbc.connect(driver='{SQL Server Native Client 11.0}', server='SERVERNAME', database='DATABASENAME', trusted_connection='yes')

But I got the following error:

    Error: ('28000', "[28000] [Microsoft][SQL Server Native Client 11.0][SQL Server] Login failed for user 'DOMAIN\\username'. (18456) (SQLDriverConnect); [28000] [Microsoft] [SQL Server Native Client 11.0][SQL
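With trusted_connection='yes', the credentials sent are those of the Windows account running the Python process, so error 18456 often means that account has no login on the server. A sketch of the SQL Server authentication alternative is below; every value in it is a placeholder rather than something taken from the question.

    import pyodbc

    # SQL Server authentication instead of Windows authentication;
    # server, database, login and password are placeholders.
    cnxn = pyodbc.connect(
        driver='{SQL Server Native Client 11.0}',
        server='SERVERNAME',
        database='DATABASENAME',
        uid='sql_login',
        pwd='sql_password',
    )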

pyodbc/sqlAlchemy enable fast_executemany

Submitted by 喜夏-厌秋 on 2019-12-01 00:35:34
In response to my question How to speed up data wrangling A LOT in Python + Pandas + sqlAlchemy + MSSQL/T-SQL, I was kindly directed to Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC by @IljaEverilä. NB: for test purposes I am only reading/writing 10k rows. I added the event listener and (a) the function is called, but (b) clearly executemany is not set, as the IF fails and cursor.fast_executemany is not set.

    def namedDbSqlAEngineCreate(dbName):
        # Create an engine and switch to the named db
        # returns the engine if successful and None if not
        # 2018-08-23 added fast_executemany
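For reference, the event-listener pattern from the linked answer typically looks like the sketch below; the engine URL is a placeholder. On recent SQLAlchemy versions, create_engine(..., fast_executemany=True) is also available for the mssql+pyodbc dialect, though which versions the question is using is not stated.

    from sqlalchemy import create_engine, event

    engine = create_engine("mssql+pyodbc://...")   # placeholder connection URL

    @event.listens_for(engine, "before_cursor_execute")
    def receive_before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
        # Only turn on fast_executemany for executemany() calls (e.g. DataFrame.to_sql).
        if executemany:
            cursor.fast_executemany = True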

Connecting to SQL Server from SQLAlchemy using odbc_connect

Submitted by 我是研究僧i on 2019-12-01 00:06:43
I am new to Python and SQL Server. I have been trying to insert a pandas DataFrame into our database for the past two days without any luck. Can anyone please help me debug the errors? I have tried the following:

    import pyodbc
    from sqlalchemy import create_engine

    engine = create_engine('mssql+pyodbc:///?odbc_connect=DRIVER={SQL Server};SERVER=bidept;DATABASE=BIDB;UID=sdcc\neils;PWD=neil!pass')
    engine.connect()
    df.to_sql(name='[BIDB].[dbo].[Test]', con=engine, if_exists='append')

However, at the engine.connect() line I am getting the following error:

    sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('08001', '
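The odbc_connect value normally has to be URL-encoded, since the raw connection string contains characters such as ';', '{' and '!'. A sketch of the usual quote_plus approach is below; the server, database and credentials are copied from the question, and the raw string prevents '\n' in 'sdcc\neils' from being read as a newline.

    try:
        from urllib.parse import quote_plus   # Python 3
    except ImportError:
        from urllib import quote_plus         # Python 2

    from sqlalchemy import create_engine

    params = quote_plus(
        r"DRIVER={SQL Server};SERVER=bidept;DATABASE=BIDB;UID=sdcc\neils;PWD=neil!pass"
    )
    engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
    engine.connect()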

pyodbc remove unicode strings

Submitted by 百般思念 on 2019-11-30 23:55:57
I'm using pyodbc to connect to SQL Server, and below is my connection string. Everything is set up properly, but the results are returned as unicode strings. I have CHARSET=UTF8 in the connection string, but it still returns unicode strings. Is there any way I can control this through the connection parameters themselves? I don't want to call an extra function to convert my unicode to normal strings.

    import pyodbc as p
    connstr = 'DRIVER={SQL Server};SERVER=USERNAME\SQLEXPRESS;DATABASE=TEST;Trusted_Connection=yes;unicode_results=True;CHARSET=UTF8'
    conn = p.connect(connstr)
    print conn
    cursor = conn.cursor()
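pyodbc 4.x exposes per-connection text handling through setdecoding()/setencoding(), which is usually where this is controlled rather than in the connection string. A minimal sketch against the Python 3 form of that API is below; whether it applies to the questioner's older Python 2 setup is not confirmed.

    import pyodbc as p

    conn = p.connect(connstr)   # connstr as defined above

    # Decode CHAR/WCHAR column data as UTF-8 and encode outgoing text the same way.
    conn.setdecoding(p.SQL_CHAR, encoding='utf-8')
    conn.setdecoding(p.SQL_WCHAR, encoding='utf-8')
    conn.setencoding(encoding='utf-8')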