bulkinsert

pyodbc - very slow bulk insert speed

我怕爱的太早我们不能终老 Posted on 2019-11-29 05:53:46
Question: With this table:

CREATE TABLE test_insert (
    col1 INT,
    col2 VARCHAR(10),
    col3 DATE
)

the following code takes 40 seconds to run:

import pyodbc
from datetime import date

conn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};'
                      'SERVER=localhost;DATABASE=test;UID=xxx;PWD=yyy')

rows = []
row = [1, 'abc', date.today()]
for i in range(10000):
    rows.append(row)

cursor = conn.cursor()
cursor.executemany('INSERT INTO test_insert VALUES (?, ?, ?)', rows)
conn.commit()

The equivalent code with
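A common fix for this kind of slowdown, assuming pyodbc 4.0.19 or later and one of Microsoft's ODBC drivers, is to enable fast_executemany so the whole parameter array is sent to the server in bulk instead of one round trip per row. A minimal sketch against the same table (switching the driver name is the only change from the question's setup):

import pyodbc
from datetime import date

conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=localhost;DATABASE=test;UID=xxx;PWD=yyy')
rows = [(1, 'abc', date.today())] * 10000

cursor = conn.cursor()
cursor.fast_executemany = True   # bind all 10000 parameter sets as one batch
cursor.executemany('INSERT INTO test_insert VALUES (?, ?, ?)', rows)
conn.commit()

With the flag off, pyodbc issues a separate INSERT round trip per row, which is what makes the original code so slow.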

Bulk Load Files into SQL Azure?

两盒软妹~` Posted on 2019-11-29 05:50:06
I have an ASP.NET app that takes multi-megabyte file uploads, writes them to disk, and later loads them into MSSQL 2008 with BCP. I would like to move the whole thing to Azure, but since there are no "files" for BCP, can anyone comment on how to get bulk data from an Azure app into SQL Azure? I did see http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx but am not sure if that applies. Thanks.

BCP is one way to do it. This post explains it in three easy steps: Bulk insert with Azure SQL.

Loading data to SQL Azure the fast way.

You are on the right track. The Bulk Copy API
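If the uploads are modest in size, an alternative to staging files for BCP is to stream the rows straight from application code into SQL Azure over a normal connection. A hedged sketch with pyodbc (the server, credentials, file, table and column names are all made up for illustration, not from the post):

import csv
import pyodbc

conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=myserver.database.windows.net;DATABASE=mydb;'
                      'UID=myuser;PWD=secret')
cursor = conn.cursor()
cursor.fast_executemany = True   # batch the round trips to Azure

with open('upload.csv', newline='') as f:
    rows = [tuple(r) for r in csv.reader(f)]

cursor.executemany('INSERT INTO dbo.Uploads (col1, col2, col3) VALUES (?, ?, ?)', rows)
conn.commit()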

How to use BULK INSERT when rows depend on foreign key values?

余生长醉 Posted on 2019-11-29 04:08:41
My question is related to this one I asked on ServerFault. Based on this, I've considered the use of BULK INSERT. I now understand that I have to prepare a file for each entity I want to save into the database. No matter what, I still wonder whether this BULK INSERT will avoid the memory issue on my system as described in the referenced question on ServerFault. As for the Streets table, it's quite simple! I have only two cities and five sectors to care about as the foreign keys. But then, how about the Addresses? The Addresses table is structured like this: AddressId int not null
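Since the city and sector lookups are tiny, one way to keep memory flat (a sketch, not the answer from the referenced question) is to hold just those two lookups in memory and stream each address straight to the flat file that BULK INSERT will load, with the foreign key IDs already resolved. The file layout, column names and ID values below are assumptions:

import csv

# Small lookup tables: name -> surrogate key (hypothetical values).
city_ids = {'Quebec': 1, 'Montreal': 2}
sector_ids = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}

with open('addresses_raw.csv', newline='') as src, \
        open('addresses_bulk.txt', 'w', newline='') as dst:
    writer = csv.writer(dst, delimiter='|')
    for street, number, city, sector in csv.reader(src):
        # One pipe-delimited row per address, streamed so the whole
        # data set never has to sit in memory at once.
        writer.writerow([street, number, city_ids[city], sector_ids[sector]])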

SQL Server: BULK INSERT of a CSV with data containing commas

拥有回忆 Posted on 2019-11-29 03:50:59
Below is a sample line of the CSV:

012,12/11/2013,"<555523051548>KRISHNA KUMAR ASHOKU,AR",<10-12-2013>,555523051548,12/11/2013,"13,012.55",

You can see that KRISHNA KUMAR ASHOKU,AR is a single field, but it is treated as two different fields (KRISHNA KUMAR ASHOKU and AR) because of the comma, even though the field is enclosed in double quotes. I tried

BULK INSERT tbl
FROM 'd:\1.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
)
GO

but still no luck. Is there any solution for it?

The answer is: you can't do that. See http://technet.microsoft.com/en-us/library/ms188365.aspx. "Importing Data from a CSV file Comma
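If moving to a SQL Server version whose BULK INSERT understands quoted CSV fields is not an option, one workaround is to rewrite the file with a delimiter that cannot appear in the data; Python's csv module parses the quoted fields correctly and can write them back out unquoted. A sketch (the pipe delimiter and the output filename are assumptions):

import csv

with open('1.csv', newline='') as src, \
        open('1_pipe.txt', 'w', newline='') as dst:
    reader = csv.reader(src)                 # reads "13,012.55" as one field
    writer = csv.writer(dst, delimiter='|')  # fields without '|' are written unquoted
    for row in reader:
        writer.writerow(row)

The BULK INSERT can then use FIELDTERMINATOR = '|' against the rewritten file; pick a delimiter you are sure never occurs in the data, otherwise the writer will quote those fields and the same problem returns.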

SQL Bulk Insert with FIRSTROW parameter skips the following line

心已入冬 Posted on 2019-11-29 03:17:38
Question: I can't seem to figure out how this is happening. Here's an example of the file that I'm attempting to bulk insert into SQL Server 2005:

***A NICE HEADER HERE***
0000001234|SSNV|00013893-03JUN09
0000005678|ABCD|00013893-03JUN09
0000009112|0000|00013893-03JUN09
0000009112|0000|00013893-03JUN09

Here's my bulk insert statement:

BULK INSERT sometable
FROM 'E:\filefromabove.txt'
WITH
(
    FIRSTROW = 2,
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n'
)

But, for some reason the only output I can get is:
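One frequent cause of an extra skipped line is that the row terminator actually present in the file does not match the declared ROWTERMINATOR (for example \r\n in the file versus '\n' in the statement). Before changing the BULK INSERT, a small diagnostic like the following (assuming the path from the question) shows the raw terminators and the exact bytes of the header line:

# Print the raw bytes of the first few lines so the real row terminator
# (\r\n vs. \n) and any oddities in the header are visible.
with open(r'E:\filefromabove.txt', 'rb') as f:
    for _ in range(3):
        print(repr(f.readline()))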

Sybase JConnect: ENABLE_BULK_LOAD usage

我怕爱的太早我们不能终老 Posted on 2019-11-29 02:39:58
Can anyone out there provide an example of bulk inserts via JConnect (with ENABLE_BULK_LOAD) to Sybase ASE? I've scoured the internet and found nothing.

I got in touch with one of the engineers at Sybase and they provided me with a code sample, so I get to answer my own question. Here is a rundown, as the code sample is pretty large... This assumes a lot of pre-initialized variables, but otherwise it would be a few hundred lines. Anyone interested should get the idea. This can yield up to 22K insertions a second in a perfect world (as per Sybase, anyway).

SybDriver sybDriver = (SybDriver

How to BULK INSERT a file into a *temporary* table where the filename is a variable?

时光毁灭记忆、已成空白 Posted on 2019-11-28 22:38:10
I have some code like this that I use to do a BULK INSERT of a data file into a table, where the data file and table name are variables:

DECLARE @sql AS NVARCHAR(1000)
SET @sql = 'BULK INSERT ' + @tableName + ' FROM ''' + @filename + ''' WITH (CODEPAGE=''ACP'', FIELDTERMINATOR=''|'')'
EXEC (@sql)

This works fine for standard tables, but now I need to do the same sort of thing to load data into a temporary table (for example, #MyTable). But when I try this, I get the error:

Invalid Object Name: #MyTable

I think the problem is due to the fact that the BULK INSERT statement is constructed on the
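For what it's worth, when the load is driven from client code rather than from a stored procedure, one way to sidestep the scoping question is to create the temp table and then run the dynamically built BULK INSERT on the same connection, since a local temp table lives for the duration of that session. A sketch with pyodbc; the driver, file path and column list are assumptions, not from the question:

import pyodbc

conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=localhost;DATABASE=test;Trusted_Connection=yes',
                      autocommit=True)
cursor = conn.cursor()

filename = r'C:\data\myfile.dat'   # hypothetical path, built at run time
cursor.execute('CREATE TABLE #MyTable (col1 INT, col2 NVARCHAR(50))')
# The temp table was created on this connection, so a BULK INSERT built as a
# string and executed on the same connection can see it.
cursor.execute("BULK INSERT #MyTable FROM '{}' "
               "WITH (CODEPAGE='ACP', FIELDTERMINATOR='|')".format(filename))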

Efficient bulk update of a Rails database

≯℡__Kan透↙ Posted on 2019-11-28 21:42:19
Question: I'm trying to build a rake utility that will update my database every so often. This is the code I have so far:

namespace :utils do

  # utils:update_ip
  # Downloads the file from <url> to the temp folder then unzips it in <file_path>
  # Then updates the database.

  desc "Update ip-to-country database"
  task :update_ip => :environment do

    require 'open-uri'
    require 'zip/zipfilesystem'
    require 'csv'

    file_name = "ip-to-country.csv"
    file_path = "#{RAILS_ROOT}/db/" + file_name
    url = 'http://ip-to-country
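The task boils down to a generic pattern: download an archive, extract the CSV, then load the rows in one batch rather than saving records one at a time. As a language-neutral illustration of that pattern (not the Rails answer), here is a minimal Python sketch; the URL, file names, table layout and the use of sqlite3 are all placeholders:

import csv
import sqlite3
import urllib.request
import zipfile

url = 'http://example.com/ip-to-country.zip'      # placeholder URL
archive, _ = urllib.request.urlretrieve(url)       # download to a temp file

with zipfile.ZipFile(archive) as z:
    z.extract('ip-to-country.csv', 'db')            # unzip into ./db/

conn = sqlite3.connect('db/lookup.sqlite3')          # stand-in for the real database
conn.execute('CREATE TABLE IF NOT EXISTS ip_to_country '
             '(ip_from TEXT, ip_to TEXT, code TEXT, country TEXT)')  # assumed layout

with open('db/ip-to-country.csv', newline='') as f:
    # One bulk executemany instead of one INSERT (or one ActiveRecord save) per row.
    conn.executemany('INSERT INTO ip_to_country VALUES (?, ?, ?, ?)', csv.reader(f))
conn.commit()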

Inserts with NHibernate's stateless session are slow

为君一笑 Posted on 2019-11-28 21:32:32
Question: I've been working for a couple of days on improving NHibernate insert performance. I'd read in many posts (such as this one) that a stateless session can insert something like 1000~2000 records per second... However, the best it has managed for me is more than 9 seconds to insert 1243 records:

var sessionFactory = new NHibernateConfiguration().CreateSessionFactory();
using (IStatelessSession statelessSession = sessionFactory.OpenStatelessSession())
{
    statelessSession.SetBatchSize(adjustmentValues
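The throughput figures quoted for stateless sessions assume the inserts are genuinely batched (adonet.batch_size configured and SetBatchSize honored), not issued as one statement per row. As a language-neutral sketch of what client-side batching means, independent of NHibernate and with a hypothetical helper name:

def chunked(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def batch_insert(cursor, sql, rows, batch_size=1000):
    # One round trip per chunk instead of one per row; this is the effect
    # that a working batch size setting is meant to achieve.
    for chunk in chunked(rows, batch_size):
        cursor.executemany(sql, chunk)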

Does SqlBulkCopy automatically start a transaction?

北城以北 Posted on 2019-11-28 21:16:50
Question: I am inserting data via SqlBulkCopy like so:

public void testBulkInsert(string connection, string table, DataTable dt)
{
    using (SqlConnection con = new SqlConnection(connection))
    {
        con.Open();
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
        {
            bulkCopy.DestinationTableName = table;
            bulkCopy.WriteToServer(dt);
        }
    }
}

Will this automatically be wrapped in a SQL transaction so that if something goes wrong half way through the DB will be left in the same state as it was before the bulk insert
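Whatever the bulk-copy API does internally, the usual way to guarantee the all-or-nothing behavior the question is after is to wrap the bulk operation in an explicit transaction yourself. The same pattern in a hedged pyodbc sketch, since the idea is not specific to .NET; the connection string and table are placeholders:

import pyodbc

conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=localhost;DATABASE=test;Trusted_Connection=yes')
conn.autocommit = False          # everything below runs in one transaction
cursor = conn.cursor()
cursor.fast_executemany = True
try:
    cursor.executemany('INSERT INTO dbo.Target VALUES (?, ?)', [(1, 'a'), (2, 'b')])
    conn.commit()                # nothing is permanent until this succeeds
except pyodbc.Error:
    conn.rollback()              # leave the table exactly as it was
    raise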