bulkinsert

BULK INSERT error code 3: The system cannot find the path specified

限于喜欢 submitted on 2019-12-03 17:36:11
I am trying to bulk insert a local file into a remote MS SQL database using pyodbc. I can connect to the DB, and I am able to INSERT INTO tables, as I have done before. Where I have been having issues is with BULK INSERT, which I am using as a way to speed up my INSERT process. The code looks like this:

    statement = """
    BULK INSERT BulkTable
    FROM 'C:\\Users\\userName\\Desktop\\Folder\\Book1.csv'
    WITH (
        FIRSTROW = 2,
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\\n'
    );
    """
    cursor.execute(statement)
    cnxn.commit()

This code yields this error:

    Traceback (most recent call last):
      File "tester.py
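A note on the likely cause: BULK INSERT runs inside the SQL Server process, so the FROM path is resolved on the database server's file system, not on the machine running the Python script. Operating-system error 3 ("The system cannot find the path specified") usually means the server simply cannot see that client-local path. A minimal sketch of the usual fix, assuming the folder is shared and the SQL Server service account can read it (the UNC path below is a placeholder):

    # BULK INSERT reads the file server-side, so point it at a share the
    # SQL Server service account can reach instead of a client-local path.
    # cursor / cnxn are the same pyodbc objects as in the snippet above.
    statement = """
    BULK INSERT BulkTable
    FROM '\\\\myClientMachine\\SharedFolder\\Book1.csv'
    WITH (
        FIRSTROW = 2,
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\\n'
    );
    """
    cursor.execute(statement)
    cnxn.commit()

If sharing the file is not an option, a client-side load using cursor.fast_executemany = True with cursor.executemany (available in recent pyodbc versions) avoids the server-side path requirement entirely, at some cost in speed.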

Dynamics CRM 2011 Bulk Update

假如想象 submitted on 2019-12-03 16:47:45
Question: We are running Dynamics CRM 2011 Update Rollup 3 and need to update millions of customer records periodically (delta updates). Using the standard update (one record at a time) takes a few weeks, and we don't want to touch the DB directly, as that may break things in the future. Is there a bulk update method in the Dynamics CRM 2011 web service/REST API we can use? (WhatWhereHow) Answer 1: I realize this post is over 2 years old, but I can add to it in case someone else reads it and has a similar need. Peter Majeed's answer is

Fastest way to create large file in C++?

别说谁变了你拦得住时间么 submitted on 2019-12-03 16:11:06
Question: Create a flat text file in C++ of around 50-100 MB, where the content 'Added first line' is inserted into the file 4 million times. What is the fastest way? Answer 1: Using old-style file I/O: fopen the file for write, fseek to the desired file size minus one, fwrite a single byte, and fclose the file. Answer 2: The fastest way to create a file of a certain size is to simply create a zero-length file using creat() or open() and then change the size using chsize(). This simply allocates blocks on the disk for the file; the contents will be whatever happened to be in those blocks. It is very fast, since no buffer writing needs to take place.
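Both answers describe the same pre-allocation trick, and it is language-agnostic. A minimal sketch of the two variants, shown in Python for brevity (file names and size are placeholders; the C++ version would use fseek/fwrite or creat/chsize exactly as described above):

    import os

    SIZE = 100 * 1024 * 1024  # ~100 MB, per the question's upper bound

    # Variant 1: seek past the end and write a single byte; the OS extends
    # the file (often sparsely) without writing the intervening bytes.
    with open("big.dat", "wb") as f:
        f.seek(SIZE - 1)
        f.write(b"\0")

    # Variant 2: create an empty file and set its length directly,
    # the creat()/chsize() approach from the second answer.
    with open("big2.dat", "wb"):
        pass
    os.truncate("big2.dat", SIZE)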

Copying data between Oracle schemas using SQL

依然范特西╮ submitted on 2019-12-03 12:22:21
Question: I'm trying to copy data from one Oracle schema (CORE_DATA) into another (MY_DATA) using an INSERT INTO (...) SQL statement. What would the SQL statement look like? Answer 1: Prefix your table names with the schema names when logged in as a user with access to both:

    insert into MY_DATA.table_name select * from CORE_DATA.table_name;

Assuming that the tables are defined identically in both schemas, the above will copy all records from the table named table_name in CORE_DATA to the table named table_name in MY_DATA. Answer 2 (funny_irony): usage: COPY FROM [db] TO [db] [opt] [table] { ([cols]) } USING [sel] [db] :

HyperSQL (HSQLDB): massive insert performance

大兔子大兔子 submitted on 2019-12-03 09:02:40
Question: I have an application that has to insert about 13 million rows, each of about 10 average-length strings, into an embedded HSQLDB. I've been tweaking things (batch size, single-threaded/multithreaded, cached/non-cached tables, MVCC transactions, log_size/no logs, regular calls to checkpoint, ...) and it still takes 7 hours on a 16-core, 12 GB machine. I chose HSQLDB because I figured I might get a substantial performance gain if I put all of those cores to good use, but I'm seriously starting to doubt my decision. Can anyone show me the silver bullet? Answer 1: With CACHED tables, disk IO is taking most of the

What are the pitfalls of inserting millions of records into SQL Server from flat file?

十年热恋 submitted on 2019-12-03 08:32:19
I am about to start on a journey writing a Windows Forms application that will open a txt file that is pipe-delimited and about 230 MB in size. This app will then insert the data into a SQL Server 2005 database (obviously this needs to happen swiftly). I am using C# 3.0 and .NET 3.5 for this project. I am not asking for the app, just some communal advice here and advice on potential pitfalls. From the site I have gathered that SQL bulk copy is a prerequisite; is there anything I should think about? (I think that just opening the txt file with a forms app will be a large endeavor; maybe break it
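The instinct in the last line of that excerpt, processing the file in chunks rather than loading all 230 MB at once, is sound. Below is a sketch of the streaming-and-batching shape, in Python with placeholder table and connection names; the question's C#/.NET stack gets the same effect from SqlBulkCopy with its BatchSize property:

    import csv
    import pyodbc

    CHUNK = 10_000  # rows per transaction; tune for your server

    # Placeholder connection; the point is the streaming/chunking shape.
    cnxn = pyodbc.connect("DSN=mySqlServer2005")
    cursor = cnxn.cursor()
    cursor.fast_executemany = True

    def flush(batch):
        # Three placeholder columns; match the real table's layout.
        cursor.executemany("INSERT INTO dbo.Staging VALUES (?, ?, ?)", batch)
        cnxn.commit()
        batch.clear()

    with open("import.txt", newline="") as f:
        batch = []
        for row in csv.reader(f, delimiter="|"):
            batch.append(row)
            if len(batch) == CHUNK:  # never hold the whole file in memory
                flush(batch)
        if batch:  # trailing partial chunk
            flush(batch)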

Why aren't my triggers firing during an insert by SSIS?

我是研究僧i submitted on 2019-12-03 08:24:32
Question: I have an SSIS data flow task with an OLE DB Destination component that inserts records into a table with a trigger. When I execute a normal INSERT statement against this table, the trigger fires. When I insert records through the SSIS task, the trigger does not fire. How can I get the trigger firing in SSIS? Answer 1: Because the OLE DB Destination task uses a bulk insert, triggers are not fired by default. From BULK INSERT (MSDN): If FIRE_TRIGGERS is not specified, no insert triggers execute. One
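For the OLE DB Destination itself, the usual remedy (worth verifying against your SSIS version) is to keep fast load and add FIRE_TRIGGERS to the component's FastLoadOptions property. The equivalent option in a hand-written T-SQL bulk load, sketched here from Python with placeholder names:

    import pyodbc

    cnxn = pyodbc.connect("DSN=mySqlServer")  # placeholder connection
    cursor = cnxn.cursor()

    # FIRE_TRIGGERS makes the bulk load execute the destination table's
    # insert triggers, at some cost in load speed. Names are placeholders.
    cursor.execute("""
    BULK INSERT dbo.TargetTable
    FROM '\\\\fileServer\\share\\data.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\\n',
        FIRE_TRIGGERS
    );
    """)
    cnxn.commit()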

Implementing a periodically refreshing Cache in Java

只谈情不闲聊 submitted on 2019-12-03 07:47:35
My use case is to maintain an in-memory cache over data stored in a persistent DB. I use the data to populate a list/map of entries on the UI. At any given time, the data displayed on the UI should be as up to date as possible (within the refresh frequency of the cache). The major difference between a regular cache implementation and this particular cache is that it needs a bulk refresh of all elements at regular intervals, and hence is pretty different from an LRU kind of cache. I need to do this implementation in Java, and it would be great if there are any existing
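In Java, this shape is commonly built from a ScheduledExecutorService task that bulk-loads a fresh immutable map and publishes it through a volatile or AtomicReference field, so readers always see one consistent snapshot. A minimal language-agnostic sketch of that swap-on-refresh idea (in Python; load_all is a hypothetical loader that returns the full data set as a dict):

    import threading
    import time

    class RefreshingCache:
        """Periodically rebuilds an immutable snapshot in the background.

        Readers always see a complete snapshot; the swap is a single
        reference assignment, so no read observes a half-refreshed map.
        """

        def __init__(self, load_all, interval_seconds):
            self._load_all = load_all
            self._interval = interval_seconds
            self._snapshot = load_all()  # initial synchronous load
            worker = threading.Thread(target=self._refresh_loop, daemon=True)
            worker.start()

        def _refresh_loop(self):
            while True:
                time.sleep(self._interval)
                fresh = self._load_all()  # bulk reload, off the read path
                self._snapshot = fresh    # atomic reference swap

        def get(self, key):
            return self._snapshot.get(key)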

TSQL: UPDATE with INSERT INTO SELECT FROM

谁说我不能喝 submitted on 2019-12-03 06:07:46
I have an old database that I'm migrating to a new one. The new one has a slightly different but mostly compatible schema. Additionally, I want to renumber all tables from zero. Currently I have been using a tool I wrote that manually retrieves each old record, inserts it into the new database, and updates a v2 ID field in the old database to record its corresponding ID in the new database. For example, I'm selecting from MV5.Posts and inserting into MV6.Posts. Upon the insert, I retrieve the ID of the new row in MV6.Posts and update it in the old MV5.Posts.MV6ID field. Is there a way
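A set-based alternative to the row-at-a-time tool is MERGE ... OUTPUT: unlike INSERT ... OUTPUT, the MERGE form may reference source columns, so it can capture an old-ID-to-new-ID mapping while it inserts. A sketch under assumed column names (Title and Body are hypothetical; only ID and MV6ID appear in the question), driven from Python:

    import pyodbc

    cnxn = pyodbc.connect("DSN=mySqlServer")  # placeholder connection
    cursor = cnxn.cursor()

    cursor.execute("""
    DECLARE @map TABLE (OldID int, NewID int);

    -- ON 1 = 0 never matches, so every MV5 row takes the INSERT branch,
    -- and OUTPUT records the identity assigned to each new MV6 row.
    MERGE INTO MV6.Posts AS tgt
    USING MV5.Posts AS src
        ON 1 = 0
    WHEN NOT MATCHED THEN
        INSERT (Title, Body) VALUES (src.Title, src.Body)
    OUTPUT src.ID, inserted.ID INTO @map (OldID, NewID);

    -- Write the mapping back so each old row records its new ID.
    UPDATE old
    SET old.MV6ID = m.NewID
    FROM MV5.Posts AS old
    JOIN @map AS m ON m.OldID = old.ID;
    """)
    cnxn.commit()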