bulkinsert

Bulk insert in MySQL: can I use the IGNORE clause? Is there a limit to the number of records for a bulk insert?

Submitted by 跟風遠走 on 2019-12-10 02:11:41
Question: I have a bunch of data that I want to insert, and I have decided to use a bulk insert for MySQL: INSERT INTO friends (requestor, buddy) VALUES (value1, value2), (value2, value1), (value3, value4), (value4, value3), ... I would like to know the following: 1) Can I use IGNORE? E.g. INSERT IGNORE INTO friends (requestor, buddy) VALUES (value1, value2), (value2, value1), (value3, value4), (value4, value3), ... What happens if I have a duplicate? Will it a) not insert everything? b) insert the records…
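The short answer is that INSERT IGNORE skips only the offending rows and inserts the rest, provided a unique key exists to define what a duplicate is. A minimal sketch, assuming a hypothetical unique key on (requestor, buddy):

    -- Hypothetical schema; the unique key is what makes a row a "duplicate".
    CREATE TABLE friends (
        requestor INT NOT NULL,
        buddy     INT NOT NULL,
        UNIQUE KEY uq_pair (requestor, buddy)
    );

    -- INSERT IGNORE inserts the non-duplicate rows and silently skips
    -- (with a warning) any row that would violate the unique key,
    -- instead of aborting the whole statement as a plain INSERT would.
    INSERT IGNORE INTO friends (requestor, buddy)
    VALUES (1, 2), (2, 1), (3, 4), (4, 3), (1, 2);
    -- The last tuple duplicates the first: four rows inserted, one skipped.

As for a row-count limit: there is no fixed limit on the number of tuples per statement; the practical bound is max_allowed_packet, which caps the total byte size of the statement.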

What permissions are required to bulk insert in SQL Server from a network share with Windows authentication?

Submitted by 白昼怎懂夜的黑 on 2019-12-09 17:51:47
Question: I am working on an application which bulk-loads data into a SQL Server 2008 database. It writes a CSV file to a network share, then calls a stored procedure which contains a BULK INSERT command. I'm migrating the application to what amounts to a completely new network. In this new world, bulk insertion fails with this error: Msg 4861, Level 16, State 1, Line 1 Cannot bulk load because the file "\\myserver\share\subfolder\filename" could not be opened. Operating system error code 5 (failed to…
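In broad terms, two separate permissions are involved: SQL Server permissions for the caller, and file-system access on the share for the account the server uses to open the file (with Windows authentication against a remote share, this typically requires Kerberos delegation of the caller's own credentials). A hedged sketch of the server-side grants, with DOMAIN\AppUser as a placeholder login:

    -- Server-level permission needed to run BULK INSERT at all
    -- (membership in the bulkadmin fixed server role is equivalent).
    USE master;
    GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\AppUser];

    -- Plus ordinary INSERT permission on the target table.
    USE MyDatabase;
    GRANT INSERT ON dbo.TargetTable TO [DOMAIN\AppUser];

Operating system error code 5 is Windows' access-denied error, so if the grants above are already in place, the remaining suspect is the share/NTFS ACL for the account actually opening the file.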

Issue with bulk insert

Submitted by 无人久伴 on 2019-12-09 15:55:03
Question: I am trying to insert the data from this link into my SQL Server: https://www.ian.com/affiliatecenter/include/V2/CityCoordinatesList.zip I created the table: CREATE TABLE [dbo].[tblCityCoordinatesList]( [RegionID] [int] NOT NULL, [RegionName] [nvarchar](255) NULL, [Coordinates] [nvarchar](4000) NULL ) ON [PRIMARY] And I am running the following script to do the bulk insert: BULK INSERT tblCityCoordinatesList FROM 'C:\data\CityCoordinatesList.txt' WITH ( FIRSTROW = 2, MAXERRORS = 0, FIELDTERMINATOR…
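For reference, a minimal sketch of what the complete statement usually looks like; the tab field terminator is an assumption about the export format, not something stated in the excerpt:

    BULK INSERT dbo.tblCityCoordinatesList
    FROM 'C:\data\CityCoordinatesList.txt'
    WITH (
        FIRSTROW = 2,
        MAXERRORS = 0,
        FIELDTERMINATOR = '\t',   -- assumed tab-delimited; verify against the file
        ROWTERMINATOR = '\n'
    );

If the load still fails, a mismatch between the declared terminators and the file's actual delimiters (or Windows \r\n line endings) is the usual cause.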

BULK INSERT error code 3: The system cannot find the path specified

Submitted by 狂风中的少年 on 2019-12-09 13:23:06
Question: I am trying to bulk insert a local file into a remote MS SQL database using pyodbc. I am able to connect to the DB, and I am able to INSERT INTO tables, as I have done it before. Where I have been having issues is with BULK INSERT. I am using BULK INSERT as a way to speed up my INSERT process. The code looks like this: statement = """ BULK INSERT BulkTable FROM 'C:\\Users\\userName\\Desktop\\Folder\\Book1.csv' WITH ( FIRSTROW=2, FIELDTERMINATOR=',', ROWTERMINATOR = '\\n' ); """ cursor.execute…
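Operating system error code 3 ("the system cannot find the path specified") almost always means the path was resolved on the wrong machine: BULK INSERT opens the file from the SQL Server host, not from the client running pyodbc. A sketch of the usual fix, with placeholder machine and share names, is to expose the client folder as a share and reference it by UNC path:

    -- The path below is resolved by the SQL Server service, not pyodbc,
    -- so it must be reachable from the server machine.
    BULK INSERT BulkTable
    FROM '\\ClientMachine\Shared\Book1.csv'
    WITH (
        FIRSTROW = 2,
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );

The account SQL Server runs under must have read access to that share; otherwise the error simply shifts from code 3 to code 5 (access denied).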

How do I achieve best performance using SQLite in Android?

Submitted by ≡放荡痞女 on 2019-12-09 09:53:40
Question: I've noticed that there are multiple ways to do SQLite operations (query, insert, update, delete), and some can be faster than others. Many websites provide different tips, and some conflict with others. It seems that using a transaction for a bulk insert is somehow faster than doing it in a loop. How come? What is the best way to achieve the best performance when using SQLite? How does SQLite work on Android? I am confused about using InsertHelper vs. ContentValues, as shown here. How does the…
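The reason a transaction helps: outside an explicit transaction, SQLite wraps every single INSERT in its own implicit transaction, so each row pays for a full commit (including a disk sync); inside one transaction, all rows share a single commit. A minimal sketch in plain SQLite SQL, with a hypothetical items table:

    -- One commit (one sync) for the whole batch instead of one per row.
    BEGIN TRANSACTION;
    INSERT INTO items (name) VALUES ('a');
    INSERT INTO items (name) VALUES ('b');
    INSERT INTO items (name) VALUES ('c');
    -- ...thousands more rows...
    COMMIT;

On Android the same effect comes from wrapping the insert loop in db.beginTransaction() / db.setTransactionSuccessful() / db.endTransaction() on SQLiteDatabase.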

What is the optimum bulk item count with the InsertBatch method in the MongoDB C# driver?

Submitted by 烈酒焚心 on 2019-12-08 19:32:03
Question: I heard that large batch sizes don't really give any additional performance. What is the optimum? Answer 1: If you call Insert to insert documents one at a time, there is a network round trip for each document. If you call InsertBatch to insert documents in batches, there is a network round trip for each batch instead of for each document. InsertBatch is more efficient than Insert because it reduces the number of network round trips. Suppose you had to insert 1,000,000 documents; you could analyze…

sqlcmd script with spaces in filename

Submitted by 爷,独闯天下 on 2019-12-08 15:44:47
Question: I have a simple SQLCMD script that includes some lines like this: /* Load data into Example table */ BULK INSERT dbo.Example /* NOTE: I've tried single AND double quotes here. */ FROM "C:\Example Filepath\test.csv" WITH ( /* skip the first row containing column names */ FIRSTROW = 2, /* specify how fields are separated */ FIELDTERMINATOR = '|', /* specify how lines end */ ROWTERMINATOR = '\n' ) When I run it on the command line, I get an error like this: Sqlcmd: 'C:\Example': Invalid filename.
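Two things are worth separating here. Inside T-SQL, the file name is safest as a single-quoted string literal (double quotes are only treated as a string when QUOTED_IDENTIFIER is OFF, which sqlcmd invocations don't all guarantee). And the "Sqlcmd:" prefix on the error suggests sqlcmd itself, not the server, is rejecting the path, which is what happens when a space-containing path is passed unquoted on the command line. A hedged sketch of both fixes:

    /* Inside the script: single-quoted literal; spaces are fine
       inside the quotes regardless of QUOTED_IDENTIFIER. */
    BULK INSERT dbo.Example
    FROM 'C:\Example Filepath\test.csv'
    WITH (
        FIRSTROW = 2,
        FIELDTERMINATOR = '|',
        ROWTERMINATOR = '\n'
    );
    -- On the command line, quote any path containing spaces, e.g.:
    --   sqlcmd -S myserver -i "C:\Script Folder\load.sql"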

BULK COLLECT … FORALL usage

Submitted by 女生的网名这么多〃 on 2019-12-08 11:46:28
Question: I want to understand the usage of, and the need for, BULK COLLECT and FORALL statements. An example is mentioned here. In most examples on different web pages, authors first fetch data from a table using a BULK COLLECT statement; after that, they insert it into the target table using a FORALL statement: DECLARE TYPE prod_tab IS TABLE OF products%ROWTYPE; products_tab prod_tab := prod_tab(); BEGIN -- Populate a collection - 100000 rows SELECT * BULK COLLECT INTO products_tab FROM source_products;…
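The excerpt stops just before the FORALL half of the pattern. A minimal sketch of the complete pair, keeping the excerpt's names and assuming a target_products table with the same shape as products:

    DECLARE
        TYPE prod_tab IS TABLE OF products%ROWTYPE;
        products_tab prod_tab := prod_tab();
    BEGIN
        -- Fetch the whole source set into the collection in one round trip.
        SELECT * BULK COLLECT INTO products_tab
        FROM source_products;

        -- FORALL ships the entire batch of binds to the SQL engine in a
        -- single context switch, instead of one switch per row as a
        -- FOR loop would.
        FORALL i IN 1 .. products_tab.COUNT
            INSERT INTO target_products VALUES products_tab(i);

        COMMIT;
    END;
    /

For truly large row counts, fetching through a cursor with BULK COLLECT ... LIMIT in chunks keeps the collection's memory footprint bounded.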

Bulk insert in parent and child table using sp_xml_preparedocument

Submitted by 空扰寡人 on 2019-12-08 08:18:26
I am using sp_xml_preparedocument for bulk insertion, but I want to bulk insert into the parent table, get SCOPE_IDENTITY() for each newly inserted row, and then bulk insert into the child table. I can do this by declaring a table variable for the parent table in the procedure and inserting the parent-table data into it, then looping through each row with a cursor, inserting into the actual table and then into the child table. But is there any better way, without a cursor? I want an optimal solution. Answer (Mikael Eriksson): If you are on SQL Server 2008 or later, you can use MERGE, as described in this question. Create a…
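The trick being referred to relies on the fact that MERGE, unlike INSERT, can OUTPUT columns from the source rows alongside the generated identity values, giving a set-based mapping from staged rows to new parent keys. A hedged sketch with placeholder table and column names (dbo.Parent is assumed to have an identity column ParentID):

    DECLARE @staging TABLE (SourceID INT, Name NVARCHAR(100), Detail NVARCHAR(100));
    DECLARE @map     TABLE (SourceID INT, ParentID INT);

    -- ON 1 = 0 never matches, so every staged row becomes an INSERT,
    -- and OUTPUT can pair the source key with the generated identity.
    MERGE dbo.Parent AS tgt
    USING @staging AS src
        ON 1 = 0
    WHEN NOT MATCHED THEN
        INSERT (Name) VALUES (src.Name)
    OUTPUT src.SourceID, inserted.ParentID INTO @map (SourceID, ParentID);

    -- Child rows now go in as one set-based statement via the mapping.
    INSERT INTO dbo.Child (ParentID, Detail)
    SELECT m.ParentID, s.Detail
    FROM @staging s
    JOIN @map m ON m.SourceID = s.SourceID;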

Optimize massive MySQL INSERTs

Submitted by 雨燕双飞 on 2019-12-08 07:10:45
Question: I've got an application which needs to run a daily script; the daily script consists of downloading a CSV file with 1,000,000 rows and inserting those rows into a table. I host my application on Dreamhost. I created a while loop that goes through all the CSV's rows and performs an INSERT query for each one. The thing is that I get a "500 Internal Server Error". Even if I chop it up into 1,000 files with 1,000 rows each, I can't insert more than 40 or 50 thousand rows in the same loop. Is there…
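One INSERT per row means a million statement round trips; the two standard ways to collapse that are multi-row INSERTs and LOAD DATA. A sketch with placeholder table and column names:

    -- 1) Multi-row INSERT: a few hundred tuples per statement, bounded
    --    by max_allowed_packet rather than any fixed row count.
    INSERT INTO imported_rows (col1, col2)
    VALUES ('a', 1), ('b', 2), ('c', 3);  -- ...extend to a few hundred tuples

    -- 2) LOAD DATA: let MySQL parse the CSV directly; usually the
    --    fastest option for a million-row file, if the host allows it.
    LOAD DATA LOCAL INFILE '/path/to/file.csv'
    INTO TABLE imported_rows
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;

On shared hosting the 500 error is typically the script hitting a time or memory limit, so cutting a million statements down to a few thousand (or one) usually resolves it as a side effect. Note that LOCAL INFILE must be enabled on both client and server.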