bulkinsert

Bulk inserts into SQLite DB on the iPhone

I'm inserting a batch of 100 records, each containing a dictionary of arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong…
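
A common first fix, before reaching for another thread, is to wrap the whole batch in one explicit transaction (ideally reusing a single prepared statement), so SQLite pays the commit cost once instead of once per row. A minimal plain-SQL sketch; the pages table and its columns are placeholders, not from the question:

-- sketch: one transaction around the whole batch (table and columns are hypothetical)
BEGIN TRANSACTION;
INSERT INTO pages (url, html) VALUES ('http://example.com/1', '<html>...</html>');
INSERT INTO pages (url, html) VALUES ('http://example.com/2', '<html>...</html>');
-- ...the remaining rows, ideally bound through one prepared statement...
COMMIT;

Even with a single transaction, a multi-second batch is usually still worth moving off the main run loop, e.g. onto a dedicated serial queue that owns the database connection.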

How to handle multiple updates / deletes with Elasticsearch?

I need to update or delete several documents. When I update, I do this: (1) I first search for the documents, setting a greater limit for the returned results (let's say, size: 10000). (2) For each of the returned documents, I modify certain values. (3) I resend the whole modified list to Elasticsearch (bulk index). This repeats until step 1 no longer returns results. When I delete, I do this: (1) I first search for the documents, again setting a greater limit for the returned results (let's say, size: 10000). (2) I delete every found document, sending Elasticsearch the document's _id (10,000 requests). This…

Import bulk data into MySQL

So I'm trying to import some sales data into my MySQL database. The data is originally in the form of a raw CSV file, which my PHP application needs to process first, then save the processed sales data to the database. Initially I was doing individual INSERT queries, which I realized was incredibly inefficient (~6000 queries taking almost 2 minutes). I then generated a single large query and INSERTed the data all at once. That gave us a 3400% increase in efficiency, and reduced the query…
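
For reference, the "single large query" approach is an ordinary multi-row INSERT. A minimal sketch; the sales table and its columns are placeholders, not taken from the question:

-- sketch: one statement carrying many rows instead of ~6000 single-row INSERTs
INSERT INTO sales (order_id, product, amount)
VALUES
  (1, 'widget', 9.99),
  (2, 'gadget', 4.50),
  (3, 'widget', 9.99);
-- ...and so on, batched in chunks small enough to stay under max_allowed_packet

Batching a few hundred to a few thousand rows per statement keeps each query under MySQL's max_allowed_packet limit while retaining most of the speedup.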

Filter null or empty input using LOAD DATA INFILE in MySQL

I have some very large files (millions of records) that I need to load into a database. They are of the form:

word1\tblahblahblah
word2\tblahblah
word3\tblahblah
word4
word5\tblahblah
...

My problem is that I want to ignore the lines that have no second field (the 'blahblah's'), like word4. I'm currently using the following query to load the file:

LOAD DATA LOCAL INFILE 'file' IGNORE INTO TABLE tablename COLUMNS TERMINATED BY '\t' LINES TERMINATED BY '\n' (col1, col2);

This has the functionality I want, except that it still accepts the null values. Is there a way to skip the word4-type lines…
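
LOAD DATA INFILE cannot drop rows conditionally on its own, so the usual workarounds are to route the second field through a user variable and clean up afterwards, or to stage the whole file and copy only complete rows. A hedged sketch of the first approach, reusing the names from the question:

-- sketch: load the second field through a variable, then delete the incomplete rows
LOAD DATA LOCAL INFILE 'file'
  IGNORE INTO TABLE tablename
  COLUMNS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
  (col1, @col2)
  SET col2 = NULLIF(@col2, '');

DELETE FROM tablename WHERE col2 IS NULL;

The staging variant is the same idea with the DELETE replaced by an INSERT ... SELECT ... WHERE col2 IS NOT NULL from a scratch table into the real one.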

BULK INSERT fails with row terminator on last row

I'm importing a CSV compiled using cygwin shell commands into MS SQL 2014 using:

BULK INSERT import from 'D:\tail.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\r', FIRSTROW = 1)
GO

I have confirmed that each row contains a \r\n. If I leave a CR/LF on the last row, the bulk import fails with Msg 4832: "Bulk load: An unexpected end of file was encountered in the data file." If I end the file at the end of the last data row, then the bulk import succeeds. For very large CSVs a kludgy way around this problem is to find the number of rows and use the LASTROW setting for the BULK INSERT. Is there…
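
One thing worth trying, hedged because it depends on exactly how the file was produced: declare the row terminator as the full CR/LF pair rather than just '\r', so the trailing line break on the final row is consumed as part of that row instead of opening a phantom empty row. A sketch using the hex form of the terminator:

-- sketch: terminate rows on the full CR/LF pair (hex notation avoids escaping surprises)
BULK INSERT import
FROM 'D:\tail.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0d0a', FIRSTROW = 1);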

Bulk insert, ASP.NET

I have a need to take in a list of ID numbers corresponding to a member. There can be anywhere from 10 to 10,000 being processed at any given time. I have no problem collecting the data, parsing it and loading it into a DataTable or anything (C#), but I want to do some operations in the database. What is the best way to insert all of this data into a table? I am pretty sure I don't want to run a foreach statement and insert 10,000 times. I've used the SqlBulkCopy class before to do a couple million adds. It seemed pretty handy. You may not want to execute an INSERT 10,000 times, but you…
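
Besides SqlBulkCopy, a table-valued parameter lets the whole ID list travel to the database in a single stored-procedure call. A minimal T-SQL sketch; the type, procedure and table names are hypothetical, not from the question:

-- sketch: a table type plus a procedure that inserts the entire list in one call
CREATE TYPE dbo.MemberIdList AS TABLE (MemberId INT NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.InsertMemberIds
    @Ids dbo.MemberIdList READONLY
AS
BEGIN
    INSERT INTO dbo.MemberIds (MemberId)
    SELECT MemberId FROM @Ids;
END
GO

On the C# side the DataTable is passed as one SqlParameter with SqlDbType.Structured, so the 10,000 IDs cost a single round trip.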

OrientDB GraphED - SQL insert edge between two (select vertex RID)s? Or alternative approach for very large import

For example, two simple vertices in an OrientDB Graph:

orientdb> CREATE DATABASE local:/databases/test admin admin local graph;
Creating database [local:/databases/test] using the storage type [local]...
Database created successfully.
Current database is: local:/graph1/databases/test

orientdb> INSERT INTO V (label,in,out) VALUES ('vertexOne',[],[]);
Inserted record 'V#6:0{label:vertexOne,in:[0],out:[0]} v0' in 0.001000 sec(s).

orientdb> INSERT INTO V (label,in,out) VALUES ('vertexTwo',[],[]);
Inserted record 'V#6:1{label:vertexTwo,in:[0],out:[0]} v0' in 0.000000 sec(s).

Is there a way to…
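
In later OrientDB releases the graph SQL has a CREATE EDGE statement that accepts a subquery for each endpoint, which avoids looking up the two RIDs by hand; whether it applies to the older GraphED build in the question is an assumption. A sketch against the two vertices above:

-- sketch (newer OrientDB SQL): let subqueries resolve the endpoint RIDs
CREATE EDGE E
  FROM (SELECT FROM V WHERE label = 'vertexOne')
  TO   (SELECT FROM V WHERE label = 'vertexTwo');

For very large imports, issuing such statements in batches (or using a lower-level bulk-loading API) is generally faster than one round trip per edge.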

How to insert into DocumentDB from an Excel file containing 5000 records?

I have an Excel file that originally had about 200 rows, and I was able to convert the Excel file to a data table and everything got inserted into DocumentDB correctly. The Excel file now has 5000 rows; insertion stops after 30-40 records, the rest of the rows are not inserted, and I see the exception below.

Microsoft.Azure.Documents.DocumentClientException: Exception: Microsoft.Azure.Documents.RequestRateTooLargeException, message: {"Errors":["Request rate is large"]}

My code is:

Service service = new Service();
foreach(data in exceldata) …

TRY doesn't CATCH error in BULK INSERT

Why, in the following code, doesn't TRY catch the error, and how can I catch it?

BEGIN TRY
  BULK INSERT [dbo].[tblABC] FROM 'C:\temp.txt' WITH (DATAFILETYPE = 'widechar', FIELDTERMINATOR = ';', ROWTERMINATOR = '\n')
END TRY
BEGIN CATCH
  select error_message()
END CATCH

I just get this: Msg 4860, Level 16, State 1, Line 2 Cannot bulk load. The file "C:\temp.txt" does not exist.

This is one option that helps to catch this error:

BEGIN TRY
  DECLARE @cmd varchar(1000)
  SET @cmd = 'BULK INSERT [dbo].[tblABC] FROM ''C:\temp.txt'' WITH (DATAFILETYPE = ''widechar'', FIELDTERMINATOR = '';'', ROWTERMINATOR…
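
For reference, the workaround being quoted relies on running the BULK INSERT as dynamic SQL: executed directly, the missing-file error ends the batch at the same scope as the TRY block, while inside EXEC/sp_executesql it surfaces one level down, where CATCH can see it. A sketch of that shape, hedged as the general pattern rather than the original answer's exact code:

BEGIN TRY
    DECLARE @cmd nvarchar(1000) =
        N'BULK INSERT [dbo].[tblABC] FROM ''C:\temp.txt'' '
      + N'WITH (DATAFILETYPE = ''widechar'', FIELDTERMINATOR = '';'', ROWTERMINATOR = ''\n'')';
    EXEC sp_executesql @cmd;   -- the error is now raised inside the TRY scope and is catchable
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS error_number, ERROR_MESSAGE() AS error_message;
END CATCH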

SQL Import skip duplicates

I am trying to do a bulk upload into a SQL Server DB. The source file has duplicates which I want to remove, so I was hoping that the operation would automatically upload the first one, then discard the rest (I've set a unique key constraint). Problem is, the moment a duplicate upload is attempted the whole thing fails and gets rolled back. Is there any way I can just tell SQL to keep going? Try to bulk insert the data into a temporary table and then SELECT DISTINCT as @madcolor suggested, or INSERT INTO yourTable SELECT * FROM #tempTable tt WHERE NOT EXISTS (SELECT 1 FROM yourTable yt WHERE yt…
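
A fuller sketch of that staging idea, with a placeholder file path and column names (only the key column matters; adjust to the real schema):

-- sketch: stage the raw file, then copy only rows whose key is not already present
CREATE TABLE #staging (member_id INT NOT NULL, member_name VARCHAR(100) NULL);

BULK INSERT #staging
FROM 'D:\source.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

INSERT INTO dbo.yourTable (member_id, member_name)
SELECT DISTINCT s.member_id, s.member_name
FROM #staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.yourTable AS t WHERE t.member_id = s.member_id);

An alternative is to create the unique index with IGNORE_DUP_KEY = ON, which turns duplicate-key inserts into warnings instead of a rollback.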