bulkinsert

MySql Bulk insert

断了今生、忘了曾经 submitted on 2019-11-30 02:46:15
Question: I want to insert about 4K rows into a MySQL database, and I don't want to fire 4K separate 'insert' queries. Is there any way to fire only one insert query to store those 4K rows? Everywhere I searched on the internet, users are doing bulk inserts into the database from a file. In my case the data is already in memory, and I don't want to first write it to a file just to do a bulk insert, because that would add delay to the program. Answer 1: You could write a single insert query
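The answer is truncated above; a minimal sketch of the multi-row INSERT approach it hints at, written in C# with the MySql.Data connector (the table and column names are hypothetical), might look like this:

    using System.Collections.Generic;
    using System.Text;
    using MySql.Data.MySqlClient;

    static void InsertRows(MySqlConnection connection, IReadOnlyList<(int Id, string Name)> rows)
    {
        // Build one INSERT ... VALUES (...), (...), ... statement with parameters
        // instead of issuing one INSERT per row.
        var sql = new StringBuilder("INSERT INTO my_table (id, name) VALUES ");
        using var cmd = new MySqlCommand { Connection = connection };
        for (int i = 0; i < rows.Count; i++)
        {
            sql.Append(i == 0 ? "" : ", ").Append($"(@id{i}, @name{i})");
            cmd.Parameters.AddWithValue($"@id{i}", rows[i].Id);
            cmd.Parameters.AddWithValue($"@name{i}", rows[i].Name);
        }
        cmd.CommandText = sql.ToString();
        cmd.ExecuteNonQuery();   // one round trip for all rows
    }

For several thousand rows, the single statement keeps everything in memory and avoids the per-query round-trip overhead; very large batches may need to respect max_allowed_packet.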

Fastest way to insert 30 thousand rows in a temp table on SQL Server with C#

早过忘川 submitted on 2019-11-30 01:12:08
Question: I am trying to find out how to improve insert performance into a temporary table in SQL Server using C#. Some people say I should use SqlBulkCopy, but I must be doing something wrong, because it seems to run much slower than simply building an SQL insert string. My code to create the table for SqlBulkCopy is below: public void MakeTable(string tableName, List<string> ids, SqlConnection connection) { SqlCommand cmd = new SqlCommand("CREATE TABLE ##" + tableName + " (ID int)
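The snippet above is cut off; for reference, a hedged sketch of loading the IDs into a global temp table with SqlBulkCopy (the method and table shape follow the question, the rest is an assumption) could look like this:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    static void MakeTable(string tableName, List<string> ids, SqlConnection connection)
    {
        // The connection must stay open so the ## temp table remains visible.
        using (var create = new SqlCommand($"CREATE TABLE ##{tableName} (ID int)", connection))
            create.ExecuteNonQuery();

        // Stage the IDs in a DataTable; SqlBulkCopy streams it in one batch.
        var table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        foreach (var id in ids)
            table.Rows.Add(int.Parse(id));

        using var bulk = new SqlBulkCopy(connection) { DestinationTableName = "##" + tableName };
        bulk.WriteToServer(table);
    }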

efficient bulk update rails database

吃可爱长大的小学妹 submitted on 2019-11-30 01:01:36
I'm trying to build a rake utility that will update my database every so often. This is the code I have so far: namespace :utils do # utils:update_ip # Downloads the file from <url> to the temp folder then unzips it in <file_path> # Then updates the database. desc "Update ip-to-country database" task :update_ip => :environment do require 'open-uri' require 'zip/zipfilesystem' require 'csv' file_name = "ip-to-country.csv" file_path = "#{RAILS_ROOT}/db/" + file_name url = 'http://ip-to-country.webhosting.info/downloads/ip-to-country.csv.zip' #check last time we updated the database. mod_time = '

Azure documentdb bulk insert using stored procedure

随声附和 submitted on 2019-11-29 19:55:00
Question: Hi, I am using 16 collections to insert around 3-4 million JSON objects, each ranging from 5-10 KB, and I am using a stored procedure to insert these documents. I have 22 capacity units. function bulkImport(docs) { var collection = getContext().getCollection(); var collectionLink = collection.getSelfLink(); // The count of imported docs, also used as current doc index. var count = 0; // Validate input. if (!docs) throw new Error("The array is undefined or null."); var docsLength = docs.length; if
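The stored procedure above is truncated; on the client side, a hedged sketch of invoking such a bulk-import sproc from C# with the DocumentDB .NET SDK (the database, collection, sproc name, and batch size are assumptions) might look like this:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents.Client;

    static async Task BulkImportAsync(DocumentClient client, IEnumerable<object> docs)
    {
        var sprocUri = UriFactory.CreateStoredProcedureUri("mydb", "mycollection", "bulkImport");

        // Send documents in modest batches so each sproc call stays within its
        // execution-time and RU budget instead of pushing millions of docs at once.
        var batch = new List<object>();
        foreach (var doc in docs)
        {
            batch.Add(doc);
            if (batch.Count == 500)
            {
                await client.ExecuteStoredProcedureAsync<int>(sprocUri, batch);
                batch.Clear();
            }
        }
        if (batch.Count > 0)
            await client.ExecuteStoredProcedureAsync<int>(sprocUri, batch);
    }

For a partitioned collection the call would also need a RequestOptions with the target partition key.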

SQLBulkCopy or Bulk Insert

南楼画角 submitted on 2019-11-29 17:30:15
Question: I have about 6500 files totalling roughly 17 GB of data, and this is the first time I've had to move what I would call a large amount of data. The data is on a network drive, but the individual files are relatively small (7 MB at most). I'm writing a program in C#, and I was wondering whether I would notice a significant performance difference if I used BULK INSERT instead of SqlBulkCopy. The table on the server also has an extra column, so if I use BULK INSERT I'll have to use a format file
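One detail worth illustrating: SqlBulkCopy can skip the extra server-side column with explicit column mappings, which sidesteps the format file that BULK INSERT would require. A minimal sketch (table and column names are hypothetical):

    using System.Data;
    using System.Data.SqlClient;

    static void LoadFile(SqlConnection connection, DataTable fileData)
    {
        using var bulk = new SqlBulkCopy(connection)
        {
            DestinationTableName = "dbo.TargetTable",
            BatchSize = 10000
        };
        // Map only the columns present in the source file; the extra
        // destination column is simply left to its default/NULL value.
        bulk.ColumnMappings.Add("Col1", "Col1");
        bulk.ColumnMappings.Add("Col2", "Col2");
        bulk.WriteToServer(fileData);
    }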

Elasticsearch bulk insert with NEST returns es_rejected_execution_exception

天大地大妈咪最大 submitted on 2019-11-29 16:31:12
I am trying to do a bulk insert using the .NET API for Elasticsearch, and this is the error I get while performing the operation: Error {Type: es_rejected_execution_exception Reason: "rejected execution of org.elasticsearch.transport.TransportService$6@604b47a4 on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@51f4f734[Running, pool size = 4, active threads = 4, queued tasks = 50, completed tasks = 164]]" CausedBy: ""} Nest.BulkError Is it due to low space on my system, or is the bulk insert function itself not working? My
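The rejection is not about disk space: the message shows the node's bulk thread pool (4 threads, queue capacity 50) is full, so additional requests are refused. A hedged sketch of throttled bulk indexing with NEST's BulkAll helper (the index name and document type are assumptions) could look like this:

    using System;
    using System.Collections.Generic;
    using Nest;

    class MyDoc { public int Id { get; set; } }

    static void BulkIndex(ElasticClient client, IEnumerable<MyDoc> docs)
    {
        // BulkAll slices the documents into bounded batches and retries with
        // back-off when the cluster rejects a request, instead of flooding
        // the bulk queue with everything at once.
        var observable = client.BulkAll(docs, b => b
            .Index("my-index")
            .Size(1000)                    // documents per bulk request
            .MaxDegreeOfParallelism(2)     // concurrent bulk requests
            .BackOffRetries(3)
            .BackOffTime("30s"));

        observable.Wait(TimeSpan.FromMinutes(30), response =>
            Console.WriteLine($"Indexed page {response.Page}"));
    }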

Determine ROW that caused “unexpected end of file” error in BULK INSERT?

风流意气都作罢 submitted on 2019-11-29 16:23:52
Question: I am doing a bulk insert: DECLARE @row_terminator CHAR; SET @row_terminator = CHAR(10); -- or char(10) DECLARE @stmt NVARCHAR(2000); SET @stmt = ' BULK INSERT accn_errors FROM ''F:\FullUnzipped\accn_errors_201205080105.txt'' WITH ( firstrow=2, FIELDTERMINATOR = ''|'' , ROWS_PER_BATCH=10000 ,ROWTERMINATOR='''+@row_terminator+''' )' exec sp_executesql @stmt; and I am getting the following error: Msg 4832, Level 16, State 1, Line 2 Bulk load: An unexpected end of file was encountered in the data
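A common way to locate problem rows is the ERRORFILE option of BULK INSERT, which writes rejected rows to a file plus a companion .Error.Txt that records the row number and offending column. A hedged C# sketch issuing such a statement (the ERRORFILE path and MAXERRORS value are assumptions) might look like this:

    using System.Data.SqlClient;

    static void BulkLoadWithErrorFile(SqlConnection connection)
    {
        const string sql = @"
            BULK INSERT accn_errors
            FROM 'F:\FullUnzipped\accn_errors_201205080105.txt'
            WITH (
                FIRSTROW = 2,
                FIELDTERMINATOR = '|',
                ROWTERMINATOR = '0x0a',
                ROWS_PER_BATCH = 10000,
                MAXERRORS = 10,
                -- Rejected rows are copied here; the accompanying .Error.Txt
                -- records which row and column caused each failure.
                ERRORFILE = 'F:\FullUnzipped\accn_errors.bad'
            );";

        using var cmd = new SqlCommand(sql, connection) { CommandTimeout = 0 };
        cmd.ExecuteNonQuery();
    }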

How to check if enable/disable keys work?

China☆狼群 submitted on 2019-11-29 16:13:42
Question: I have a table with an indexed varchar(256) column. For a faster bulk insert, I disabled keys, inserted more than 10 million entries, and then re-enabled the keys once the insertion was done. Surprisingly, enabling/disabling the keys took no time: mysql> alter table xxx disable keys; Query OK, 0 rows affected, 1 warning (0.00 sec) mysql> alter table xxx enable keys; Query OK, 0 rows affected, 1 warning (0.00 sec) How do I ensure that enable/disable keys were working properly? Answer 1: As you guessed, InnoDB
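The truncated answer points at the explanation: DISABLE KEYS only affects non-unique indexes on MyISAM tables, and on InnoDB it is a no-op that raises the "1 warning" seen in the output. A hedged C# sketch that surfaces that warning with MySql.Data (connection setup omitted) could look like this:

    using System;
    using MySql.Data.MySqlClient;

    static void CheckDisableKeys(MySqlConnection connection)
    {
        using (var alter = new MySqlCommand("ALTER TABLE xxx DISABLE KEYS", connection))
            alter.ExecuteNonQuery();

        // If the storage engine ignored the statement (as InnoDB does),
        // SHOW WARNINGS reports something like "Table storage engine for
        // 'xxx' doesn't have this option", i.e. the keys were never disabled.
        using var warnings = new MySqlCommand("SHOW WARNINGS", connection);
        using var reader = warnings.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader["Level"]} {reader["Code"]}: {reader["Message"]}");
    }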

How to bulk insert into MySQL using C#

北城余情 submitted on 2019-11-29 14:36:57
I have previously used the SqlBulkCopy class to load data into a MS SQL Server database. The results were very good, and it worked exactly as I intended. Now I'm trying to use a script task in SSIS to bulk load data into a MySQL (5.5.8) database using either an ODBC or ADO.NET connection (which would you recommend?). The columns in my dataset correspond to the columns of the MySQL table. What is the best way to do a bulk insert of a dataset into a MySQL database? You can use the MySqlBulkLoader shipped with the MySQL Connector for .NET: var bl = new MySqlBulkLoader(connection); bl.TableName = "mytable"; bl
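The answer is cut off mid-snippet; a hedged completion of the MySqlBulkLoader approach (the CSV handling and field separators are assumptions, and this sketch does not escape embedded commas) might look like this:

    using System.Data;
    using System.IO;
    using System.Linq;
    using MySql.Data.MySqlClient;

    static void BulkLoad(MySqlConnection connection, DataTable data)
    {
        // MySqlBulkLoader wraps LOAD DATA INFILE, so the dataset is first
        // written out as a temporary CSV file and then streamed to the server.
        var tempFile = Path.GetTempFileName();
        File.WriteAllLines(tempFile, data.Rows.Cast<DataRow>()
            .Select(r => string.Join(",", r.ItemArray)));

        var bl = new MySqlBulkLoader(connection)
        {
            TableName = "mytable",
            FileName = tempFile,
            FieldTerminator = ",",
            LineTerminator = "\n",
            Local = true          // send the file from the client machine
        };
        int rows = bl.Load();
        File.Delete(tempFile);
    }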

How does BULK INSERT work internally?

大憨熊 submitted on 2019-11-29 13:45:16
Question: Could someone please explain how BULK INSERT works internally and why it is much faster than normal INSERT operations? Regards, Shishir. Answer 1: BULK INSERT runs in-process with the SQL Server database engine and thus avoids passing data through the network layer of the client API - this makes it faster than BCP and DTS/SSIS. Also, with BULK INSERT you can specify the ORDER of the data, and if this matches the clustered primary key of the table, the locking occurs at the PAGE level.
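To make the ORDER hint concrete, a hedged C# sketch issuing a BULK INSERT whose input file is pre-sorted on the clustered primary key (the table, file path, and column names are hypothetical) might look like this:

    using System.Data.SqlClient;

    static void OrderedBulkInsert(SqlConnection connection)
    {
        // Declaring that the file is already sorted on the clustered key lets
        // the engine skip an internal sort and take coarser-grained locks.
        const string sql = @"
            BULK INSERT dbo.Orders
            FROM 'C:\data\orders.csv'
            WITH (
                FIELDTERMINATOR = ',',
                ROWTERMINATOR = '\n',
                ORDER (OrderId ASC),   -- matches the clustered PK
                TABLOCK
            );";

        using var cmd = new SqlCommand(sql, connection) { CommandTimeout = 0 };
        cmd.ExecuteNonQuery();
    }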