bulkinsert

Bulk Insert Correctly Quoted CSV File in SQL Server

安稳与你 submitted on 2019-11-26 15:25:53
I'm trying to import a correctly quoted CSV file, meaning fields are only quoted if they contain a comma, e.g.:

41, Terminator, Black
42, "Monsters, Inc.", Blue

I observe that the first row imports correctly, but the second row errors in a way that suggests the quoted comma was treated as a field separator. I have seen suggestions such as this one, SQL Bulk import from CSV, to change the field terminator to FIELDTERMINATOR='","'. However, my CSV file only quotes fields that need it, so I do not believe that suggestion would work. Can SQL Server's BULK INSERT statement import a correctly quoted CSV
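On SQL Server 2017 and later, BULK INSERT can parse RFC 4180-style quoting directly via FORMAT = 'CSV'. A minimal sketch, assuming a hypothetical dbo.Films table and file path matching the three sample columns:

-- Requires SQL Server 2017+; the Films table and path are illustrative assumptions.
BULK INSERT dbo.Films
FROM 'c:\data\films.csv'
WITH (
    FORMAT = 'CSV',        -- enables quote-aware CSV parsing
    FIELDQUOTE = '"',      -- the default qualifier, shown for clarity
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);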

Accelerate bulk insert using Django's ORM?

你说的曾经没有我的故事 submitted on 2019-11-26 15:09:55
Question: I'm planning to upload a billion records taken from ~750 files (each ~250MB) to a database using Django's ORM. Currently each file takes ~20 min to process, and I was wondering if there's any way to accelerate this process. I've taken the following measures:

Use @transaction.commit_manually and commit once every 5000 records
Set DEBUG=False so that Django won't accumulate all the SQL commands in memory
The loop that runs over records in a single file is completely contained in a single function

Cannot bulk load. The file “c:\data.txt” does not exist

倖福魔咒の submitted on 2019-11-26 14:48:21
Question: I'm having a problem reading data from a text file into MS SQL. I created a text file on my c:\ drive called data.txt, but for some reason SQL Server cannot find the file. I get the error "Cannot bulk load. The file "c:\data.txt" does not exist." Any ideas? The data file (yes, I know the data looks crappy, but in the real world that's how it comes from clients):

01-04 10.338,18 0,00 597.877,06- 5 0,7500 62,278-
06-04 91.773,00 9.949,83 679.700,23- 1 0,7500 14,160-
07-04 60.648,40 149.239,36 591
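The usual cause is that the path in BULK INSERT is resolved on the machine running SQL Server, not on the client issuing the statement, so a file on the client's local c:\ drive is invisible to a remote server. A sketch of the common workaround, assuming a hypothetical share on the client machine that the SQL Server service account can read:

-- \\myworkstation\import\data.txt and dbo.ImportTable are placeholders.
BULK INSERT dbo.ImportTable
FROM '\\myworkstation\import\data.txt'
WITH (ROWTERMINATOR = '\n');   -- terminators depend on the actual file layout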

Bulk insert using stored procedure

早过忘川 submitted on 2019-11-26 14:21:44
Question: I have a query which works fine:

BULK INSERT ZIPCodes
FROM 'e:\5-digit Commercial.csv'
WITH ( FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )

but now I want to create a stored procedure for it. I have written the code below to turn it into a stored procedure:

create proc dbo.InsertZipCode
@filepath varchar(500)='e:\5-digit Commercial.csv'
as
begin
BULK INSERT ZIPCodes
FROM @filepath
WITH ( FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )
end

but it shows the error: Msg 102,
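Msg 102 is SQL Server's generic "incorrect syntax" error, raised here because BULK INSERT does not accept a variable as the file name. The usual workaround is to build the statement as dynamic SQL; a sketch under that assumption:

CREATE PROC dbo.InsertZipCode
    @filepath VARCHAR(500) = 'e:\5-digit Commercial.csv'
AS
BEGIN
    -- Assemble the statement as a string because FROM cannot take a variable.
    DECLARE @sql NVARCHAR(MAX) =
        N'BULK INSERT ZIPCodes FROM ''' + @filepath + N''' WITH (
              FIRSTROW = 2,
              FIELDTERMINATOR = '','',
              ROWTERMINATOR = ''\n''
          );';
    EXEC sp_executesql @sql;
END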

MySQL bulk load command line tool

二次信任 submitted on 2019-11-26 14:09:48
Question: Does MySQL have a bulk-load command-line tool like bcp for SQL Server and sqlldr for Oracle? I know there's a SQL command, LOAD DATA INFILE or similar, but I sometimes need to bulk load a file that is on a different box from the MySQL database. Answer 1: mysqlimport. It takes the same connection parameters as the mysql command-line shell. Make sure to use the -L flag to use a file on the local file system, otherwise it will (strangely) assume the file is on the server. There is also an analogous variant to the
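mysqlimport is essentially a command-line wrapper around LOAD DATA INFILE, so the same client-side behaviour as the -L flag is also available from SQL via the LOCAL keyword, which makes the client read the file and stream it over the connection. A sketch with placeholder table and path (local_infile typically has to be enabled on both client and server):

-- mytable and the path are placeholders; column order must match the file.
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';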

How to speed up bulk insert to MS SQL Server from CSV using pyodbc

旧巷老猫 submitted on 2019-11-26 13:05:43
Below is my code that I'd like some help with. I am having to run it over 1,300,000 rows, meaning it takes up to 40 minutes to insert ~300,000 rows. I figure bulk insert is the route to go to speed it up? Or is it because I'm iterating over the rows via the for data in reader: portion?

#Opens the prepped csv file
with open(os.path.join(newpath, outfile), 'r') as f:
    #hooks csv reader to file
    reader = csv.reader(f)
    #pulls out the columns (which match the SQL table)
    columns = next(reader)
    #trims any extra spaces
    columns = [x.strip(' ') for x in columns]
    #starts SQL statement
    query = 'bulk insert into

BULK INSERT with identity (auto-increment) column

為{幸葍}努か submitted on 2019-11-26 12:25:10
Question: I am trying to add bulk data to the database from a CSV file. The Employee table has an ID column (PK) that is auto-incremented.

CREATE TABLE [dbo].[Employee](
[id] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
[Address] [varchar](50) NULL
) ON [PRIMARY]

I am using this query:

BULK INSERT Employee
FROM 'path\tempFile.csv'
WITH (FIRSTROW = 2, KEEPIDENTITY, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

.CSV file:
Name,Address
name1,addr test 1
name2,addr test 2

but it results in this error
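Since the CSV holds only Name and Address, KEEPIDENTITY makes SQL Server look for an id value that is not in the file. One commonly suggested workaround (a sketch, not the only fix; a format file also works) is to drop KEEPIDENTITY and bulk insert through a view that exposes just the non-identity columns, letting the server generate the ids:

-- The view name is an assumption for illustration.
CREATE VIEW dbo.Employee_Import
AS
SELECT Name, Address FROM dbo.Employee;
GO

BULK INSERT dbo.Employee_Import
FROM 'path\tempFile.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');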

Use binary COPY table FROM with psycopg2

☆樱花仙子☆ submitted on 2019-11-26 12:07:52
Question: I have tens of millions of rows to transfer from multidimensional array files into a PostgreSQL database. My tools are Python and psycopg2. The most efficient way to bulk insert data is using copy_from. However, my data are mostly 32-bit floating-point numbers (real or float4), so I'd rather not convert from real → text → real. Here is an example database DDL:

CREATE TABLE num_data (
id serial PRIMARY KEY NOT NULL,
node integer NOT NULL,
ts smallint NOT NULL,
val1 real,
val2 double

How can I insert 10 million records in the shortest time possible?

吃可爱长大的小学妹 submitted on 2019-11-26 11:44:41
I have a file (which has 10 million records) like below:

line1
line2
line3
line4
.......
......
10 million lines

So basically I want to insert 10 million records into the database, so I read the file and upload it to SQL Server. C# code:

System.IO.StreamReader file = new System.IO.StreamReader(@"c:\test.txt");
while ((line = file.ReadLine()) != null)
{
    // insertion code goes here
    //DAL.ExecuteSql("insert into table1 values("+line+")");
}
file.Close();

but insertion will take a long time. How can I insert 10 million records in the shortest time possible using C#?

Update 1: Bulk INSERT: BULK
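Issuing one INSERT per line is the bottleneck here; a server-side BULK INSERT (or SqlBulkCopy from C#) avoids a round trip per row. A minimal sketch, assuming a single-column dbo.Table1 and that the c:\test.txt path from the question is visible to the server:

BULK INSERT dbo.Table1
FROM 'c:\test.txt'
WITH (
    ROWTERMINATOR = '\n',
    TABLOCK,             -- bulk-update lock, allows minimally logged inserts
    BATCHSIZE = 100000   -- commit in chunks instead of one huge transaction
);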

Bulk insert with text qualifier in SQL Server

旧巷老猫 submitted on 2019-11-26 11:37:19
Question: I am trying to bulk insert a few records into a test table from a CSV file:

CREATE TABLE Level2_import (
wkt varchar(max),
area VARCHAR(40)
)

BULK INSERT level2_import
FROM 'D:\test.csv'
WITH (
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)

The bulk insert code should get rid of the first row and insert the data into the table. It gets rid of the first row all right, but gets confused in the delimiter section. The first column is wkt and the column value is double quoted and has
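On SQL Server 2017+ the FORMAT = 'CSV' and FIELDQUOTE options handle text-qualified fields directly. On older versions, one frequently suggested workaround, sketched here on the assumption that the wkt column is always quoted, is to split on "," and strip the leftover quotes afterwards:

BULK INSERT Level2_import
FROM 'D:\test.csv'
WITH (
    FIRSTROW = 2,
    FIELDTERMINATOR = '","',   -- splits between the quoted wkt and area
    ROWTERMINATOR = '\n'
);

-- Remove the stray leading/trailing quotes left on the imported values.
UPDATE Level2_import
SET wkt  = REPLACE(wkt, '"', ''),
    area = REPLACE(area, '"', '');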