bulk

Importing a CSV file into a SQL Server table using the BULK INSERT command

笑着哭i submitted on 2019-12-02 01:38:20
I have a CSV file which has a couple of data columns. The CSV file looks like this:

    field1: Test1
    field2: Test2
    field3: Test3, Test4, Test5

In this case, what can I use as the field terminator? I mean, if I use this query to insert the CSV file into the shopifyitem table, as you can guess, the data fields are not inserted correctly:

    BULK INSERT shopifyitem
    FROM 'c:\test.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    )

So which field terminator can I use? Thank you so much in advance....

I don't think you're going to be able to import that format without some type of pre-processing. As Aaron implied,
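For reference, if the values that contain commas are quoted in the source file and the server is SQL Server 2017 or later, BULK INSERT's CSV mode can cope with embedded commas. A minimal sketch, reusing the shopifyitem table and c:\test.csv path from the question and assuming double-quoted fields:

    BULK INSERT shopifyitem
    FROM 'c:\test.csv'
    WITH (
        FORMAT = 'CSV',          -- parse the file as quoted CSV (SQL Server 2017+)
        FIELDQUOTE = '"',        -- values with embedded commas must be wrapped in double quotes
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );

If the file is not quoted, pre-processing it (or exporting with a terminator that never appears in the data, such as a pipe) remains the practical answer, as the reply above suggests.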

How to use elasticsearch.helpers.streaming_bulk

爷,独闯天下 submitted on 2019-12-01 16:34:33
Can someone advise how to use the function elasticsearch.helpers.streaming_bulk instead of elasticsearch.helpers.bulk for indexing data into Elasticsearch? If I simply swap streaming_bulk in for bulk, nothing gets indexed, so I guess it needs to be used in a different form. The code below creates the index and type and indexes data from a CSV file into Elasticsearch in chunks of 500 elements. It works properly, but I am wondering whether it is possible to increase performance; that's why I want to try out the streaming_bulk function. Currently I need 10 minutes to index 1 million rows from a 200 MB CSV document. I use
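A minimal sketch of how streaming_bulk is typically consumed: unlike bulk, it returns a generator of per-document results, so nothing is sent until you iterate over it. The index name, the gen_actions() helper, and the chunk size below are assumptions for illustration:

    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import streaming_bulk

    es = Elasticsearch()

    def gen_actions():
        # Hypothetical action generator; replace with your CSV reader.
        for i in range(1000):
            yield {"_index": "documents", "_source": {"row": i}}

    # streaming_bulk yields (ok, result) tuples; iterating is what drives the indexing.
    success = 0
    for ok, result in streaming_bulk(es, gen_actions(), chunk_size=500):
        if ok:
            success += 1
    print("indexed %d documents" % success)

Older client versions (the ones contemporary with this question) may also expect a "_type" key in each action dict.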

What's a clean, standard way to get multiple transactions in an EJB?

ぃ、小莉子 submitted on 2019-12-01 09:38:41
I have a batchEdit(List<E> entity) that calls an edit(E entity) function in a loop, where each edit() has its own transaction so that failed edits don't roll back the good ones. I currently have it implemented like so:

Option 1

    @Stateless
    @TransactionManagement( value = TransactionManagementType.CONTAINER )
    public class Service<E> {
        @Resource
        private SessionContext context;

        @Override
        @TransactionAttribute( value = TransactionAttributeType.REQUIRES_NEW )
        public E edit( E entity ) {
            //edit
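One commonly suggested pattern for this setup is to route the loop's calls through the bean's business interface (for example via SessionContext.getBusinessObject) so the container actually applies REQUIRES_NEW to each edit(); a direct this.edit(...) call would bypass the transaction interceptor. A minimal sketch along the lines of the bean above; the batchEdit body and the error handling are assumptions:

    import java.util.List;

    import javax.annotation.Resource;
    import javax.ejb.SessionContext;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.ejb.TransactionManagement;
    import javax.ejb.TransactionManagementType;

    @Stateless
    @TransactionManagement(TransactionManagementType.CONTAINER)
    public class Service<E> {

        @Resource
        private SessionContext context;

        public void batchEdit(List<E> entities) {
            // Look up our own business interface so REQUIRES_NEW is honoured per call.
            @SuppressWarnings("unchecked")
            Service<E> self = context.getBusinessObject(Service.class);
            for (E entity : entities) {
                try {
                    self.edit(entity);              // each call runs in its own transaction
                } catch (RuntimeException e) {
                    // Log and continue: only this entity's transaction rolls back.
                }
            }
        }

        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public E edit(E entity) {
            // merge/persist the entity here
            return entity;
        }
    }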

MS SQL Server - Bulk Insert Across a Network

此生再无相见时 submitted on 2019-12-01 07:38:05
I have an application that uses MS SQL Server, and I'll need to do a bulk insert from a file. The sticking point is that the database and my application will be hosted on separate servers. What is the best way to do a bulk insert across a network? Two ideas I've come up with so far:

- From the app server, share a directory that the db server can find, and do the import using a BULK INSERT statement from the remote file.
- Run an FTP server on the db server; when the import is performed, simply FTP the file to the db server and do the import using a BULK INSERT from the local file. (I am
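For the shared-directory option, BULK INSERT can read straight from a UNC path, as long as the account SQL Server runs under (or the caller, if delegation is configured) can read the share. A minimal sketch with a hypothetical share name and target table:

    BULK INSERT dbo.ImportTarget
    FROM '\\appserver\imports\data.csv'    -- UNC path to the share on the app server
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        TABLOCK                            -- take a bulk table lock; helps large loads
    );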

How can I easily bulk rename files with Perl?

99封情书 submitted on 2019-11-30 21:40:18
I have a lot of files I'm trying to rename. I tried to make a regular expression to match them, but even that I got stuck on. The files are named like:

    File Name 01
    File Name 100
    File Name 02
    File Name 03

etc. I would like to add a "0" (zero) to any file numbered less than 100, like this:

    File Name 001
    File Name 100
    File Name 002
    File Name 003

The closest I got to so much as matching them was using this:

    find -type d | sort -r | grep ' [1-9][0-9]$'

However, I could not figure out how to replace them. Thanks in advance for any help you can offer me. I'm on CentOS if that is of any help, all
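A minimal Perl sketch of one way to do the renaming, assuming the files sit in the current directory and end in a run of digits as in the listing above (the script zero-pads that trailing number to three digits):

    #!/usr/bin/perl
    use strict;
    use warnings;

    opendir(my $dh, '.') or die "Cannot open directory: $!";
    for my $old (readdir $dh) {
        next unless -e $old && $old =~ /^(.*\D)(\d+)$/;    # names ending in digits
        my $new = sprintf('%s%03d', $1, $2);               # pad the number to 3 digits
        next if $new eq $old;                              # "File Name 100" is left alone
        rename $old, $new or warn "Could not rename '$old' to '$new': $!";
    }
    closedir $dh;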

Bulk delete (truncate vs delete)

蹲街弑〆低调 submitted on 2019-11-30 18:55:43
We have a table with 150+ million records. We need to clear/delete all rows. A DELETE operation would take forever because it writes to the transaction log, and we cannot change the recovery model for the whole DB. We have tested the TRUNCATE TABLE option. What we realized is that TRUNCATE deallocates pages from the table and, if I am not wrong, makes them available for reuse, but it doesn't shrink the DB automatically. So, if we want to reduce the DB size, we would really need to run the shrink-DB command after truncating the table. Is this the normal procedure? Anything we need to be careful or aware about, or
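For reference, a minimal T-SQL sketch of the sequence described above, with a hypothetical table and data-file name; note that shrinking is usually done sparingly, since it fragments indexes and the file may simply grow again on the next load:

    TRUNCATE TABLE dbo.BigTable;          -- deallocates the table's pages; minimally logged

    -- Only if you really need the disk space back at the file level:
    DBCC SHRINKFILE (MyDb_Data, 1024);    -- shrink the logical data file to ~1024 MB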

Improving performance when sending bulk emails through spring-mail

旧城冷巷雨未停 submitted on 2019-11-30 10:01:45
I have a standalone Spring application which uses simple Spring email code as below; the "to" address and the message are constructed from values iterated out of a map. I have already had some suggestions on the question here, but I am in need of some specific advice for this. Below is my code:

    for (Map.Entry<String, List<values>> entry : testMap.entrySet()) {
        String key = entry.getKey();
        StringBuilder htmlBuilder = new StringBuilder();
        List<Model> valueList = entry.getValue();
        for (Model value :
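One way this is often sped up is to build every MimeMessage first and hand the whole batch to JavaMailSender in a single send(...) call, so one SMTP connection is reused instead of reconnecting per mail. A minimal sketch, assuming the HTML bodies have already been assembled per recipient and that a JavaMailSender bean is configured; the subject line is a placeholder:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    import javax.mail.internet.MimeMessage;

    import org.springframework.mail.javamail.JavaMailSender;
    import org.springframework.mail.javamail.MimeMessageHelper;

    public class BulkMailer {

        private final JavaMailSender mailSender;

        public BulkMailer(JavaMailSender mailSender) {
            this.mailSender = mailSender;
        }

        public void sendAll(Map<String, String> htmlByRecipient) throws Exception {
            List<MimeMessage> messages = new ArrayList<>();
            for (Map.Entry<String, String> entry : htmlByRecipient.entrySet()) {
                MimeMessage message = mailSender.createMimeMessage();
                MimeMessageHelper helper = new MimeMessageHelper(message, "UTF-8");
                helper.setTo(entry.getKey());
                helper.setSubject("Report");               // placeholder subject
                helper.setText(entry.getValue(), true);    // true = HTML body
                messages.add(message);
            }
            // One call, one SMTP connection for the whole batch.
            mailSender.send(messages.toArray(new MimeMessage[0]));
        }
    }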

Building a bulk mail sender [closed]

泪湿孤枕 submitted on 2019-11-30 07:41:32
I want to build an application that will allow my customers to send marketing information by e-mail. This will be a carefully monitored tool used for legitimate bulk mailing only. It's going to have all of the necessary 'unsubscribe' functionality, etc. The solution will be built using VB.NET. My question relates to the best way to actually send the e-mails. We have an SMTP server in our data centre which we can use. I'm thinking we could write some kind of multi-threaded Windows service to monitor a database of e-mails to send, then make calls to the System.Net.Mail API to send through this
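A minimal VB.NET sketch of the sending side of such a service, reusing one System.Net.Mail.SmtpClient per batch; the SMTP host name and the LoadPendingMessages helper are assumptions standing in for the database queue:

    Imports System.Collections.Generic
    Imports System.Net.Mail

    Module BulkSendSketch

        Sub SendPending()
            ' SmtpClient implements IDisposable from .NET 4.0 onward.
            Using client As New SmtpClient("smtp.datacentre.local")
                For Each message As MailMessage In LoadPendingMessages()
                    Try
                        client.Send(message)
                    Catch ex As SmtpException
                        ' Log and continue so one bad address does not stop the batch.
                    End Try
                Next
            End Using
        End Sub

        Private Function LoadPendingMessages() As List(Of MailMessage)
            ' Placeholder: read the queued e-mails from the database here.
            Return New List(Of MailMessage)()
        End Function

    End Module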

How do you upload data in bulk to Google App Engine Datastore?

纵饮孤独 submitted on 2019-11-30 07:32:06
I have about 4000 records that I need to upload to Datastore. They are currently in CSV format. I'd appreciate it if someone would point me to, or explain, how to upload data in bulk to GAE.

You can use the bulkloader.py tool: the bulkloader.py tool included with the Python SDK can upload data to your application's datastore. With just a little bit of set-up, you can create new datastore entities from CSV files.

I don't have the perfect solution, but I suggest you have a go with the App Engine Console. App Engine Console is a free plugin that lets you run an interactive Python interpreter in your
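For the bulkloader.py route, the usual setup is a small loader module that maps CSV columns, in order, to datastore properties. A minimal sketch for a hypothetical "Record" kind with two string columns, following the classic Loader pattern from the old Python SDK docs:

    # loader.py -- bulkloader configuration for a hypothetical "Record" kind.
    from google.appengine.tools import bulkloader

    class RecordLoader(bulkloader.Loader):
        def __init__(self):
            # Each tuple maps one CSV column to a datastore property and a converter.
            bulkloader.Loader.__init__(self, 'Record',
                                       [('name', str),
                                        ('value', str)])

    loaders = [RecordLoader]

It is then invoked with something like bulkloader.py --config_file=loader.py --filename=records.csv --kind=Record --url=... (exact flags vary by SDK version).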