duplicates

SQL Import skip duplicates

限于喜欢 submitted on 2019-12-10 10:19:16
Question: I am trying to do a bulk upload into a SQL Server DB. The source file has duplicates which I want to remove, so I was hoping that the operation would automatically upload the first one and discard the rest (I've set a unique key constraint). The problem is that the moment a duplicate upload is attempted, the whole thing fails and gets rolled back. Is there any way I can just tell SQL to keep going?

Answer 1: Try to bulk insert the data into a temporary table and then SELECT DISTINCT as @madcolor…
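The staging-table approach the answer starts to describe can be sketched in a few lines. This is a minimal model in SQLite (table and column names are made up for illustration, not taken from the question):

```python
import sqlite3

# The idea: load everything into a staging table with no constraints,
# then copy only distinct rows into the constrained table, so a single
# duplicate cannot fail and roll back the whole load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (email TEXT)")
conn.execute("CREATE TABLE users (email TEXT UNIQUE)")

# The raw file, duplicates and all, goes into staging without error.
rows = [("a@x.com",), ("b@x.com",), ("a@x.com",)]
conn.executemany("INSERT INTO staging (email) VALUES (?)", rows)

# Only one copy of each row reaches the table with the unique constraint.
conn.execute("INSERT INTO users (email) SELECT DISTINCT email FROM staging")

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```

In SQL Server the same shape works with BULK INSERT into the staging table followed by an INSERT ... SELECT DISTINCT.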

R counting the occurrences of similar rows of data frame

社会主义新天地 submitted on 2019-12-10 09:56:26
Question: I have data in the following format called DF (this is just a made-up, simplified sample):

eval.num  eval.count  fitness  fitness.mean  green.h.0  green.v.0  offset.0  random
1         1           1500     1500          100        120        40        232342
2         2           1000     1250          100        120        40        11843
3         3           1250     1250          100        120        40        981340234
4         4           1000     1187.5        100        120        40        4363453
5         1           2000     2000          200        100        40        345902
6         1           3000     3000          150        90         10        943
7         1           2000     2000          90         90         100       9304358
8         2           1800     1900          90         90         100       284333

However, the eval.count column is incorrect and I need to fix it. It…
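The question is cut off before the intended rule is stated, but the sample data suggests that eval.count restarts at 1 whenever the parameter columns (green.h.0, green.v.0, offset.0) change, and fitness.mean is the running mean within each such run. Under that assumption, a sketch of the recomputation (in Python rather than R, for illustration):

```python
from itertools import groupby

# A cut-down version of DF: (green.h.0, green.v.0, offset.0, fitness).
rows = [
    (100, 120, 40, 1500),
    (100, 120, 40, 1000),
    (200, 100, 40, 2000),
    (150,  90, 10, 3000),
    ( 90,  90, 100, 2000),
    ( 90,  90, 100, 1800),
]

# Rebuild eval.count as a counter that restarts whenever the parameter
# columns change, and fitness.mean as the running mean inside each run.
fixed = []
for _, run in groupby(rows, key=lambda r: r[:3]):
    total = 0.0
    for count, row in enumerate(run, start=1):
        total += row[3]
        fixed.append((count, total / count))

print(fixed[:2])  # [(1, 1500.0), (2, 1250.0)]
```

In R the same grouping is typically done with ave() or dplyr's group_by() plus row_number() and cummean().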

local postgres db keeps giving error duplicate key value violates unique constraint

我的未来我决定 submitted on 2019-12-10 09:35:58
Question: I don't understand why Postgres is raising:

duplicate key value violates unique constraint

I went to check the table in pgAdmin to see if the table really did have a duplicate and saw: "Running VACUUM recommended. The estimated rowcount on the table deviates significantly from the actual rowcount." Why is this happening? Luckily it doesn't seem to happen in production on Heroku. It's a Rails app.

Update: Here is the SQL log:

SQL (2.6ms) INSERT INTO "favorites" ("artist_id", "author_id", "created…
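The question is truncated before any diagnosis, but a classic cause of this exact symptom on an id column is a sequence that has fallen behind max(id), typically after rows were inserted or restored with explicit ids. A toy model of that failure mode, using SQLite and a plain iterator as a stand-in for the Postgres sequence:

```python
import sqlite3

# Toy model: some rows were inserted with explicit ids, bypassing the
# sequence, so the sequence's next value collides with an existing row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE favorites (id INTEGER PRIMARY KEY)")

sequence = iter(range(1, 1000))  # stand-in for a Postgres sequence

conn.execute("INSERT INTO favorites VALUES (?)", (next(sequence),))  # id 1
conn.execute("INSERT INTO favorites VALUES (3)")  # explicit id, sequence unaware
conn.execute("INSERT INTO favorites VALUES (?)", (next(sequence),))  # id 2

collided = False
try:
    # The sequence's next value is 3, which already exists.
    conn.execute("INSERT INTO favorites VALUES (?)", (next(sequence),))
except sqlite3.IntegrityError:
    collided = True
print(collided)  # True
```

If this is indeed the cause, the usual Postgres fix is to resynchronise the sequence, e.g. `SELECT setval(pg_get_serial_sequence('favorites', 'id'), max(id)) FROM favorites;` (assuming the default serial setup).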

Java - How to check for duplicate characters in a string?

泪湿孤枕 submitted on 2019-12-10 09:03:35
Question: I need to write a function that checks a string for duplicate values and returns the count of unique characters. If the count is greater than 3, it should return true. If the count is less than 3, it should return false. Here is what I have been trying (note that I'm new to Java):

private boolean isFormatValid(String password) {
    CharSequence inputStr = password;
    int length = inputStr.length();
    int numberDups = 0;
    for (int i = 0; i < length; ++i) {
        Pattern pattern = Pattern.compile("(.)(?=.*?\1){1,20}");…
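The quoted code compiles a regex on every loop iteration to hunt for repeats, but counting distinct characters does not need regular expressions at all: put the characters in a set and take its size. A sketch in Python of the rule as stated (the count == 3 case is left unspecified by the question; this sketch treats only counts above 3 as valid):

```python
def is_format_valid(password: str) -> bool:
    # Number of distinct characters, regardless of how often each repeats.
    unique_count = len(set(password))
    return unique_count > 3

print(is_format_valid("aabbccdd"))  # True  (4 unique characters)
print(is_format_valid("aabbaa"))    # False (2 unique characters)
```

The Java equivalent is `new HashSet<Character>()` filled from `password.toCharArray()`, then a comparison on its `size()`.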

How does Windows Azure Service Bus Queues Duplicate Detection work?

巧了我就是萌 submitted on 2019-12-10 02:45:07
Question: I know that you can set duplicate detection to work over a time period with an Azure Service Bus queue. However, does anyone know whether this works based on the objects in the queue? So if I have an object with an id of "SO_1" which gets put on the queue and is subsequently consumed, is the duplicate detection still valid? What I think I'm asking is: is it the timeframe and the object, or just the timeframe, that makes the queue decide what is a duplicate?

Answer 1: http://blog.iquestgroup.com/en…
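For context: Service Bus duplicate detection compares the MessageId property of incoming messages against ids seen within the configured detection window; the message body is not inspected, and whether the earlier message has already been consumed does not matter. A toy model of that behaviour:

```python
class DedupWindow:
    """Toy model of time-windowed duplicate detection: a message is a
    duplicate if its id was accepted within the last `window_seconds`
    seconds; the message body is never compared."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.seen = {}  # message id -> time it was last accepted

    def accept(self, message_id: str, now: float) -> bool:
        last = self.seen.get(message_id)
        if last is not None and now - last < self.window:
            return False  # duplicate: same id inside the window
        self.seen[message_id] = now
        return True

q = DedupWindow(window_seconds=60)
print(q.accept("SO_1", now=0))    # True: first time seen
print(q.accept("SO_1", now=10))   # False: same id inside the window
print(q.accept("SO_1", now=120))  # True: the window has passed
```

So the answer to the question is "the timeframe and the id together": a second "SO_1" inside the window is dropped even after the first was consumed, and accepted again once the window expires.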

ON DUPLICATE KEY UPDATE with WHERE condition

Deadly submitted on 2019-12-09 16:51:12
Question: I update/insert values in a single table with the ON DUPLICATE KEY UPDATE function. So far everything is fine:

INSERT INTO table1 SET field1=aa, field2=bb, field3=cc
ON DUPLICATE KEY UPDATE field1=aa, field2=bb, field3=cc;

But now I would like the update to be done only if a condition (WHERE) is true. This is syntactically not correct:

INSERT INTO table1 SET field1=aa, field2=bb, field3=cc
ON DUPLICATE KEY UPDATE field1=aa, field2=bb, field3=cc WHERE field4=zz;

Any ideas how…
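MySQL's ON DUPLICATE KEY UPDATE clause accepts no WHERE; the usual workaround is to fold the condition into the assignment, e.g. `ON DUPLICATE KEY UPDATE field1 = IF(field4 = zz, VALUES(field1), field1)`, which leaves the column untouched when the condition is false. SQLite's upsert, by contrast, does accept a WHERE on the DO UPDATE branch, which makes the intent easy to demonstrate (hypothetical table and columns, requires SQLite 3.24+):

```python
import sqlite3  # bundled SQLite must be >= 3.24 for upsert syntax

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT PRIMARY KEY, val INTEGER, locked INTEGER)")
conn.execute("INSERT INTO t VALUES ('a', 1, 1), ('b', 1, 0)")

# On conflict, apply the update only where the extra condition holds.
sql = """INSERT INTO t (k, val, locked) VALUES (?, 99, 0)
         ON CONFLICT(k) DO UPDATE SET val = excluded.val
         WHERE t.locked = 0"""
conn.execute(sql, ("a",))  # conflict, but 'a' is locked: no update
conn.execute(sql, ("b",))  # conflict, 'b' is unlocked: val becomes 99

print(conn.execute("SELECT k, val FROM t ORDER BY k").fetchall())
# [('a', 1), ('b', 99)]
```

The IF()-per-column trick is less tidy than a real WHERE, but it is the standard MySQL answer to this question.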

How to remove duplicates from a file and write to the same file?

余生长醉 submitted on 2019-12-09 15:14:23
Question: I know my title is not very self-explanatory, but let me try to explain it here. I have a file named test.txt which has some duplicate lines. What I want to do is remove those duplicate lines and at the same time update test.txt with the new content.

test.txt:
AAAA
BBBB
AAAA
CCCC

I know I can use sort -u test.txt to remove the duplicates, but to update the file with the new content, how do I redirect its output to the same file? The command below doesn't work:

sort -u test.txt > test.txt

So, why…
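The redirection fails because the shell truncates test.txt to zero bytes before sort ever reads it. The shell-level fixes are `sort -u -o test.txt test.txt` (sort reads all input before opening the -o output) or writing to a temporary file and moving it over the original. The temp-file pattern, sketched in Python (also keeping the first occurrence of each line in original order, which `sort -u` does not):

```python
import os
import tempfile

def dedupe_in_place(path: str) -> None:
    seen = set()
    # Write to a temp file in the same directory, then atomically
    # replace the original, so the input is never truncated mid-read.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    with os.fdopen(fd, "w") as out, open(path) as src:
        for line in src:
            if line not in seen:       # keep only the first occurrence
                seen.add(line)
                out.write(line)
    os.replace(tmp, path)              # atomic on POSIX

with open("test.txt", "w") as f:
    f.write("AAAA\nBBBB\nAAAA\nCCCC\n")
dedupe_in_place("test.txt")
print(open("test.txt").read().split())  # ['AAAA', 'BBBB', 'CCCC']
```

Writing the temp file into the same directory matters: os.replace is only atomic within one filesystem.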

Integrity constraint violation: 1062 Duplicate entry '1' for key 'PRIMARY'

≡放荡痞女 submitted on 2019-12-09 14:59:56
Question: I have a database problem where I get "Integrity constraint violation: 1062". I tried some things on my own but it didn't work, so now I am asking you guys to see if you can help me out.

elseif ($action == 'add') {
    if ($_POST['create'] == true) {
        $title = $_POST['txtTitle'];
        $txtParentCategorie = $_POST['txtParentCategorie'];
        $txtContent = $_POST['txtContent'];
        if ($txtParentCategorie == "niks") {
            $txtParentCategorie = NULL;
            $chkParent = 1;
            $order_count = countQuery("SELECT categorieID FROM…
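The PHP is cut off before the INSERT, but the countQuery() call hints that a key or ordering value is being derived from a row count. If the primary key is computed that way, a "Duplicate entry '1' for key 'PRIMARY'" is guaranteed as soon as a row is ever deleted (or two requests race): the count drops and the "next" id is handed out again. That failure mode, sketched in SQLite (the count-based scheme here is an educated guess, not confirmed by the truncated code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (categorieID INTEGER PRIMARY KEY)")

def next_id(conn):
    # Fragile scheme: derive the next primary key from the row count.
    return conn.execute("SELECT COUNT(*) FROM categories").fetchone()[0] + 1

conn.execute("INSERT INTO categories VALUES (?)", (next_id(conn),))  # id 1
conn.execute("INSERT INTO categories VALUES (?)", (next_id(conn),))  # id 2
conn.execute("DELETE FROM categories WHERE categorieID = 1")

collided = False
try:
    # COUNT(*) is 1 again, so next_id() hands out 2 -- already taken.
    conn.execute("INSERT INTO categories VALUES (?)", (next_id(conn),))
except sqlite3.IntegrityError:
    collided = True
print(collided)  # True
```

If this matches the real code, the fix is to let the database assign the key (AUTO_INCREMENT in MySQL) and keep any display ordering in a separate, non-unique column.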

MySQL Duplicate error with ALTER IGNORE TABLE

佐手、 submitted on 2019-12-09 13:11:21
Question: I have a table in my MySQL database with duplicates. I am trying to delete the duplicates and keep one entry. I don't have a primary key. I can find the duplicates with:

select user_id, server_id, count(*) as NumDuplicates
from user_server
group by user_id, server_id
having NumDuplicates > 1

But I can't delete them with:

ALTER IGNORE TABLE `user_server` ADD UNIQUE INDEX (`user_id`, `server_id`);

Even SET foreign_key_checks = 0; is not working.

Error Code: 1062. Duplicate entry '142-20' for key 'user_id_3' MySQL…
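The IGNORE clause of ALTER TABLE was deprecated and then removed in later MySQL versions, and it was never reliably honoured on InnoDB, which would explain the 1062 error here. A version-independent alternative is to delete the extra rows first (keeping one per group) and only then add the unique index. Demonstrated in SQLite, whose implicit rowid plays the role MySQL would need an added auto-increment column for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_server (user_id INTEGER, server_id INTEGER)")
conn.executemany("INSERT INTO user_server VALUES (?, ?)",
                 [(142, 20), (142, 20), (142, 20), (1, 5)])

# Keep exactly one row per (user_id, server_id): the lowest rowid.
conn.execute("""
    DELETE FROM user_server
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM user_server
                        GROUP BY user_id, server_id)
""")

# With the duplicates gone, the unique index is created without error.
conn.execute("CREATE UNIQUE INDEX idx ON user_server (user_id, server_id)")
print(conn.execute("SELECT COUNT(*) FROM user_server").fetchone()[0])  # 2
```

In MySQL, since the table has no usable row identifier, the common recipes are adding a temporary AUTO_INCREMENT column to play the rowid role, or copying into a fresh table with INSERT IGNORE after the unique index exists.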

How do I do an integer list intersection while keeping duplicates?

故事扮演 submitted on 2019-12-09 10:06:59
Question: I'm working on a greatest common factor and least common multiple assignment and I have to list the common factors. Intersection() won't work because that removes duplicates. Contains() won't work because if it sees the int in the second list it returns all matching ints from the first list. Is there a way to do an intersection that is not distinct?

Edit: sorry for not providing an example; here is what I meant. If I have the sets:

{1, 2, 2, 2, 3, 3, 4, 5}
{1, 1, 2, 2, 3, 3, 3, 4, 4}

I would…
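What the question describes is a multiset intersection: each value appears min(count in A, count in B) times. Shown here in Python, where Counter's & operator does exactly that; the same minimum-of-counts idea ports directly to the C# of the original question:

```python
from collections import Counter

a = [1, 2, 2, 2, 3, 3, 4, 5]
b = [1, 1, 2, 2, 3, 3, 3, 4, 4]

# Counter & Counter keeps the minimum count of each element,
# i.e. a multiset intersection that preserves duplicates.
common = sorted((Counter(a) & Counter(b)).elements())
print(common)  # [1, 2, 2, 3, 3, 4]
```

In C# the equivalent is to group each list by value, then for each value present in both groups emit it Math.Min(countA, countB) times.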