duplicates

Removing duplicates from a SQL query (not just “use distinct”)

╄→尐↘猪︶ㄣ submitted on 2019-11-29 06:01:00
It's probably simple. Here is my query:

    SELECT DISTINCT U.NAME, P.PIC_ID
    FROM USERS U, PICTURES P, POSTINGS P1
    WHERE U.EMAIL_ID = P1.EMAIL_ID
      AND P1.PIC_ID = P.PIC_ID
      AND P.CAPTION LIKE '%car%';

but this only removes duplicates where a row has both the same U.NAME and the same P.PIC_ID. I want it so that if a name appears more than once, the extra rows are left out entirely. It's a strange requirement, but in general, how can I apply DISTINCT to a single column of the SELECT clause? Answer: arbitrarily choose to keep the minimum PIC_ID, and avoid the implicit join syntax:

    SELECT U.NAME, MIN(P.PIC_ID)
    FROM USERS U
    JOIN POSTINGS P1 ON U.EMAIL_ID = P1.EMAIL_ID
    JOIN PICTURES P ON P1.PIC_ID = P.PIC_ID
    WHERE P.CAPTION LIKE '%car%'
    GROUP BY U.NAME;
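The keep-the-minimum idea is language-agnostic. As a rough illustration (the names and sample rows below are invented, not from the original question), the same grouping can be sketched in Python:

```python
# Rows of (name, pic_id) as the joined query would return them,
# possibly with several pictures per user (sample data is invented).
rows = [("alice", 7), ("alice", 3), ("bob", 5), ("alice", 9), ("bob", 2)]

# Keep one row per name: the one with the minimum pic_id,
# mirroring SELECT U.NAME, MIN(P.PIC_ID) ... GROUP BY U.NAME.
min_pic = {}
for name, pic_id in rows:
    if name not in min_pic or pic_id < min_pic[name]:
        min_pic[name] = pic_id

print(sorted(min_pic.items()))  # [('alice', 3), ('bob', 2)]
```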

Check for and return duplicates in an array in PHP

假装没事ソ submitted on 2019-11-29 05:31:23
I would like to check whether my array has any duplicates and return the duplicated values in an array, as efficiently as possible. Example: given $array = array(1,2,2,4,5), returndup($array) should return 2; if the array is array(1,2,1,2,5), it should return an array with 1 and 2. The initial array is always 5 positions long. Answer: this will be ~100 times faster than array_diff:

    $dups = array();
    foreach (array_count_values($arr) as $val => $c) {
        if ($c > 1) $dups[] = $val;
    }

Alternatively, you can get the difference between the original array and a copy without duplicates using array_unique and array_diff_assoc.
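The array_count_values approach translates directly to other languages: count occurrences, then keep values seen more than once. A Python sketch of the same idea:

```python
from collections import Counter

def returndup(arr):
    # Equivalent of PHP's array_count_values: value -> occurrence count.
    # Values whose count exceeds 1 are the duplicates.
    return [val for val, c in Counter(arr).items() if c > 1]

print(returndup([1, 2, 2, 4, 5]))   # [2]
print(returndup([1, 2, 1, 2, 5]))   # [1, 2]
```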

How can I use ON DUPLICATE KEY UPDATE in PDO with MySQL?

浪子不回头ぞ submitted on 2019-11-29 05:12:42
DETAILS: I am doing a single insert for the expiry of a new or renewed licence. The expiry period is 2 years from the insertion date. If a duplicate is detected, the entry should be updated so that the expiry equals the remaining expiry plus 2 years. Regarding duplicates: in the example below there should only ever be one row containing user_id = 55 and licence = commercial.

TABLE: licence_expiry

    | user_id | licence    | expiry              |
    |---------|------------|---------------------|
    | 55      | commercial | 2013-07-04 05:13:48 |
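The semantics the question wants from ON DUPLICATE KEY UPDATE — one row per (user_id, licence), extend the expiry on conflict — can be mimicked with a dict keyed on the unique columns. This is an illustrative sketch of the behaviour, not the PDO code the question asks for, and it approximates "2 years" as 730 days:

```python
from datetime import datetime, timedelta

# Table emulated as a dict keyed by the unique key (user_id, licence);
# the value is the expiry timestamp.
table = {}

def upsert_licence(user_id, licence, now):
    key = (user_id, licence)
    if key in table:
        # Duplicate key: remaining expiry plus 2 years (approx. 730 days).
        table[key] = table[key] + timedelta(days=730)
    else:
        # New row: expiry is 2 years from the insertion date.
        table[key] = now + timedelta(days=730)

now = datetime(2013, 7, 4, 5, 13, 48)
upsert_licence(55, "commercial", now)
upsert_licence(55, "commercial", now)   # updates in place, never a second row
print(len(table))  # 1
```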

Duplicate removal

假装没事ソ submitted on 2019-11-29 05:06:08
Let's be honest, this is a homework question. The question in its entirety: implement a duplicate-removal algorithm for a one-dimensional array in C++/Java in O(n) time with no extra space. For example, if the input array is {3,5,5,3,7,8,5,8,9,9}, the output should be {3,5,7,8,9}. I have thought about it for quite a while and haven't been able to solve it yet. My thoughts: I could remove duplicates in O(n) if the array were sorted, but the fastest general-purpose sorting algorithm I know runs in O(n log n). One algorithm that sorts in O(n) is bin or bucket sort. The problem here is that it
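For reference, if the no-extra-space constraint is relaxed, an O(n) duplicate removal that preserves first-occurrence order is straightforward; a sketch in Python rather than the C++/Java the homework asks for:

```python
def dedup(arr):
    # O(n) time using O(n) extra space: a set of values already seen.
    # The no-extra-space requirement in the homework is what makes it hard.
    seen = set()
    out = []
    for v in arr:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

print(dedup([3, 5, 5, 3, 7, 8, 5, 8, 9, 9]))  # [3, 5, 7, 8, 9]
```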

Deduplicate Git forks on a server

妖精的绣舞 submitted on 2019-11-29 04:36:20
Is there a way to hard-link all the duplicate objects in a folder containing multiple Git repositories? Explanation: I am hosting a Git server on my company's server (a Linux machine). The idea is to have a main canonical repository to which users do not have push access; instead, every user forks the canonical repository (clones it into the user's home directory, which actually creates hard links).

    /canonical/Repo
    /Dev1/Repo   (objects hard-linked to /canonical/Repo when initially cloned)
    /Dev2/Repo   (objects hard-linked to /canonical/Repo when initially cloned)

This all works

R, conditionally remove duplicate rows

主宰稳场 submitted on 2019-11-29 04:22:26
I have a data frame in R containing the columns ID.A, ID.B and DISTANCE, where DISTANCE represents the distance between ID.A and ID.B. For each value (1 to n) of ID.A there may be multiple values of ID.B and DISTANCE (i.e. ID.A may contain duplicate rows, e.g. several rows with the value 4, each with a different ID.B and distance). I would like to remove rows where ID.A is duplicated, but conditionally on the distance value, so that I am left with the smallest distance for each ID.A record. Hopefully that makes sense? Many thanks in advance. EDIT Hopefully an
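The underlying operation — keep the row with the smallest DISTANCE per ID.A — sketched here in Python rather than R (the sample rows are invented), just to make the logic explicit:

```python
# Rows as (id_a, id_b, distance); sample values are invented.
rows = [(4, 10, 2.5), (4, 11, 1.2), (4, 12, 3.0), (7, 10, 0.9)]

best = {}  # id_a -> (id_b, distance) with the smallest distance seen so far
for id_a, id_b, dist in rows:
    if id_a not in best or dist < best[id_a][1]:
        best[id_a] = (id_b, dist)

# One row per ID.A, keeping the minimum-distance ID.B.
result = [(a, b, d) for a, (b, d) in sorted(best.items())]
print(result)  # [(4, 11, 1.2), (7, 10, 0.9)]
```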

git finding duplicate commits (by patch-id)

无人久伴 submitted on 2019-11-29 04:09:08
I'd like a recipe for finding duplicated changes: the patch-id is likely to be the same, but the commit attributes may not be. This seems to be an intended use of patch-id (from git patch-id --help: "IOW, you can use this thing to look for likely duplicate commits"). I imagine that stringing together git log, git patch-id and uniq could do the job badly, but if someone has a command that does the job well, I'd appreciate it. Answer: because the duplicate changes are likely not on the same branch (except when there are reverts between them), you could use git cherry:

    git cherry [-v] [<upstream> [<head>
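The log/patch-id/uniq pipeline the question imagines boils down to grouping commit hashes by their patch-id and reporting any group of size greater than one. The grouping step, with made-up ids for illustration:

```python
from collections import defaultdict

# (patch_id, commit) pairs, as emitted line-by-line by
# `git log -p | git patch-id`; the ids below are made up.
pairs = [
    ("p1", "c1"),
    ("p2", "c2"),
    ("p1", "c3"),  # same patch-id as c1: likely a duplicated change
]

by_patch = defaultdict(list)
for patch_id, commit in pairs:
    by_patch[patch_id].append(commit)

dupes = {p: cs for p, cs in by_patch.items() if len(cs) > 1}
print(dupes)  # {'p1': ['c1', 'c3']}
```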

C# remove duplicates from List<List<int>>

℡╲_俬逩灬. submitted on 2019-11-29 03:35:45
I'm having trouble coming up with the most efficient algorithm to remove duplicates from a List<List<int>>. For example (I know this looks like a list of int[]; I'm just writing it that way for visual purposes):

    my_list[0] = {1, 2, 3};
    my_list[1] = {1, 2, 3};
    my_list[2] = {9, 10, 11};
    my_list[3] = {1, 2, 3};

So the output would just be:

    new_list[0] = {1, 2, 3};
    new_list[1] = {9, 10, 11};

Let me know if you have any ideas; I would really appreciate it. Answer: build a custom EqualityComparer<List<int>>:

    public class CusComparer : IEqualityComparer<List<int>>
    {
        public bool Equals(List<int> x, List<int> y)
        {
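The comparer-based answer is C#-specific, but the structural idea carries over: compare inner lists by value, not by reference. A Python sketch, where tuples serve as hashable stand-ins for the inner lists:

```python
def dedup_lists(list_of_lists):
    # Tuples are hashable where lists are not, so equal sequences
    # collapse to one set entry; first occurrence order is preserved.
    seen = set()
    out = []
    for inner in list_of_lists:
        key = tuple(inner)
        if key not in seen:
            seen.add(key)
            out.append(inner)
    return out

my_list = [[1, 2, 3], [1, 2, 3], [9, 10, 11], [1, 2, 3]]
print(dedup_lists(my_list))  # [[1, 2, 3], [9, 10, 11]]
```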

Prevent duplicates in the database in a many-to-many relationship

泄露秘密 submitted on 2019-11-29 03:30:01
Question: I'm working on the back office of a restaurant's website. When I add a dish, I can add ingredients in two ways. In my form template I manually added a text input field and applied jQuery UI's autocomplete method to it, which allows: selecting existing ingredients (previously added), or adding new ingredients. However, when I submit the form, every ingredient is inserted into the database (normal behaviour, you will tell me). For ingredients that do not exist yet this is fine, but I don't want to
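A common way to keep an ingredients table duplicate-free in this situation is a get-or-create lookup before inserting the dish-ingredient link (typically backed by a UNIQUE constraint on the ingredient name at the database level). A hypothetical sketch of the pattern, with the table emulated as a dict:

```python
ingredients = {}   # name -> id, standing in for the ingredients table
next_id = [1]

def get_or_create(name):
    # Reuse the existing row if the ingredient was added before;
    # insert a new row only for genuinely new (normalized) names.
    key = name.strip().lower()
    if key not in ingredients:
        ingredients[key] = next_id[0]
        next_id[0] += 1
    return ingredients[key]

a = get_or_create("Tomato")
b = get_or_create("tomato ")   # normalized to the same existing row
print(a == b)  # True
```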

Merge multiple data tables with duplicate column names

≡放荡痞女 submitted on 2019-11-29 02:52:31
Question: I am trying to merge (join) multiple data tables (obtained with fread from 5 CSV files) into a single data table. I get an error when I try to merge 5 data tables, but it works fine when I merge only 4. MWE below:

    # example data
    DT1 <- data.table(x = letters[1:6], y = 10:15)
    DT2 <- data.table(x = letters[1:6], y = 11:16)
    DT3 <- data.table(x = letters[1:6], y = 12:17)
    DT4 <- data.table(x = letters[1:6], y = 13:18)
    DT5 <- data.table(x = letters[1:6], y = 14:19)

    # this gives an error
    Reduce
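The trouble with folding merges over tables that all carry a column named y is that the intermediate results accumulate duplicate column names; giving each table's value column a distinct name (or suffix) before folding avoids the clash. The shape of the fix, sketched in Python over dicts instead of data.tables (values chosen to mirror the MWE):

```python
from functools import reduce

# Five "tables": each maps key x -> value y (only two keys shown).
tables = [
    {"a": 10, "b": 11},
    {"a": 11, "b": 12},
    {"a": 12, "b": 13},
    {"a": 13, "b": 14},
    {"a": 14, "b": 15},
]

def merge(acc, nxt):
    # Inner join on x; each table's y lands in its own slot of the list,
    # which is what giving the columns distinct names achieves in R.
    return {k: acc[k] + [nxt[k]] for k in acc if k in nxt}

merged = reduce(merge, tables[1:], {k: [v] for k, v in tables[0].items()})
print(merged)  # {'a': [10, 11, 12, 13, 14], 'b': [11, 12, 13, 14, 15]}
```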