duplicates

Both dup and clone return different objects, but modifying them alters the original object

Submitted by 元气小坏坏 on 2019-12-05 05:52:57
I have an array of values that I use as a reference for order when I'm printing out hash values. I'd like to modify the array so that the array values are "prettier". I figured I'd just `dup` or `clone` the array, change the values, and the original object would remain unchanged. However (in irb):

```ruby
@arr = ['stuff', 'things']
a = @arr.clone
b = @arr.dup
```

So, at the very least, `a` and `@arr` are different objects:

```ruby
a.object_id == @arr.object_id  # => false
```

But now it gets strange:

```ruby
a[0].capitalize!
a     # => ['Stuff', 'things']
@arr  # => ['Stuff', 'things']  <- what?
b     # => ['Stuff', 'things']  <- what???
```

ok... so
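What the irb session shows is that `dup` and `clone` make a *shallow* copy: the new array is a different object, but it holds references to the same string elements, so an in-place mutation like `capitalize!` is visible through every copy. A minimal sketch of a fix is to copy the elements as well:

```ruby
# Shallow copy: arr.dup shares the same string objects.
# Copy each element too, so mutating one array leaves the other alone.
arr = ['stuff', 'things']
pretty = arr.map(&:dup)   # new array AND new strings

pretty[0].capitalize!
p pretty  # => ["Stuff", "things"]
p arr     # => ["stuff", "things"]  (unchanged)
```

For arbitrarily nested structures, `Marshal.load(Marshal.dump(obj))` is a common deep-copy idiom, at the cost of only working for marshalable objects.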

R remove duplicate elements in character vector, not duplicate rows

Submitted by 狂风中的少年 on 2019-12-05 04:32:27
I am hitting a brick wall with this problem. I have a data frame (`dates`) with some document ids and dates stored in a character vector:

```
  Doc   Dates
1 12345 c("06/01/2000", "08/09/2002")
2 23456 c("07/01/2000", "09/08/2003", "07/01/2000")
3 34567 c("09/06/2004", "09/06/2004", "12/30/2006")
4 45678 c("06/01/2000", "08/09/2002")
```

I am trying to remove the duplicate elements in Dates to get this result:

```
  Doc   Dates
1 12345 c("06/01/2000", "08/09/2002")
2 23456 c("07/01/2000", "09/08/2003")
3 34567 c("09/06/2004", "12/30/2006")
4 45678 c("06/01/2000", "08/09/2002")
```

I have tried:

```r
unique(dates$dates)
```

but
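A sketch of one fix, assuming `dates$Dates` is a list-column of character vectors: `unique()` applied to the whole column compares entire vectors against each other, so it must instead be applied element-wise, inside each row's vector:

```r
# Deduplicate within each row's character vector, not across rows.
# lapply walks the list-column and unique() drops repeats per element.
dates$Dates <- lapply(dates$Dates, unique)
```

`unique()` keeps the first occurrence of each value, which matches the desired output shown above.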

Removing duplicate field entries in SQL

Submitted by 淺唱寂寞╮ on 2019-12-05 04:29:59
Question: Is there any way I can erase all the duplicate entries from a certain table (`users`)? Here is a sample of the type of entries I have. The table `users` consists of 3 fields: `ID`, `user`, and `pass`.

```php
mysql_query("DELETE FROM users WHERE ???") or die(mysql_error());
```

```
randomtest
randomtest
randomtest
nextfile
baby
randomtest
dog
anothertest
randomtest
baby
nextfile
dog
anothertest
randomtest
randomtest
```

I want to be able to find the duplicate entries, and then delete all of the duplicates,
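One standard approach is to keep, for each duplicated `(user, pass)` pair, the row with the smallest `ID` and delete the rest. A sketch using SQLite so it is self-contained (the column names come from the question; the sample passwords are made up):

```python
# Demonstrates the keep-the-lowest-ID dedup pattern on an in-memory DB.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (ID INTEGER PRIMARY KEY, user TEXT, pass TEXT)")
cur.executemany(
    "INSERT INTO users (user, pass) VALUES (?, ?)",
    [("randomtest", "pw"), ("randomtest", "pw"),
     ("baby", "pw"), ("randomtest", "pw")],
)

# Delete every row whose ID is not the minimum for its (user, pass) group.
cur.execute("""
    DELETE FROM users
    WHERE ID NOT IN (SELECT MIN(ID) FROM users GROUP BY user, pass)
""")
conn.commit()
print(cur.execute("SELECT user FROM users ORDER BY ID").fetchall())
# [('randomtest',), ('baby',)]
```

Note for MySQL: it refuses to modify a table that the same statement also selects from, so the subquery usually has to be wrapped in a derived table, e.g. `WHERE ID NOT IN (SELECT * FROM (SELECT MIN(ID) FROM users GROUP BY user, pass) AS keep_ids)`.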

Reducing duplicate characters in a string to a given minimum

Submitted by 陌路散爱 on 2019-12-05 04:19:37
Question: I was messing around with the first question here: Reduce duplicate characters to a desired minimum, and am looking for more elegant answers than what I came up with. It passes the tests, but I'm curious to see other solutions. The sample tests are:

```javascript
reduceString('aaaabbbb', 2)     // 'aabb'
reduceString('xaaabbbb', 2)     // 'xaabb'
reduceString('aaaabbbb', 1)     // 'ab'
reduceString('aaxxxaabbbb', 2)  // 'aaxxaabb'
```

and my solution (that passes these tests):

```javascript
reduceString = function(str, amount) {
  var count = 0;
  var result
```
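One compact alternative, sketched here with a back-reference regex rather than a counting loop: match any character followed by `amount` or more repeats of itself, and replace the whole run with exactly `amount` copies.

```javascript
// Collapse runs of a repeated character down to `amount` occurrences.
function reduceString(str, amount) {
  // (.)\1{amount,} = a char plus at least `amount` more of the same char,
  // i.e. any run longer than `amount`. Replace the run with `amount` copies.
  const re = new RegExp(`(.)\\1{${amount},}`, 'g');
  return str.replace(re, '$1'.repeat(amount));
}

console.log(reduceString('aaaabbbb', 2));    // 'aabb'
console.log(reduceString('aaxxxaabbbb', 2)); // 'aaxxaabb'
```

The regex approach trades the explicit counter for the regex engine's run detection; for very large `amount` values the repeated `'$1'` replacement string grows linearly, which is rarely a concern in practice.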

inline function in namespace generate duplicate symbols during link on gcc

Submitted by 这一生的挚爱 on 2019-12-05 03:38:29
I have a namespace with inline functions that will be used in several source files. When trying to link my application, the inline functions are reported as duplicate symbols. It seems as if my code simply does not inline the functions, and I was wondering whether this is the expected behavior and how best to deal with it. I use the following gcc options:

```
-g -Wextra -pedantic -Wmissing-field-initializers -Wredundant-decls -Wfloat-equal -Wno-reorder -Wno-long-long
```

The same code seems to compile and link properly when built in a VC7 environment. The following code example shows the structure of

Using Linq to find duplicates but get the whole record

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-05 03:23:16
So I am using this code

```csharp
var duplicates = mg.GroupBy(i => new { i.addr1, i.addr2 })
                   .Where(g => g.Count() > 1)
                   .Select(g => g.Key);
GridView1.DataSource = duplicates;
GridView1.DataBind();
```

to find and list the duplicates in a table based on `addr1` and `addr2`. The only problem with this code is that it only gives me the pairs of `addr1` and `addr2` that are duplicated, when I actually want to display all the fields of the records (all the fields, like `ID`, `addr1`, `addr2`, `city`, `state`, ...). Any ideas? To get all values, you can use `ToList()` on `IGrouping`:

```csharp
var duplicates = mg.GroupBy(i => new { i.addr1, i.addr2
```
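A sketch of the full-record version: keep the grouping and the `Count() > 1` filter, but flatten each duplicate group back into its member records with `SelectMany` instead of projecting only the key (`mg` and `GridView1` are the names from the question):

```csharp
// Project every full record in each duplicate group, not just the key.
var duplicates = mg
    .GroupBy(i => new { i.addr1, i.addr2 })
    .Where(g => g.Count() > 1)
    .SelectMany(g => g)   // flatten groups back to the original records
    .ToList();

GridView1.DataSource = duplicates;
GridView1.DataBind();
```

Each `IGrouping` is itself a sequence of the original elements, so `SelectMany(g => g)` yields the complete rows (ID, addr1, addr2, city, state, ...) for binding.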

removing duplicates of a list of sets

Submitted by 痴心易碎 on 2019-12-05 00:29:06
I have a list of sets:

```python
L = [set([1, 4]), set([1, 4]), set([1, 2]), set([1, 2]), set([2, 4]), set([2, 4]),
     set([5, 6]), set([5, 6]), set([3, 6]), set([3, 6]), set([3, 5]), set([3, 5])]
```

(actually in my case a conversion of a list of reciprocal tuples) and I want to remove duplicates to get:

```python
L = [set([1, 4]), set([1, 2]), set([2, 4]), set([5, 6]), set([3, 6]), set([3, 5])]
```

But if I try:

```python
>>> list(set(L))
TypeError: unhashable type: 'set'
```

Or:

```python
>>> list(np.unique(L))
TypeError: cannot compare sets using cmp()
```

How do I get a list of sets with distinct sets? The best way is to convert your sets to
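The `TypeError` arises because mutable `set` objects are unhashable and so cannot themselves be members of a set. The immutable `frozenset` is hashable, which suggests this sketch: dedupe via `frozenset` keys while preserving first-seen order.

```python
# Sets can't go in a set, but frozensets can: use them as dedup keys.
L = [{1, 4}, {1, 4}, {1, 2}, {1, 2}, {2, 4}, {2, 4},
     {5, 6}, {5, 6}, {3, 6}, {3, 6}, {3, 5}, {3, 5}]

seen = set()
unique_sets = []
for s in L:
    key = frozenset(s)        # hashable, equal iff same members
    if key not in seen:
        seen.add(key)
        unique_sets.append(s)

print(unique_sets)  # [{1, 4}, {1, 2}, {2, 4}, {5, 6}, {3, 6}, {3, 5}]
```

If order does not matter, the one-liner `list(map(set, set(map(frozenset, L))))` does the same job, but the loop above keeps the original ordering shown in the desired output.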

How to eliminate duplicate list entries in Python while preserving case-sensitivity?

Submitted by  ̄綄美尐妖づ on 2019-12-05 00:09:54
Question: I'm looking for a way to remove duplicate entries from a Python list, but with a twist: the final list has to be case-sensitive, with a preference for uppercase words. For example, between `cup` and `Cup` I only need to keep `Cup`, not `cup`. Unlike other common solutions, which suggest using `lower()` first, I'd prefer to maintain the string's case here, and in particular I'd prefer keeping the one with the uppercase letter over the one which is lowercase. Again, I am trying to turn this list: [Hello,
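A sketch of one way to do this (the sample words beyond `Hello` are made up, since the question's list is truncated): group words by their lowercase form, and within each group keep the variant that sorts first. In ASCII, uppercase letters sort before lowercase, so `'Cup' < 'cup'` and the capitalized form wins.

```python
# Dedupe case-insensitively, preferring the uppercase variant,
# while preserving the order in which each word group first appears.
words = ['Hello', 'hello', 'cup', 'Cup', 'world']

best = {}     # lowercase key -> preferred original-case variant
order = []    # first-seen order of keys
for w in words:
    k = w.lower()
    if k not in best:
        order.append(k)
        best[k] = w
    elif w < best[k]:          # 'Cup' < 'cup' in ASCII, so 'Cup' replaces 'cup'
        best[k] = w

result = [best[k] for k in order]
print(result)  # ['Hello', 'Cup', 'world']
```

If the preference rule is more subtle than "uppercase wins" (e.g. prefer title case over all-caps), replace the `w < best[k]` comparison with an explicit ranking function.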

How to compare 2 lists and merge them in Python/MySQL?

Submitted by 橙三吉。 on 2019-12-04 21:18:25
I want to merge data. Following are my MySQL tables. I want to use Python to traverse a list of both kinds of rows (one with dupe = 'x' and the other with null dupes). This is sample data; the actual data is humongous. For instance:

```
a b c d e f  key  dupe
----------------------
1 d c f k l   1    x
2 g h j       1
3 i h u u     2
4 u r t       2    x
```

From the above sample table, the desired output is:

```
a b c d e f  key  dupe
----------------------
2 g c h k j   1
3 i r h u u   2
```

What I have so far:

```python
import string, os, sys
import MySQLdb
from EncryptedFile import EncryptedFile
enc = EncryptedFile( os.getenv("HOME") + '/.py-encrypted-file
```
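The sample output suggests the merge rule: within each `key`, keep the row without the dupe flag and fill its missing columns from the flagged row. A sketch of that per-row logic (the dict shapes and the `None`-for-missing convention are assumptions; real code would fetch these rows via MySQLdb):

```python
# Merge a keeper row with its flagged duplicate: the keeper's values win,
# and the duplicate only fills columns the keeper is missing.
def merge_rows(keeper, dupe):
    return {col: keeper[col] if keeper[col] is not None else dupe[col]
            for col in keeper}

# key = 1 from the sample: row 2 is the keeper, row 1 carries dupe='x'.
keeper = {'a': 2, 'b': 'g', 'c': None, 'd': 'h', 'e': None, 'f': 'j'}
dupe   = {'a': 1, 'b': 'd', 'c': 'c',  'd': 'f',  'e': 'k', 'f': 'l'}

print(merge_rows(keeper, dupe))
# {'a': 2, 'b': 'g', 'c': 'c', 'd': 'h', 'e': 'k', 'f': 'j'}
```

This reproduces the sample's first output row (`2 g c h k j`); applying the same function per `key` group, then writing the merged rows back, covers the rest.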

Binary search if array contains duplicates

Submitted by 随声附和 on 2019-12-04 21:14:57
Hi, what is the index of the search key if we search for 24 in the following array using binary search?

```java
array = [10, 20, 21, 24, 24, 24, 24, 24, 30, 40, 45]
```

I have a doubt about binary search: how does it work if an array has duplicate values? Can anybody clarify? The array you proposed has the target value at the middle index, and the most efficient implementations will return this value before the first level of recursion. This implementation would return 5 (the middle index). To understand the algorithm, just step through the code in a debugger.

```java
public class BinarySearch {
    public static
```
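With duplicates, a plain binary search may return *any* index holding the target; for this array it happens to land on index 5 on the very first probe. To get a deterministic answer, bias the search left after each hit so it finds the first occurrence. A sketch (class and method names are illustrative):

```java
public class FirstOccurrence {
    // Binary search that returns the index of the FIRST occurrence of
    // target, or -1 if absent. On a hit, we record the index and keep
    // searching the left half for an earlier copy.
    static int firstIndexOf(int[] a, int target) {
        int lo = 0, hi = a.length - 1, result = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // overflow-safe midpoint
            if (a[mid] == target) {
                result = mid;
                hi = mid - 1;               // bias left
            } else if (a[mid] < target) {
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] array = {10, 20, 21, 24, 24, 24, 24, 24, 30, 40, 45};
        System.out.println(firstIndexOf(array, 24)); // 3
    }
}
```

Mirroring the comparison (`lo = mid + 1` on a hit) gives the last occurrence instead; the two together bound the whole run of duplicates in O(log n).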