duplicates

Reducing duplicate characters in a string to a given minimum

Submitted by 十年热恋 on 2019-12-03 20:39:08
I was messing around with the first question here: Reduce duplicate characters to a desired minimum, and am looking for more elegant answers than what I came up with. It passes the tests, but I'm curious to see other solutions. The sample tests are:

```javascript
reduceString('aaaabbbb', 2)    // 'aabb'
reduceString('xaaabbbb', 2)    // 'xaabb'
reduceString('aaaabbbb', 1)    // 'ab'
reduceString('aaxxxaabbbb', 2) // 'aaxxaabb'
```

and my solution (that passes these tests):

```javascript
reduceString = function(str, amount) {
  var count = 0;
  var result = '';
  for (var i = 0; i < str.length; i++) {
    if (str[i] === str[i + 1]) {
      count++;
      if (count < amount) {
        result += str[i];
      }
    } else {
      count = 0;
      result += str[i];
    }
  }
  return result;
};
```
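For comparison, the same run-capping logic can be sketched in Python. The function name `reduce_string` and the use of `itertools.groupby` are my own choices, not from the original question:

```python
from itertools import groupby

def reduce_string(s, amount):
    # groupby splits the string into runs of identical characters;
    # each run is truncated to at most `amount` copies.
    return ''.join(ch * min(len(list(run)), amount) for ch, run in groupby(s))

print(reduce_string('aaxxxaabbbb', 2))  # -> aaxxaabb
```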

Removing duplicate field entries in SQL

Submitted by 馋奶兔 on 2019-12-03 20:33:45
Is there any way I can erase all the duplicate entries from a certain table (users)? I should say the users table consists of 3 fields: ID, user, and pass.

```php
mysql_query("DELETE FROM users WHERE ???") or die(mysql_error());
```

Here is a sample of the type of entries I have:

```
randomtest
randomtest
randomtest
nextfile
baby
randomtest
dog
anothertest
randomtest
baby
nextfile
dog
anothertest
randomtest
randomtest
```

I want to be able to find the duplicate entries, then delete all of the duplicates and leave one.

Answer: You can solve it with only one query. If your table has the following structure: CREATE …
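A sketch of the keep-one-row technique, shown with SQLite from Python rather than MySQL/PHP so it can run standalone. The table and column names come from the question; the GROUP BY/MIN(ID) pattern is a common approach and not necessarily the one the truncated answer was about to give:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (ID INTEGER PRIMARY KEY, user TEXT, pass TEXT)")
rows = [('randomtest', 'x'), ('randomtest', 'x'), ('nextfile', 'y'),
        ('baby', 'z'), ('randomtest', 'x'), ('baby', 'z')]
conn.executemany("INSERT INTO users (user, pass) VALUES (?, ?)", rows)

# Delete every row whose ID is not the smallest ID for its user/pass pair,
# which leaves exactly one copy of each duplicate group.
conn.execute("""
    DELETE FROM users
    WHERE ID NOT IN (SELECT MIN(ID) FROM users GROUP BY user, pass)
""")
remaining = [r[0] for r in conn.execute("SELECT user FROM users ORDER BY ID")]
print(remaining)  # -> ['randomtest', 'nextfile', 'baby']
```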

iOS extensions with multiple targets

Submitted by 戏子无情 on 2019-12-03 18:44:04
Question: In iOS 8, when we create a new extension, we have to decide which target it is attached to. The extension will have the same bundle ID prefix as that target. Is there any way to change the target afterward? If my project contains 2 (or more) targets (for example, one for debug/simulator and one for production/device), what's the best way to work with extensions? Do I need to create another extension and duplicate the code (very bothersome, since I'd have to keep the same code in sync for both targets)?

Answer 1: To share …

Only select first row of repeating value in a column in SQL

Submitted by 为君一笑 on 2019-12-03 17:18:01
Question: I have a table with a column that may repeat the same value in a burst, like this:

```
+----+---------+
| id | Col1    |
+----+---------+
|  1 | 6050000 |
|  2 | 6050000 |
|  3 | 6050000 |
|  4 | 6060000 |
|  5 | 6060000 |
|  6 | 6060000 |
|  7 | 6060000 |
|  8 | 6060000 |
|  9 | 6050000 |
| 10 | 6000000 |
| 11 | 6000000 |
+----+---------+
```

Now …
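The excerpt cuts off before the answer, but "first row of each consecutive burst" can be sketched outside SQL as a Python pass over the sample rows (in SQL this is typically done with a `LAG` window function; the tuple layout here is my own):

```python
from itertools import groupby

rows = [(1, 6050000), (2, 6050000), (3, 6050000), (4, 6060000), (5, 6060000),
        (6, 6060000), (7, 6060000), (8, 6060000), (9, 6050000),
        (10, 6000000), (11, 6000000)]

# groupby over consecutive equal Col1 values; take the first row of each run.
# Note id 9 survives: its 6050000 burst is separate from the one at ids 1-3.
firsts = [next(run) for _, run in groupby(rows, key=lambda r: r[1])]
print(firsts)  # -> [(1, 6050000), (4, 6060000), (9, 6050000), (10, 6000000)]
```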

Rails clone copy or duplicate

Submitted by 一世执手 on 2019-12-03 16:59:08
Question: I have a nested form, and once I save it, I want to be able to click a link on the show page to copy or clone that form and open a new one. From there I should be able to make edits (like a new id) and save it as a new record. I have seen some examples like the deep_cloneable gem, but I have no idea how to implement it. I think this should be simple, but I just don't understand where to put things in the controller and in the show view.

Answer 1: If you want to copy an ActiveRecord object, you can use …
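The usual Rails call here is `record.dup`, which returns an unsaved copy with the primary key cleared. A rough, hypothetical Python analog (treating a record as a plain dict; names are invented for illustration) shows the idea of copying every attribute except the id, then editing before saving:

```python
# Hypothetical record fetched from a database.
record = {'id': 42, 'title': 'original', 'body': 'text'}

# Copy everything except the primary key, mimicking ActiveRecord's dup.
clone = {k: v for k, v in record.items() if k != 'id'}
clone['title'] = 'edited copy'  # edits made before saving as a new record

print(clone)  # -> {'title': 'edited copy', 'body': 'text'}
```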

Detecting duplicate files

Submitted by 丶灬走出姿态 on 2019-12-03 14:47:03
Question: I'd like to detect duplicate files in a directory tree. When two identical files are found, only one of the duplicates will be preserved; the remaining duplicates will be deleted to save disk space. "Duplicate" means files having the same content, which may differ in file name and path. I was thinking about using hash algorithms for this purpose, but there is a chance that different files have the same hashes, so I need some additional mechanism to tell me that the files aren't the …
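The hash-then-verify idea the question is circling can be sketched in Python: group files by digest, then settle the collision worry with a byte-for-byte comparison. Function name, chunk size, and the demo file names are my own choices:

```python
import hashlib
import os
import filecmp
import tempfile
from collections import defaultdict

def find_duplicates(root):
    """Group files under root by SHA-256, then confirm byte-for-byte."""
    by_hash = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 16), b''):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    groups = []
    for paths in by_hash.values():
        # filecmp.cmp(shallow=False) compares actual contents, answering
        # the question's worry about distinct files sharing a hash.
        confirmed = [p for p in paths[1:] if filecmp.cmp(paths[0], p, shallow=False)]
        if confirmed:
            groups.append([paths[0]] + confirmed)
    return groups

with tempfile.TemporaryDirectory() as d:
    for name, data in [('a.txt', b'same'), ('b.txt', b'same'), ('c.txt', b'other')]:
        with open(os.path.join(d, name), 'wb') as f:
            f.write(data)
    groups = find_duplicates(d)

print(len(groups))  # -> 1 (a.txt and b.txt form the only duplicate group)
```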

How to eliminate duplicate list entries in Python while preserving case-sensitivity?

Submitted by 自作多情 on 2019-12-03 14:38:42
I'm looking for a way to remove duplicate entries from a Python list, but with a twist: the final list has to be case-sensitive, with a preference for uppercase words. For example, between cup and Cup I only need to keep Cup, not cup. Unlike other common solutions, which suggest using lower() first, I'd prefer to maintain the string's case here, and in particular I'd prefer to keep the variant with the uppercase letter over the lowercase one. Again, I am trying to turn this list: [Hello, hello, world, world, poland, Poland] into this: [Hello, world, Poland]. How should I do that? Thanks in …
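One possible approach (the function name is mine): key each word by its lowercase form in an insertion-ordered dict, and overwrite only when the stored variant is all-lowercase and the new one is not:

```python
def dedupe_prefer_upper(words):
    seen = {}  # lowercase form -> preferred spelling, in first-seen order
    for w in words:
        key = w.lower()
        # Replace a stored all-lowercase variant with a capitalized one.
        if key not in seen or (seen[key].islower() and not w.islower()):
            seen[key] = w
    return list(seen.values())

print(dedupe_prefer_upper(['Hello', 'hello', 'world', 'world', 'poland', 'Poland']))
# -> ['Hello', 'world', 'Poland']
```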

MySQL on duplicate key update

Submitted by 蹲街弑〆低调 on 2019-12-03 13:59:18
If I have a query like this, how can I refer to values I have already given in the INSERT, so that I don't need to enter the same data again? For example, I would like to update the col1 value with 'xxx', but as written I need to repeat 'xxx' in the duplicate-key clause. Is there any way to refer to those values in the ON DUPLICATE KEY UPDATE statement?

```sql
INSERT INTO TABLENAME (col1, col2)
VALUES ('xxx', 'yyy')
ON DUPLICATE KEY UPDATE col1 = 'zzz'
```

Answer (Joshua Martell): This should work and is a little more elegant:

```sql
INSERT INTO TABLENAME (col1, col2)
VALUES ('xxx', 'yyy')
ON DUPLICATE KEY UPDATE col1 = VALUES(col1)
```

Note that you don…
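The same idea can be demonstrated runnably with SQLite (3.24+), whose `excluded.col` plays the role of MySQL's `VALUES(col)`. The table and data here are invented, and I update col2 rather than col1 since col1 serves as the key in this sketch:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (col1 TEXT PRIMARY KEY, col2 TEXT)")
conn.execute("INSERT INTO t VALUES ('xxx', 'old')")

# excluded.col2 names the value the conflicting INSERT tried to write,
# so 'yyy' is not repeated in the UPDATE clause.
conn.execute("""
    INSERT INTO t (col1, col2) VALUES ('xxx', 'yyy')
    ON CONFLICT(col1) DO UPDATE SET col2 = excluded.col2
""")
value = conn.execute("SELECT col2 FROM t WHERE col1 = 'xxx'").fetchone()[0]
print(value)  # -> yyy
```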

How to find and remove duplicate objects in a collection using LINQ?

Submitted by 雨燕双飞 on 2019-12-03 13:16:30
I have a simple class representing an object. It has 5 properties (a date, 2 decimals, an integer and a string). I have a collection class, derived from CollectionBase, which is a container class for holding multiple objects of my first class. My question is: I want to remove duplicate objects (e.g. objects that have the same date, same decimals, same integer and same string). Is there a LINQ query I can write to find and remove duplicates, or at least find them?

Answer: You can remove duplicates using the Distinct operator. There are two overloads: one uses the default equality comparer …
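A Python analog of `Distinct()` with the default comparer, using a frozen dataclass so instances compare and hash by value. The class and field names are invented to mirror the question's five properties:

```python
from dataclasses import dataclass
from datetime import date

# frozen=True makes instances hashable and equal field-by-field, much like
# value equality under LINQ's default equality comparer for this purpose.
@dataclass(frozen=True)
class Record:
    day: date
    amount: float
    rate: float
    count: int
    label: str

items = [
    Record(date(2019, 12, 3), 1.5, 0.1, 10, 'a'),
    Record(date(2019, 12, 3), 1.5, 0.1, 10, 'a'),  # exact duplicate
    Record(date(2019, 12, 4), 2.0, 0.2, 5, 'b'),
]
distinct = list(dict.fromkeys(items))  # order-preserving analog of Distinct()
print(len(distinct))  # -> 2
```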

How to find duplicate filenames (recursively) in a given directory? BASH

Submitted by 隐身守侯 on 2019-12-03 13:03:29
Question: I need to find every duplicate filename in a given directory tree. I don't know what directory tree the user will give as a script argument, so I don't know the directory hierarchy. I tried this:

```sh
#!/bin/sh
find -type f | while IFS= read vo
do
  echo `basename "$vo"`
done
```

but that's not really what I want. It finds only one duplicate and then ends, even if there are more duplicate filenames; also, it doesn't print the whole path (only the filename) or the duplicate count. I wanted to do something similar to …
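A Python sketch of what the script seems to be after: walk the tree, count basenames, and report the full paths of any name that occurs more than once (the function name and demo layout are my own):

```python
import os
import tempfile
from collections import Counter

def duplicate_filenames(root):
    # Count each basename across the whole tree, remembering full paths.
    counts = Counter()
    paths = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            counts[name] += 1
            paths.setdefault(name, []).append(os.path.join(dirpath, name))
    # Keep only names that appear in two or more places.
    return {name: paths[name] for name, n in counts.items() if n > 1}

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, 'sub'))
    for p in ['dup.txt', os.path.join('sub', 'dup.txt'), 'unique.txt']:
        open(os.path.join(d, p), 'w').close()
    dupes = duplicate_filenames(d)

print({name: len(ps) for name, ps in dupes.items()})  # -> {'dup.txt': 2}
```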