duplicates

Remove multiple characters from a list if they are next to each other in Scheme

て烟熏妆下的殇ゞ submitted on 2019-12-01 20:01:42
I have to make a DrRacket program that removes a letter from a list when it immediately follows the same letter. For example, (z z f a b b d d) would become (z f a b d). I have written code for this, but all it does is remove the first letter from the list. Can anyone help?

#lang racket
(define (remove-duplicates x)
  (cond ((null? x) '())
        ((member (car x) (cons (car (cdr x)) '())))
        (remove-duplicates (cdr x))
        (else (cons (car x) (remove-duplicates (cdr x))))))
(define x '(b c c d d a a))
(remove-duplicates x)

(define (remove-dups x)
  (cond [(empty? x) '()]
        [(empty? (cdr x)) (list (car x))]
        [
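For reference, the adjacent-duplicate removal the question is after (keep one element from each run of equal neighbors) can be sketched in Python rather than Racket; the function name is illustrative, not from the original post:

```python
def remove_adjacent_duplicates(items):
    """Keep one element from each run of equal adjacent elements."""
    result = []
    for item in items:
        # Only append when the item differs from the last one kept.
        if not result or result[-1] != item:
            result.append(item)
    return result

print(remove_adjacent_duplicates(["z", "z", "f", "a", "b", "b", "d", "d"]))
# ['z', 'f', 'a', 'b', 'd']
```

The same shape translates directly to the recursion the asker attempted: keep the head when it differs from the next element, otherwise skip it.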

crunch/resource packaging with aapt in ant build uses cache from other projects

半世苍凉 submitted on 2019-12-01 20:00:56
I have two Android apps using a common library. Each project defines its own background images for the splash screen and a few others. These images have the same names in both apps. When I build/run from Eclipse, each app uses the correct background images. However, when I run my Ant build file, the contents are mixed up when packaging resources, and the same images are used for both applications. I am sure there is a cache somewhere that I need to clear, but I can't find it (running on Mac OS X Lion). I tried the -f option of aapt, but the problem persists. Does anybody know how to fix this?

Duplicate documents on _id (in mongo)

北城以北 submitted on 2019-12-01 19:08:59
I have a sharded mongo collection with over 1.5 million documents. I use the _id field as a shard key, and the values in this field are integers (rather than ObjectIds). I do a lot of write operations on this collection, using the Perl driver (insert, update, remove, save) and mongoimport. My problem is that somehow I have duplicate documents on the same _id. From what I've read, this shouldn't be possible. I've removed the duplicates, but others still appear. Do you have any ideas where they could come from, or what I should start looking at? (Also, I've tried to replicate this on a smaller,

Finding duplicates in a list, including permutations

眉间皱痕 submitted on 2019-12-01 19:03:32
I would like to determine whether a list contains any duplicate elements, while considering permutations as equivalent. All vectors are of equal length. What is the most efficient way (shortest running time) to accomplish this?

## SAMPLE DATA
a <- c(1, 2, 3)
b <- c(4, 5, 6)
a.same <- c(3, 1, 2)

## BOTH OF THESE LISTS SHOULD BE FLAGGED AS HAVING DUPLICATES
myList1 <- list(a, b, a)
myList2 <- list(a, b, a.same)

# CHECK FOR DUPLICATES
anyDuplicated(myList1) > 0 # TRUE
anyDuplicated(myList2) > 0 # FALSE, but would like TRUE

For now I am resorting to sorting each member of the list before checking
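The sort-first idea the asker falls back on is sound, and it is the usual trick: canonicalize each vector by sorting it, then look for repeats among the canonical forms. A minimal sketch in Python (function name illustrative):

```python
def has_permutation_duplicates(vectors):
    """True if any two vectors contain the same values in any order."""
    seen = set()
    for v in vectors:
        key = tuple(sorted(v))  # canonical, order-insensitive form
        if key in seen:
            return True
        seen.add(key)
    return False

print(has_permutation_duplicates([(1, 2, 3), (4, 5, 6), (3, 1, 2)]))  # True
print(has_permutation_duplicates([(1, 2, 3), (4, 5, 6)]))             # False
```

This runs in roughly O(n · k log k) for n vectors of length k, since each vector is sorted once and lookups are constant-time, which is hard to beat for this problem.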

Pandas find Duplicates in cross values

百般思念 submitted on 2019-12-01 18:46:26
I have a dataframe and want to eliminate duplicate rows that have the same values, but in different columns:

df = pd.DataFrame(columns=['a','b','c','d'], index=['1','2','3'])
df.loc['1'] = pd.Series({'a':'x','b':'y','c':'e','d':'f'})
df.loc['2'] = pd.Series({'a':'e','b':'f','c':'x','d':'y'})
df.loc['3'] = pd.Series({'a':'w','b':'v','c':'s','d':'t'})

df
Out[8]:
   a  b  c  d
1  x  y  e  f
2  e  f  x  y
3  w  v  s  t

Rows [1] and [2] have the values {x,y,e,f}, but they are arranged in a cross, i.e. if you would
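One way to treat such rows as duplicates is to build an order-insensitive key per row (a sorted tuple of its values) and drop rows whose key was already seen. A sketch on the question's own data; a frozenset key would also work, but it collapses repeated values inside a row, so the sorted tuple is the safer choice:

```python
import pandas as pd

df = pd.DataFrame(
    [["x", "y", "e", "f"],
     ["e", "f", "x", "y"],
     ["w", "v", "s", "t"]],
    columns=list("abcd"), index=["1", "2", "3"])

# Order-insensitive key per row; duplicated() flags repeat keys.
key = df.apply(lambda row: tuple(sorted(row)), axis=1)
deduped = df[~key.duplicated()]
print(deduped)  # rows '1' and '3' remain; row '2' rearranges row '1'
```

`duplicated()` keeps the first occurrence by default; pass `keep="last"` or `keep=False` to change which of the matching rows survives.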

Listview duplicates item every 6 times

偶尔善良 submitted on 2019-12-01 18:35:09
Hope everyone's good; I know this issue has been reviewed a couple of times before, but after a long search I still haven't found a solution. My custom ListView duplicates items every 6 items. Already checked and tried:
1. layout_width and layout_height don't contain wrap_content
2. holder = new ListViewItem() comes before any initialization of contents
3. There is a "convertView != null" check
4. holder.linearLayout.getChild() can't be used in my case because the layout isn't linear
5. clear()
If anyone can help, here is the getView() of my CustomListViewAdapter.java:

public View getView(final int position,

Pivot duplicates rows into new columns Pandas

放肆的年华 submitted on 2019-12-01 18:29:27
I have a data frame like this, and I'm trying to reshape it using pivot from Pandas so that I can keep some values from the original rows while turning the duplicate rows into columns and renaming them. Sometimes I have rows with 5 duplicates. I have been trying, but I don't get it.

import pandas as pd
df = pd.read_csv("C:dummy")
df = df.pivot(index=["ID"], columns=["Zone","PTC"], values=["Zone","PTC"])
# Rename columns and reset the index.
df.columns = [["PTC{}","Zone{}"],.format(c) for c in df.columns]
df.reset_index(inplace=True)
# Drop duplicates
df.drop(["PTC","Zone"], axis=1,
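The usual pattern for this reshape is to number the duplicates within each ID with `groupby(...).cumcount()`, then pivot that counter into the column labels. A sketch on hypothetical data (the column names ID/Zone/PTC come from the question; the values are invented):

```python
import pandas as pd

# Hypothetical data: repeated IDs whose Zone/PTC pairs should widen into columns.
df = pd.DataFrame({"ID":   [1, 1, 2],
                   "Zone": ["A", "B", "C"],
                   "PTC":  [10, 20, 30]})

# Number the duplicates per ID (1, 2, ...), then pivot on that counter.
df["n"] = df.groupby("ID").cumcount() + 1
wide = df.pivot(index="ID", columns="n", values=["Zone", "PTC"])

# Flatten the (value, counter) MultiIndex into Zone1, Zone2, PTC1, PTC2.
wide.columns = [f"{name}{n}" for name, n in wide.columns]
wide = wide.reset_index()
print(wide)
```

IDs with fewer duplicates than the maximum simply get NaN in the extra columns, so the same code handles the "sometimes 5 duplicates" case without changes.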

matlab: remove duplicate values

旧巷老猫 submitted on 2019-12-01 18:26:54
I'm fairly new to programming in general and to MATLAB, and I'm having some problems with removing values from a matrix. I have a matrix tmp2 with values:

tmp2 = [...
    0.6000 20.4000
    0.7000 20.4000
    0.8000 20.4000
    0.9000 20.4000
    1.0000 20.4000
    1.0000 19.1000
    1.1000 19.1000
    1.2000 19.1000
    1.3000 19.1000
    1.4000 19.1000
    ...];

How do I remove the rows where the left column is 1.0 but the right values differ? I want to keep the row with 19.1. I searched for solutions but only found some that delete both rows using the histc function, and that's not what I need. Thanks. I saw the
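If the intended rule is "for each repeated left-column value, keep only the last row" (which keeps the 1.0/19.1 row and drops 1.0/20.4), the logic can be sketched in Python; the assumption about which duplicate survives is mine, since the question only gives one example:

```python
# A slice of the question's matrix as a list of [left, right] rows.
tmp2 = [[0.9, 20.4], [1.0, 20.4], [1.0, 19.1], [1.1, 19.1]]

# Later rows overwrite earlier ones with the same left value,
# so the 1.0/20.4 row is replaced by 1.0/19.1.
dedup = {left: right for left, right in tmp2}
result = [[left, right] for left, right in dedup.items()]
print(result)  # [[0.9, 20.4], [1.0, 19.1], [1.1, 19.1]]
```

In MATLAB itself, `unique(tmp2(:,1), 'last')` gives the index of the last row per left value, which expresses the same keep-the-last rule.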

Select all duplicate rows based on one or two columns?

孤街醉人 submitted on 2019-12-01 18:13:00
I have a table named contacts with fields:

+-----+------------+-----------+
| id  | first_name | last_name |
+-----+------------+-----------+

I want to display all duplicates based on first_name and (/or) last_name, e.g.:

+----+------------+-----------+
| id | first_name | last_name |
+----+------------+-----------+
|  1 | mukta      | chourishi |
|  2 | mukta      | chourishi |
|  3 | mukta      | john      |
|  4 | carl       | thomas    |
+----+------------+-----------+

If searched on just first_name it should return: +---
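A common way to phrase this in SQL is a GROUP BY / HAVING subquery that finds the repeated values, then a lookup that returns every matching row. A runnable sketch using Python's built-in sqlite3 with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, first_name TEXT, last_name TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
                 [(1, "mukta", "chourishi"), (2, "mukta", "chourishi"),
                  (3, "mukta", "john"), (4, "carl", "thomas")])

# All rows whose first_name occurs more than once.
rows = conn.execute("""
    SELECT id, first_name, last_name
    FROM contacts
    WHERE first_name IN (SELECT first_name
                         FROM contacts
                         GROUP BY first_name
                         HAVING COUNT(*) > 1)
    ORDER BY id
""").fetchall()
print(rows)  # ids 1, 2, 3 -- the three 'mukta' rows
```

For duplicates on both columns together, group on the pair instead: `GROUP BY first_name, last_name` in the subquery, with a matching two-column membership test in the outer query.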

Convert JSON object with duplicate keys to JSON array

时光总嘲笑我的痴心妄想 submitted on 2019-12-01 18:09:10
I have a JSON string that I get from a database which contains repeated keys. I want to remove the repeated keys by combining their values into an array. For example:

Input
{ "a":"b", "c":"d", "c":"e", "f":"g" }

Output
{ "a":"b", "c":["d","e"], "f":"g" }

The actual data is a large file that may be nested. I will not know ahead of time what or how many pairs there are. I need to use Java for this. org.json throws an exception because of the repeated keys; gson can parse the string, but each repeated key overwrites the last one. I need to keep all the data. If possible, I'd like to do this without
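The question requires Java, but the underlying idea (intercept the raw key/value pairs before they are folded into a map, and merge the duplicates yourself) is easiest to see in Python, where `json.loads` exposes exactly that hook. In Java the same approach means consuming the token stream with a streaming parser instead of binding straight to a map; the Python sketch:

```python
import json

def merge_duplicate_keys(pairs):
    """Collect values of repeated keys into a list; applied per JSON object."""
    merged = {}
    for key, value in pairs:
        if key in merged:
            # Second and later occurrences: promote the value to a list.
            # (Caveat: ambiguous if an original value is itself a list.)
            if isinstance(merged[key], list):
                merged[key].append(value)
            else:
                merged[key] = [merged[key], value]
        else:
            merged[key] = value
    return merged

raw = '{"a":"b", "c":"d", "c":"e", "f":"g"}'
parsed = json.loads(raw, object_pairs_hook=merge_duplicate_keys)
print(parsed)  # {'a': 'b', 'c': ['d', 'e'], 'f': 'g'}
```

Because the hook is invoked for every object in the document, nested objects with repeated keys are merged the same way, which matches the "large file that may be nested" requirement.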