duplicates

Android: How to save contacts to sdcard as vCard. Without duplicates?

被刻印的时光 ゝ submitted on 2019-12-04 04:18:38
Question: I am trying to save all of the contacts on a phone to the SD card as a .vcf file (vCard). It works, but I have a problem: every contact that has more than one phone number (e.g. a mobile and a work number) is saved twice. Both numbers appear in each duplicate entry, so the data is correct, just duplicated. Can someone please tell me how to fix this problem? My code is:

    File delete = new File(Environment.getExternalStorageDirectory() + "/Contacts.vcf");
    if (delete.exists()) {
        delete.delete();
    }
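This duplication usually means the export loop iterates the Phone data table (which returns one row per phone number) instead of the Contacts table (one row per contact), so the Android-side fix is to query `ContactsContract.Contacts` and build one card per contact. Independent of the Android query, the dedup idea itself can be sketched in Python as a hypothetical post-processing step that drops exact duplicate `BEGIN:VCARD`..`END:VCARD` blocks from the generated file:

```python
def dedupe_vcards(vcf_text):
    """Split a vCard file into BEGIN:VCARD..END:VCARD blocks and
    drop exact duplicate blocks, preserving order."""
    blocks, current = [], []
    for line in vcf_text.splitlines():
        current.append(line)
        if line.strip() == "END:VCARD":
            blocks.append("\n".join(current))
            current = []
    seen = set()
    # Keep only the first occurrence of each identical card.
    unique = [b for b in blocks if not (b in seen or seen.add(b))]
    return "\n".join(unique) + ("\n" if unique else "")

# Illustrative data: Alice (two numbers) exported twice, Bob once.
sample = (
    "BEGIN:VCARD\nFN:Alice\nTEL:111\nTEL:222\nEND:VCARD\n"
    "BEGIN:VCARD\nFN:Alice\nTEL:111\nTEL:222\nEND:VCARD\n"
    "BEGIN:VCARD\nFN:Bob\nTEL:333\nEND:VCARD\n"
)
print(dedupe_vcards(sample).count("BEGIN:VCARD"))  # → 2
```

This only catches byte-identical duplicate cards; fixing the query so each contact is visited once is the cleaner solution.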

How can I find indices of each row of a matrix which has a duplicate in matlab?

拈花ヽ惹草 submitted on 2019-12-04 04:06:57
I want to find the indices of all the rows of a matrix which have duplicates. For example:

    A = [1 2 3 4
         1 2 3 4
         2 3 4 5
         1 2 3 4
         6 5 4 3]

The vector to be returned would be [1, 2, 4]. A lot of similar questions suggest using the unique function, which I've tried, but the closest I can get to what I want is:

    [C, ia, ic] = unique(A, 'rows')   % ia = [1 3 5]
    m = 5;
    setdiff(1:m, ia)                  % = [2 4]

But using unique I can only extract the 2nd, 3rd, 4th... instance of a row, and I also need to obtain the first. Is there any way I can do this? NB: It must be a method which doesn't involve looping through the rows, as I'm
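The trick is to count how often each unique row occurs and flag every row whose count exceeds one, which includes the first occurrence. A NumPy sketch of the same loop-free technique (variable names are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [1, 2, 3, 4],
              [2, 3, 4, 5],
              [1, 2, 3, 4],
              [6, 5, 4, 3]])

# np.unique with return_inverse maps each row to its unique-row id;
# rows whose id occurs more than once are duplicates.
_, inverse, counts = np.unique(A, axis=0, return_inverse=True,
                               return_counts=True)
dup_idx = np.flatnonzero(counts[inverse] > 1)
print(dup_idx)  # 0-based [0 1 3], i.e. MATLAB's 1-based [1 2 4]
```

In MATLAB itself the analogous idea should be something like `[~,~,ic] = unique(A,'rows'); idx = find(ismember(ic, find(accumarray(ic,1) > 1)))`, where `accumarray` plays the role of the per-row count.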

matlab: remove duplicate values

坚强是说给别人听的谎言 submitted on 2019-12-04 03:58:15
Question: I'm fairly new to programming in general and to MATLAB, and I'm having some problems with removing values from a matrix. I have a matrix tmp2 with values:

    tmp2 = [...
        ...
        0.6000   20.4000
        0.7000   20.4000
        0.8000   20.4000
        0.9000   20.4000
        1.0000   20.4000
        1.0000   19.1000
        1.1000   19.1000
        1.2000   19.1000
        1.3000   19.1000
        1.4000   19.1000
        ...
    ...];

How do I remove the rows where the left column is 1.0 but the right column is not 19.1? I want to keep the row with 19.1. I searched for solutions but
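The condition can be expressed as a single boolean mask over the rows, with no loop. A NumPy sketch using a trimmed version of the question's matrix:

```python
import numpy as np

tmp2 = np.array([[0.9, 20.4],
                 [1.0, 20.4],
                 [1.0, 19.1],
                 [1.1, 19.1]])

# Keep every row EXCEPT those whose first column is 1.0 while the
# second column is not the value we want to keep (19.1 here).
keep = ~((tmp2[:, 0] == 1.0) & (tmp2[:, 1] != 19.1))
filtered = tmp2[keep]
print(filtered)  # the (1.0, 20.4) row is gone, (1.0, 19.1) survives
```

The direct MATLAB equivalent should be the in-place deletion `tmp2(tmp2(:,1) == 1.0 & tmp2(:,2) ~= 19.1, :) = [];` (exact-equality tests on floats work here only because the values were stored, not computed).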

crunch/resource packaging with aapt in ant build uses cache from other projects

浪子不回头ぞ submitted on 2019-12-04 03:43:40
问题 I have two android apps using a common library. Each project defines its own background images for the splash screen and a few others. These images have the same names in both apps. When I build/run from eclipse, each app uses the correct background images. However, when I run my ant build file, the contents are mixed up when packaging resources and the same images are used for both applications. I am sure there is a cache somewhere that I need to clear but I can't find it (running on MacOSX

Convert JSON object with duplicate keys to JSON array

邮差的信 submitted on 2019-12-04 03:40:53
Question: I have a JSON string that I get from a database which contains repeated keys. I want to remove the repeated keys by combining their values into an array. For example:

    Input:  { "a":"b", "c":"d", "c":"e", "f":"g" }
    Output: { "a":"b", "c":["d","e"], "f":"g" }

The actual data is a large file that may be nested, and I will not know ahead of time what pairs there are or how many. I need to use Java for this. org.json throws an exception because of the repeated keys; gson can parse the string, but each
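The question requires Java, where a streaming parser (e.g. Jackson's token API) can apply the same merge while walking the input. The technique itself is easiest to show with Python's json module, whose `object_pairs_hook` receives every key/value pair of an object - including the repeats - before the dict is built, and is applied recursively to nested objects:

```python
import json

def merge_duplicate_keys(pairs):
    """object_pairs_hook that folds repeated keys into a list."""
    out = {}
    for key, value in pairs:
        if key in out:
            if isinstance(out[key], list):
                out[key].append(value)
            else:
                out[key] = [out[key], value]
        else:
            out[key] = value
    return out

raw = '{ "a":"b", "c":"d", "c":"e", "f":"g" }'
result = json.loads(raw, object_pairs_hook=merge_duplicate_keys)
print(result)  # → {'a': 'b', 'c': ['d', 'e'], 'f': 'g'}
```

Because the hook runs for every object in the tree, nesting needs no extra handling; the same merge applies at each level.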

Pivot duplicates rows into new columns Pandas

▼魔方 西西 submitted on 2019-12-04 03:40:19
Question: I have a data frame like this, and I'm trying to reshape it using pivot from pandas so that I can keep some values from the original rows while turning the duplicate rows into new, renamed columns. Sometimes I have rows with 5 duplicates. I have been trying, but I don't get it:

    import pandas as pd
    df = pd.read_csv("C:dummy")
    df = df.pivot(index=["ID"], columns=["Zone","PTC"], values=["Zone","PTC"])
    # Rename columns and reset the index.
    df.columns = [["PTC{}","Zone{}"],.format
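A common pattern for this reshape is to number the repeats within each ID using `groupby(...).cumcount()`, then pivot on that counter so the k-th duplicate becomes its own pair of columns. A hedged sketch with made-up data (the real column names come from the unseen CSV):

```python
import pandas as pd

df = pd.DataFrame({
    "ID":   [1, 1, 2],
    "Zone": ["A", "B", "C"],
    "PTC":  [10, 20, 30],
})

# Number the duplicate rows per ID: 1, 2, 3, ...
df["n"] = df.groupby("ID").cumcount() + 1

# Pivot so duplicate k of each ID lands in columns (Zone, k), (PTC, k),
# then flatten the MultiIndex into names like Zone1, PTC2.
wide = df.pivot(index="ID", columns="n", values=["Zone", "PTC"])
wide.columns = [f"{name}{k}" for name, k in wide.columns]
wide = wide.reset_index()
print(wide)
```

IDs with fewer repeats than the maximum simply get NaN in the higher-numbered columns, which matches the "sometimes 5 duplicates" situation.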

Why doesn't this rule prevent duplicate key violations?

自古美人都是妖i submitted on 2019-12-04 03:36:51
(PostgreSQL) I was trying to COPY csv data into a table, but I was getting duplicate key violation errors, and there's no way to tell COPY to ignore those. Following internet wisdom, I tried adding this rule to circumvent the problem:

    CREATE OR REPLACE RULE ignore_duplicate_inserts AS
        ON INSERT TO mytable
        WHERE (EXISTS (SELECT mytable.id FROM mytable WHERE mytable.id = new.id))
        DO NOTHING;

but I still get those errors - any ideas why?

Rules by default add things to the current action: roughly speaking, a rule causes additional commands to be executed when a given command on a given table
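There is also a more direct reason the rule never fires: per the PostgreSQL documentation for COPY, COPY FROM invokes triggers and check constraints but does not invoke rules. Common workarounds are to COPY into a staging table and then `INSERT ... ON CONFLICT DO NOTHING` into the real one, or to de-duplicate the CSV on its key column before loading. A hypothetical Python sketch of the pre-filter approach (column name `id` is an assumption):

```python
import csv
import io

def drop_duplicate_ids(csv_text, key="id"):
    """Keep only the first row for each value of the key column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    seen, rows = set(), []
    for row in reader:
        if row[key] not in seen:
            seen.add(row[key])
            rows.append(row)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

data = "id,val\n1,a\n2,b\n1,c\n"
print(drop_duplicate_ids(data))  # keeps rows 1,a and 2,b; drops 1,c
```

Note this only removes duplicates within the file itself; rows that collide with keys already in the table still need the staging-table route.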

How to realize when a browser tab has been duplicated

我们两清 submitted on 2019-12-04 03:32:37
I'm having problems with a duplicated tab in Chrome (session stuff), and I'd like to prevent tabs from being duplicated (or, failing that, close the duplicate one). I'm opening the tab as if it were a popup, with no address bar, no status bar, nothing - just the window. There's no way to duplicate a tab (opened as a popup) in IE or Firefox (at least I haven't found one), but in Chrome it is still possible. I also know I'm not able to programmatically check if there's an already open duplicated tab. Any ideas on how to approach this? Thanks!

JosiahDaniels - Goal. Just to clarify: the goal is to

How to remove duplicates from a file and write to the same file?

六月ゝ 毕业季﹏ submitted on 2019-12-04 03:20:19
I know my title is not very self-explanatory, so let me try to explain it here. I have a file named test.txt which has some duplicate lines. What I want to do is remove those duplicate lines and at the same time update test.txt with the new content.

    test.txt:
    AAAA
    BBBB
    AAAA
    CCCC

I know I can use sort -u test.txt to remove the duplicates, but how do I redirect its output to the same file to update it with the new content? The command below doesn't work:

    sort -u test.txt > test.txt

So why doesn't the above command work, and what's the correct way? Also, is there any other way, like sort_and
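The redirection fails because the shell truncates test.txt to zero length before sort ever runs, so sort reads an empty file. `sort -u -o test.txt test.txt` works instead: POSIX sort explicitly allows the -o output file to be one of the inputs, because sort reads all input before writing. The same read-everything-first pattern in Python (which also preserves the original line order, unlike sort -u):

```python
import os
import tempfile

def dedupe_file_in_place(path):
    with open(path) as f:
        lines = f.readlines()        # read fully BEFORE writing
    seen = set()
    unique = [ln for ln in lines if not (ln in seen or seen.add(ln))]
    with open(path, "w") as f:       # only now truncate and rewrite
        f.writelines(unique)

# Demo with the file contents from the question.
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("AAAA\nBBBB\nAAAA\nCCCC\n")
tmp.close()
dedupe_file_in_place(tmp.name)
print(open(tmp.name).read())  # → AAAA, BBBB, CCCC on separate lines
os.unlink(tmp.name)
```

With GNU coreutils, `sponge` from moreutils (`sort -u test.txt | sponge test.txt`) is another common way to write a pipeline's output back to its input file.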

How to conditionally remove duplicates from a pandas dataframe

99封情书 submitted on 2019-12-04 03:18:16
Consider the following dataframe:

    import numpy as np
    import pandas as pd
    df = pd.DataFrame({'A': [1, 2, 3, 3, 4, 4, 5, 6, 7],
                       'B': ['a', 'b', 'c', 'c', 'd', 'd', 'e', 'f', 'g'],
                       'Col_1': [np.NaN, 'A', 'A', np.NaN, 'B', np.NaN, 'B', np.NaN, np.NaN],
                       'Col_2': [2, 2, 3, 3, 3, 3, 4, 4, 5]})
    df
    Out[92]:
       A  B Col_1  Col_2
    0  1  a   NaN      2
    1  2  b     A      2
    2  3  c     A      3
    3  3  c   NaN      3
    4  4  d     B      3
    5  4  d   NaN      3
    6  5  e     B      4
    7  6  f   NaN      4
    8  7  g   NaN      5

I want to remove all rows which are duplicates with regard to columns 'A' and 'B'. I want to remove the entry which has a NaN in Col_1 (I know that for all duplicates there will be a NaN and a not-NaN entry). The end results
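One common pattern: sort so that within each duplicate group the non-NaN row comes first, then `drop_duplicates` keeps exactly that row, and `sort_index` restores the original order. A sketch with the question's data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 3, 4, 4, 5, 6, 7],
                   'B': ['a', 'b', 'c', 'c', 'd', 'd', 'e', 'f', 'g'],
                   'Col_1': [np.nan, 'A', 'A', np.nan, 'B', np.nan,
                             'B', np.nan, np.nan],
                   'Col_2': [2, 2, 3, 3, 3, 3, 4, 4, 5]})

# NaNs sort last, so the first row of each (A, B) group is the one
# with a real Col_1 value whenever such a row exists.
result = (df.sort_values('Col_1', na_position='last')
            .drop_duplicates(subset=['A', 'B'], keep='first')
            .sort_index())
print(result)  # rows 3 and 5 (the NaN duplicates) are gone
```

`df.groupby(['A', 'B'], as_index=False).first()` is an alternative, since GroupBy.first() returns the first non-null value per column, but it discards the original index.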