duplicates

Copy value N times in Excel

和自甴很熟 submitted on 2019-11-28 20:57:54
I have a simple list:

| A     | B |
|-------|---|
| item1 | 3 |
| item2 | 2 |
| item3 | 4 |
| item4 | 1 |

and need to output:

| A     |
|-------|
| item1 |
| item1 |
| item1 |
| item2 |
| item2 |
| item3 |
| item3 |
| item3 |
| item3 |
| item4 |

flodel: Here is one way of doing it without VBA:

1. Insert a column to the left of A, so your current A and B columns become B and C.
2. Put `1` in A1.
3. Put `=A1+C1` in A2 and copy down to A5.
4. Put an empty string in B5 by entering a single quote (`'`) in the cell.
5. Put `1` in E1 and `2` in E2, then copy down to get 1, 2, ..., 10.
6. Put `=VLOOKUP(E1,$A$1:$B$5,2)` in F1 and copy down.

It should look like this:

| A | B | C | D | E | F |
|---|---|---|---|---|---|
| 1 |
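For comparison, the same fan-out is a one-liner in pandas; a minimal sketch, assuming the two columns are named A and B as in the question:

```python
import pandas as pd

# The question's table: item names in A, repeat counts in B.
df = pd.DataFrame({"A": ["item1", "item2", "item3", "item4"],
                   "B": [3, 2, 4, 1]})

# Repeat each row's label as many times as column B says.
out = df.loc[df.index.repeat(df["B"]), "A"].reset_index(drop=True)
print(out.tolist())
```

`Index.repeat` duplicates each row index by its count, and `.loc` then pulls the label once per duplicate.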

Avoiding duplicated messages on JMS/ActiveMQ

泪湿孤枕 submitted on 2019-11-28 20:29:32
Is there a way to suppress duplicate messages on a queue defined on an ActiveMQ server? I tried to set the JMSMessageID manually (`message.setJMSMessageID("uniqueid")`), but the server ignores this modification and delivers the message with its built-in generated JMSMessageID. I found no reference in the specification on how to deduplicate messages. In HornetQ, to deal with this problem, you declare the HornetQ-specific property `org.hornetq.core.message.impl.HDR_DUPLICATE_DETECTION_ID` on the message, i.e.:

```java
Message jmsMessage = session.createMessage();
String myUniqueID = "This is my unique id"
```
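Since ActiveMQ regenerates the JMSMessageID itself, a common workaround is an idempotent consumer: carry your own ID in a custom message property and skip anything already seen. A minimal, language-neutral sketch of that pattern (the property name `myUniqueID` and the dict-shaped message are illustrative assumptions, not an ActiveMQ API):

```python
seen_ids = set()  # IDs of messages this consumer has already processed

def process_once(message):
    """Process a message only if its application-level ID is new.

    Returns True if processed, False if it was a duplicate.
    """
    msg_id = message["myUniqueID"]  # assumed custom property, not JMSMessageID
    if msg_id in seen_ids:
        return False  # duplicate: silently drop
    seen_ids.add(msg_id)
    # ... real message handling would go here ...
    return True

print(process_once({"myUniqueID": "order-42"}))  # first delivery
print(process_once({"myUniqueID": "order-42"}))  # redelivery, suppressed
```

In production the seen-ID store would need a bound (an LRU cache or a database table), since a plain set grows without limit.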

How to check for duplicate CSS rules?

拜拜、爱过 submitted on 2019-11-28 20:18:11
I messed up my CSS and somehow have a lot of duplicate rules; my CSS file, which was around 1800 lines, is now 3000+ lines. Is there any way/tool that would take my CSS file as input, check for all the duplicate rules, and possibly generate a CSS file with those redundancies removed?

Akshay Vijay Jain: Install Node.js (https://nodejs.org/en/download/). If Node.js is already installed, or after installing it, open a Node command prompt (on a Windows machine, type "node" in Start). Type the following command to install the CSS purge tool:

`npm install css-purge -g`

After the tool is installed, open the
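If you only want to see which selectors repeat before reaching for a tool, a rough sketch with a regex (not a real CSS parser; at-rules and nesting would need more care):

```python
import re
from collections import Counter

css = """
h1 { color: red; }
p  { margin: 0; }
h1 { color: red; }
"""

# Everything between a closing brace (or the start) and the next '{'
# is treated as a selector -- a heuristic, good enough for flat stylesheets.
selectors = [s.strip() for s in re.findall(r"([^{}]+)\{", css)]
duplicated = [sel for sel, n in Counter(selectors).items() if n > 1]
print(duplicated)  # selectors that appear more than once
```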

Finding an element in an array that isn't repeated a multiple of three times?

﹥>﹥吖頭↗ submitted on 2019-11-28 19:36:13
After reading this interesting question, I was reminded of a tricky interview question I once had that I never satisfactorily answered: you are given an array of n 32-bit unsigned integers in which every element except one is repeated a multiple of three times. In O(n) time, and using as little auxiliary space as possible, find the element of the array that does not appear a multiple of three times. As an example, given this array:

`1 1 2 2 2 3 3 3 3 3 3`

we would output 1, while given the array

`3 2 1 3 2 1 2 3 1 4 4 4 4`

we would output 4. This can easily be solved in O(n) time and O(n) space by
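The usual O(n)-time, O(1)-extra-space answer counts, for each of the 32 bit positions, how many elements have that bit set; each count mod 3 reconstructs the bits of the odd element out. A sketch of that idea:

```python
def odd_one_out(nums):
    """Find the element not repeated a multiple of three times, in O(n) time."""
    result = 0
    for bit in range(32):  # 32-bit unsigned integers
        ones = sum((x >> bit) & 1 for x in nums)  # how many have this bit set
        if ones % 3:  # only the odd element's bits survive mod 3
            result |= 1 << bit
    return result

print(odd_one_out([1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3]))        # -> 1
print(odd_one_out([3, 2, 1, 3, 2, 1, 2, 3, 1, 4, 4, 4, 4]))  # -> 4
```

Every bit belonging to elements repeated a multiple of three times contributes 0 mod 3, so whatever remains belongs to the answer, whether it occurs once, twice, four times, etc.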

Deleting duplicate lines in a file using Java

天大地大妈咪最大 submitted on 2019-11-28 19:15:38
As part of a project I'm working on, I'd like to clean up a file I generate by removing duplicate line entries. These duplicates often won't occur near each other, however. I came up with a method of doing so in Java (which basically made a copy of the file, then used a nested while-loop to compare each line of one file with the rest of the other). The problem is that my generated file is pretty big and text-heavy (about 225k lines of text, around 40 MB). I estimate my current process would take 63 hours! This is definitely not acceptable. I need an integrated solution for this, however.
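The nested-loop comparison is what makes this 63 hours: it is O(n²) in the line count. A single pass that remembers seen lines in a hash set is O(n); in Java a `LinkedHashSet` gives the same first-occurrence, order-preserving behavior. A minimal sketch of the idea:

```python
def dedupe_lines(lines):
    """Keep the first occurrence of each line, preserving order; O(n)."""
    seen = set()
    unique = []
    for line in lines:
        if line not in seen:   # average O(1) membership test
            seen.add(line)
            unique.append(line)
    return unique

print(dedupe_lines(["alpha", "beta", "alpha", "gamma", "beta"]))
```

For a 40 MB file the set of distinct lines fits comfortably in memory, so a single read-filter-write pass finishes in seconds rather than hours.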

How to prevent adding duplicate keys to a JavaScript array

非 Y 不嫁゛ submitted on 2019-11-28 18:45:54
Question: I found a lot of related questions with answers talking about `for...in` loops and using `hasOwnProperty`, but nothing I do works properly. All I want to do is check whether or not a key exists in an array and, if not, add it. I start with an empty array, then add keys as the page is scrubbed with jQuery. Initially, I hoped that something simple like the following would work (using generic names):

`if (!array[key]) array[key] = value;`

No go. I followed it up with:

`for (var in array) { if (!array`
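The underlying problem in JavaScript is that `!array[key]` also fires for falsy stored values such as `0` or `""`; the robust test is key membership (`key in obj`) on a plain object rather than an array. The same guard pattern, sketched in Python with a dict (the names are placeholders):

```python
store = {}

def add_if_missing(store, key, value):
    """Insert value only when the key is not already present."""
    if key not in store:  # membership test, not truthiness of store[key]
        store[key] = value

add_if_missing(store, "k1", 0)   # stores 0, even though 0 is falsy
add_if_missing(store, "k1", 99)  # ignored: "k1" already exists
print(store)  # -> {'k1': 0}
```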

Duplicate rows in pandas DF

做~自己de王妃 submitted on 2019-11-28 18:28:03
I have a DataFrame in pandas which looks like:

| Letters | Numbers |
|---------|---------|
| A | 1 |
| A | 3 |
| A | 2 |
| A | 1 |
| B | 1 |
| B | 2 |
| B | 3 |
| C | 2 |
| C | 2 |

I'm looking to count the number of identical rows and save the result in a third column. For example, the output I'm looking for:

| Letters | Numbers | Events |
|---------|---------|--------|
| A | 1 | 2 |
| A | 2 | 1 |
| A | 3 | 1 |
| B | 1 | 1 |
| B | 2 | 1 |
| B | 3 | 1 |
| C | 2 | 2 |

An example of what I'm looking to do is here. The best idea I've come up with is to use `value_counts()`, but I think this is just for one column. Another idea is to use `duplicated()`; in any case, I don't want to construct any for-loop. I'm pretty sure a Pythonic alternative to a for loop exists. You can groupby
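The answer is cut off at "You can groupby"; the likely continuation, sketched with the column names from the question:

```python
import pandas as pd

df = pd.DataFrame({"Letters": list("AAAABBBCC"),
                   "Numbers": [1, 3, 2, 1, 1, 2, 3, 2, 2]})

# Count identical (Letters, Numbers) rows; size() counts the members
# of each group, and reset_index turns the count into an Events column.
events = (df.groupby(["Letters", "Numbers"])
            .size()
            .reset_index(name="Events"))
print(events)
```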

SQL Server 2008: delete duplicate rows

谁都会走 submitted on 2019-11-28 18:02:40
I have duplicate rows in my table; how can I delete them based on a single column's value? E.g., given

| uniqueid | col2 | col3 |
|----------|------|------|
| 1 | john | simpson |
| 2 | sally | roberts |
| 1 | johnny | simpson |

delete any duplicate uniqueids to get:

| uniqueid | col2 | col3 |
|----------|------|------|
| 1 | John | Simpson |
| 2 | Sally | Roberts |

You can DELETE from a CTE:

```sql
WITH cte AS
  (SELECT *,
          ROW_NUMBER() OVER (PARTITION BY uniqueid ORDER BY col2) AS RowRank
   FROM Table)
DELETE FROM cte
WHERE RowRank > 1
```

The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is used to restart the numbering for each group; in this case, each value of uniqueid will start numbering at 1
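For the same data outside SQL Server, the "one row per id, picked by col2" step can be sketched in pandas; note the T-SQL deletes in place, while this builds the deduplicated frame (column names from the question):

```python
import pandas as pd

df = pd.DataFrame({"uniqueid": [1, 2, 1],
                   "col2": ["john", "sally", "johnny"],
                   "col3": ["simpson", "roberts", "simpson"]})

# Sorting by col2 mirrors ORDER BY col2; then keep the first row per uniqueid.
deduped = (df.sort_values("col2")
             .drop_duplicates(subset="uniqueid", keep="first")
             .sort_values("uniqueid")
             .reset_index(drop=True))
print(deduped)
```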

Tools for matching name/address data [closed]

霸气de小男生 submitted on 2019-11-28 17:19:20
Here's an interesting problem. I have an Oracle database with name and address information which needs to be kept current. We get data feeds from a number of different government sources and need to figure out matches, and whether to update the database with the data or to create a new record. There isn't any sort of unique identifier that can be used to tie records together, and the data quality isn't always that good: there will always be typos, people using different names (e.g., Joe vs. Joseph), etc. I'd be interested in hearing from anyone who's worked on this type of problem
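Dedicated record-linkage tools add blocking, phonetic keys (Soundex/Metaphone), and nickname tables (Joe → Joseph), but the core fuzzy-comparison step can be sketched with Python's standard library:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

incoming = "Joseph Smith"
candidates = ["Joe Smith", "Sally Roberts"]
for name in candidates:
    # Score each existing record against the incoming feed record.
    print(incoming, "vs", name, "->", round(similarity(incoming, name), 2))
```

In practice you would pick a threshold above which records are treated as the same person, and route borderline scores to manual review.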

Determining duplicate values in an array

微笑、不失礼 submitted on 2019-11-28 16:20:34
Suppose I have an array

`a = np.array([1, 2, 1, 3, 3, 3, 0])`

How can I (efficiently, Pythonically) find which elements of `a` are duplicates (i.e., non-unique values)? In this case the result would be `array([1, 3, 3])`, or possibly `array([1, 3])` if efficient. I've come up with a few methods that appear to work.

Masking:

```python
m = np.zeros_like(a, dtype=bool)
m[np.unique(a, return_index=True)[1]] = True
a[~m]
```

Set operations:

```python
a[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)]
```

This one is cute but probably illegal (as `a` isn't actually unique):

```python
np.setxor1d(a, np.unique(a),
```
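On NumPy 1.9 and later there is a direct route via `return_counts`, giving the `array([1, 3])` form in one pass:

```python
import numpy as np

a = np.array([1, 2, 1, 3, 3, 3, 0])

# Values occurring more than once, each reported a single time.
values, counts = np.unique(a, return_counts=True)
dupes = values[counts > 1]
print(dupes)  # -> [1 3]
```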