duplicates

Finding duplicate entries in Collection

被刻印的时光 ゝ submitted on 2019-12-21 08:05:59

Question: Is there a tool or library to find duplicate entries in a Collection according to specific criteria that can be implemented? To make myself clear: I want to compare the entries to each other according to specific criteria, so I think a Predicate returning just true or false isn't enough, and I can't use equals. Answer 1: It depends on the semantics of the criterion: if your criterion is always the same for a given class, and is inherent to the underlying concept, you should just implement equals and
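The Java question maps onto a general pattern: treat the comparison criterion as a key function and bucket the collection by it, then keep only buckets with more than one member. A minimal Python sketch of that pattern (the helper name `find_duplicates` and the sample data are illustrative, not from the original answer):

```python
from collections import defaultdict

def find_duplicates(items, key):
    """Group items by an arbitrary criterion (a key function) and
    return only the groups containing more than one item."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return [group for group in groups.values() if len(group) > 1]

# Criterion here: two people count as duplicates if they share an age.
people = [("Alice", 30), ("Bob", 30), ("Carol", 25)]
dups = find_duplicates(people, key=lambda p: p[1])
```

Because the criterion is just a function argument, the same helper works for any "duplicate" definition that can be reduced to a derived key.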

Duplicated icon issue with Twitter Bootstrap and Font Awesome

北慕城南 submitted on 2019-12-21 07:56:22

Question: I am having an issue with a menu with icons using Bootstrap and Font Awesome, both in LESS format and being compiled at runtime with JavaScript. Both the black and blue icons are showing up at the same time! The code: <div class="well sidebar-nav"> <ul class="nav nav-list"> <li class="nav-header">Relatórios</li> <li><a href="#"><i class="icon-facebook-sign"></i> Acessos na s-Commerce</a></li> <li><a href="#"><i class="icon-shopping-cart"></i> Acessos para a loja</a></li> </ul> </div> Browser

rbind data frames, duplicated rownames issue

人走茶凉 submitted on 2019-12-21 05:31:13

Question: While duplicated row (and column) names are allowed in a matrix, they are not allowed in a data.frame. Trying to rbind() some data frames that share row names highlights this problem. Consider the two data frames below: foo = data.frame(a=1:3, b=5:7); rownames(foo) = c("w","x","y"); bar = data.frame(a=c(2,4), b=c(6,8)); rownames(bar) = c("x","z") # foo bar # a b a b # w 1 5 x 2 6 # x 2 6 z 4 8 # y 3 7 Now try to rbind() them (pay attention to the row names): rbind(foo, bar) # a b # w 1 5 # x
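R resolves the clash by rewriting the colliding row names so every name is unique. The suffixing scheme below is a simplified sketch of that idea in Python, not R's exact algorithm (the helper name `make_unique` and the suffix format are assumptions):

```python
def make_unique(names):
    """Suffix repeated names so that every name becomes unique,
    similar in spirit to how R de-duplicates row names on rbind()."""
    counts = {}
    out = []
    for name in names:
        if name in counts:
            counts[name] += 1
            out.append(f"{name}{counts[name]}")
        else:
            counts[name] = 0
            out.append(name)
    return out

unique_names = make_unique(["w", "x", "y", "x", "z"])
```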

Ordering PHP arrays based on duplicate values

痴心易碎 submitted on 2019-12-21 05:20:08

Question: I have an array which contains duplicate values. I want to sort the array so that the values with the most duplicates appear first. Here's an example of my array: array(1, 2, 3, 2, 1, 2, 2); I want to sort this array so that it orders itself by the number of duplicates into the following: array(2, 1, 3); '2' has the most duplicates so it is sorted first, followed by values with fewer duplicates. Does anyone know how I can accomplish this? Answer 1: $acv=array_count_values($array); // 1
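The PHP answer starts from array_count_values(), i.e. count the occurrences and then order the distinct values by count. The same count-then-sort idea in Python, using the standard library's Counter (the helper name is illustrative):

```python
from collections import Counter

def order_by_frequency(values):
    """Return the distinct values ordered by occurrence count,
    most frequent first (ties keep first-seen order)."""
    return [value for value, _ in Counter(values).most_common()]

ordered = order_by_frequency([1, 2, 3, 2, 1, 2, 2])  # expect [2, 1, 3]
```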

Django: Filtering on the related object, removing duplicates from the result

南笙酒味 submitted on 2019-12-21 04:10:51

Question: Given the following models: class Blog(models.Model): name = models.CharField() class Entry(models.Model): blog = models.ForeignKey(Blog) content = models.CharField() I am looking to pass the following to a template: blogs = Blog.objects.filter(entry__content__contains='foo') result = [(blog, blog.entry_set.filter(content__contains='foo')) for blog in blogs] render_to_response('my.tmpl', {'result': result}) However, Blog.objects.filter(...) returns the same Blog object multiple times if
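In Django the usual remedy is to append .distinct() to the queryset, since a filter across a reverse foreign key yields one row per matching Entry. The effect of .distinct() here can be sketched in plain Python as order-preserving de-duplication (the helper and sample data are illustrative):

```python
def unique_in_order(items):
    """Drop repeated items while keeping the first occurrence of each,
    which is the effect .distinct() has on the duplicated Blog rows."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# One entry match per Blog row, so "blog-a" appears three times.
blogs = ["blog-a", "blog-a", "blog-b", "blog-a"]
deduped = unique_in_order(blogs)
```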

Eliminate duplicate rows in a PostgreSQL SELECT statement

寵の児 submitted on 2019-12-21 03:21:14

Question: This is my query: SELECT autor.entwickler, anwendung.name FROM autor LEFT JOIN anwendung ON anwendung.name = autor.anwendung; entwickler | name ------------+------------- Benutzer 1 | Anwendung 1 Benutzer 2 | Anwendung 1 Benutzer 2 | Anwendung 2 Benutzer 1 | Anwendung 3 Benutzer 1 | Anwendung 4 Benutzer 2 | Anwendung 4 (6 rows) I want to keep one row for each distinct value in the name field and discard the others, like this: entwickler | name ------------+------------- Benutzer 1 | Anwendung
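PostgreSQL has a purpose-built construct for "one row per distinct value of a column": SELECT DISTINCT ON (name). Its keep-the-first-row-per-key behaviour can be sketched in Python like this (the helper name is an assumption; the sample rows come from the question's result set):

```python
def distinct_on(rows, key):
    """Keep the first row seen for each distinct key, mimicking
    PostgreSQL's SELECT DISTINCT ON (key) behaviour."""
    seen = set()
    kept = []
    for row in rows:
        k = key(row)
        if k not in seen:
            seen.add(k)
            kept.append(row)
    return kept

rows = [("Benutzer 1", "Anwendung 1"),
        ("Benutzer 2", "Anwendung 1"),
        ("Benutzer 2", "Anwendung 2")]
per_name = distinct_on(rows, key=lambda r: r[1])
```

In real PostgreSQL you would also add ORDER BY to control which row survives per key; without it the surviving row is unpredictable.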

Best algorithm for delete duplicates in array of strings

十年热恋 submitted on 2019-12-21 01:16:10

Question: Today at school the teacher asked us to implement a duplicate-deletion algorithm. It's not that difficult, and everyone came up with the following solution (pseudocode): for i from 1 to n - 1 for j from i + 1 to n if v[i] == v[j] then remove(v, v[j]) // remove(from, what) next j next i The computational complexity of this algorithm is n(n-1)/2. (We're in high school and haven't talked about big-O, but it seems to be O(n^2).) This solution is ugly and, of course, slow, so I tried to
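A standard improvement over the pairwise O(n^2) loop is to sort first and then compare only adjacent elements, which brings the cost down to O(n log n). A short Python sketch of that approach (the function name and sample input are illustrative):

```python
def dedup_sorted(strings):
    """Sort, then scan adjacent pairs: O(n log n) overall,
    versus the O(n^2) pairwise comparison in the pseudocode."""
    out = []
    for s in sorted(strings):
        if not out or out[-1] != s:
            out.append(s)
    return out

result = dedup_sorted(["b", "a", "c", "a", "b"])
```

Note that sorting discards the original order; if order matters, a hash set scan gives O(n) expected time while keeping first occurrences.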

Rounding milliseconds of POSIXct in data.table v1.9.2 (ok in 1.8.10)

一笑奈何 submitted on 2019-12-20 21:51:12

Question: I get a weird result with data.table v1.9.2: DT timestamp 1: 2013-01-01 17:51:00.707 2: 2013-01-01 17:51:59.996 3: 2013-01-01 17:52:00.059 4: 2013-01-01 17:54:23.901 5: 2013-01-01 17:54:23.914 str(DT) Classes ‘data.table’ and 'data.frame': 5 obs. of 1 variable: $ timestamp: POSIXct, format: "2013-01-01 17:51:00.707" "2013-01-01 17:51:59.996" "2013-01-01 17:52:00.059" "2013-01-01 17:54:23.901" ... - attr(*, "sorted")= chr "timestamp" - attr(*, ".internal.selfref")=<externalptr> When I
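The root of such millisecond glitches is that POSIXct stores time as a binary double, so a fractional second like .996 has no exact representation; whether the formatter truncates or rounds the stored value then changes the last printed digit. A small Python illustration of the same representation effect (the value 59.996 is arbitrary):

```python
from decimal import Decimal

t = 59.996  # "seconds.milliseconds" stored as a binary double
exact = Decimal(t)                 # the exact stored value, slightly off .996
truncated = int(t * 1000) / 1000   # truncation may drop a millisecond
rounded = round(t * 1000) / 1000   # rounding recovers the intended value
```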

generic code duplication detection tool

久未见 submitted on 2019-12-20 17:16:09

Question: I'm looking for a code duplication tool that is language-agnostic. It's easy to find language-specific code duplication tools (for Java, C, PHP, ...), but I'd like to run code duplication analysis on templates in a custom syntax. I don't care about advanced parsing of the syntax; straight line-based raw string comparison is fine. Whitespace-insensitive matching would be a plus, but is not required. (It's not that hard to normalize/eliminate whitespace myself.) Does anybody know a

How to find duplicate records in PostgreSQL

不羁的心 submitted on 2019-12-20 07:57:21

Question: I have a PostgreSQL database table called "user_links" which currently allows duplicates across the following fields: year, user_id, sid, cid. The unique constraint is currently on the first field, called "id", but I am now looking to add a constraint to make sure year, user_id, sid and cid are jointly unique. I cannot apply the constraint because duplicate values already exist which violate it. Is there a way to find all duplicates? Answer 1: The basic idea will be using a nested query
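The standard SQL remedy is to GROUP BY the four columns and keep groups with HAVING count(*) > 1. The same grouping can be sketched in Python (column names are taken from the question; the sample rows are invented):

```python
from collections import Counter

def duplicate_keys(rows):
    """Emulate GROUP BY year, user_id, sid, cid HAVING count(*) > 1:
    return each duplicated key combination with its occurrence count."""
    counts = Counter(
        (r["year"], r["user_id"], r["sid"], r["cid"]) for r in rows
    )
    return {key: n for key, n in counts.items() if n > 1}

rows = [
    {"year": 2019, "user_id": 1, "sid": 5, "cid": 9},
    {"year": 2019, "user_id": 1, "sid": 5, "cid": 9},
    {"year": 2019, "user_id": 2, "sid": 5, "cid": 9},
]
dupes = duplicate_keys(rows)
```

Once the offending key combinations are known, the extra rows can be deleted and the unique constraint applied.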