Question:
My problem is very similar to the one linked below, except that if there were a [2, 1] element as well as [1, 2], I would need that to be deleted too.
Removing duplicates from a list of lists
I've tried all sorts of things but just can't make it work. Any help would be much appreciated!
Thanks.
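For concreteness, the requirement might look like this (the sample data is my own illustration, not from the original question):

list_of_lists = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4], [2, 1]]
# [2, 1] counts as a duplicate of [1, 2], so only one of them should survive,
# e.g. the deduplicated result could be [[1, 2], [4], [5, 6, 2], [3]]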
Answer 1:
Maybe what you really want is a set of sets:
unique = set(map(set, list_of_lists))
Edit: well, that doesn't work. Alas, a set cannot contain sets, because sets are unhashable. frozenset is hashable, though:
unique = set(map(frozenset, list_of_lists))
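A quick illustration of that approach (the variable names and sample data are just for demonstration):

list_of_lists = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4], [2, 1]]
unique = set(map(frozenset, list_of_lists))
# [1, 2] and [2, 1] both become frozenset({1, 2}), so only one entry survives.
unique_lists = [sorted(fs) for fs in unique]  # convert back to lists if needed

Note that this also collapses repeated elements within a single sublist ([1, 1, 2] and [1, 2] map to the same frozenset), and neither the element order nor the outer order is preserved.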
Answer 2:
This works, but it doesn't preserve the ordering of the sublists:
import itertools

def bygroup(k):
    # Sort each sublist so [2, 1] and [1, 2] compare equal, then sort the outer
    # list so duplicates are adjacent for itertools.groupby.
    k = sorted(sorted(x) for x in k)
    return [k for k, _ in itertools.groupby(k)]
>>> k = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4], [2, 1]]
>>> bygroup(k)
[[1, 2], [2, 5, 6], [3], [4]]
In Python 2.7 or 3.2, you could use an OrderedDict if you need to preserve the order within sublists and also the general order of the list (except for duplicates), but it's much slower:
import collections

def bydict(k):
    # Key each sublist by its sorted tuple; later duplicates overwrite earlier ones,
    # so the last-seen form of each duplicate is kept.
    s = collections.OrderedDict()
    for i in k:
        s[tuple(sorted(i))] = i
    return s.values()
>>> bydict(k)
[[2, 1], [4], [5, 6, 2], [3]]
I tested with 100,000 iterations using timeit. The bydict function took about 4 times longer in Python 2.7.2 and about 3 times longer in Python 3.2.
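The benchmark code isn't shown in the original answer; a minimal sketch of how such a comparison could be run with timeit (assuming bygroup and bydict from above are already defined):

import timeit

k = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4], [2, 1]]
# timeit accepts a zero-argument callable; time 100,000 calls of each function.
print("bygroup:", timeit.timeit(lambda: bygroup(k), number=100000))
print("bydict: ", timeit.timeit(lambda: bydict(k), number=100000))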
Source: https://stackoverflow.com/questions/6724544/deleting-duplicates-from-a-list-of-lists-if-some-duplicates-do-not-have-the-same