Pythonic way of removing reversed duplicates in list

Backend · Unresolved · 9 answers · 2,416 views
Asked by 难免孤独 on 2020-12-06 09:49

I have a list of pairs:

[0, 1], [0, 4], [1, 0], [1, 4], [4, 0], [4, 1]

and I want to remove any duplicates where

[a, b] == [b, a]

so that only one of each reversed pair remains.
9 Answers
  •  广开言路
    2020-12-06 10:17

    TL;DR

    set(map(frozenset, lst))
    

    Explanation

    If the pairs are logically unordered, they're more naturally expressed as sets. It would be better to have them as sets before you even get to this point, but you can convert them like this:

    lst = [[0, 1], [0, 4], [1, 0], [1, 4], [4, 0], [4, 1]]
    lst_as_sets = map(frozenset, lst)
    

    And then the natural way of eliminating duplicates in an iterable is to convert it to a set:

    deduped = set(lst_as_sets)
    

    (This is the main reason I chose frozenset in the first step. Mutable sets are not hashable, so they can't be added to a set.)
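    A quick demonstration of that point (a sketch; the exact TypeError message varies between Python versions):

```python
# frozenset is hashable, so it can be a set element, and the
# reversed pair collapses into a single element:
ok = {frozenset([0, 1]), frozenset([1, 0])}
assert len(ok) == 1

# A plain (mutable) set is not hashable, so it cannot be a set element:
try:
    {set([0, 1])}
except TypeError:
    print("mutable sets cannot go inside a set")
```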

    Or you can do it in a single line like in the TL;DR section.
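    Putting it together on the example list (note the result is a set of frozensets, so its iteration order is arbitrary):

```python
lst = [[0, 1], [0, 4], [1, 0], [1, 4], [4, 0], [4, 1]]

# One line: each pair becomes an order-insensitive frozenset,
# and the outer set() collapses the reversed duplicates.
deduped = set(map(frozenset, lst))

assert deduped == {frozenset({0, 1}), frozenset({0, 4}), frozenset({1, 4})}
```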

    I think this is much simpler, more intuitive, and more closely matches how you think about the data than fussing with sorting and tuples.
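    For comparison, the sorting-and-tuples approach alluded to above would look something like this: each pair is normalized by sorting, then converted to a tuple so it is hashable.

```python
lst = [[0, 1], [0, 4], [1, 0], [1, 4], [4, 0], [4, 1]]

# Normalize each pair by sorting, then use tuples so the pairs are hashable
# and can be deduplicated by a set.
deduped = {tuple(sorted(pair)) for pair in lst}

assert deduped == {(0, 1), (0, 4), (1, 4)}
```

    This works, but it encodes "order doesn't matter" indirectly through a sorting convention, whereas frozenset states it directly.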

    Converting back

    If for some reason you really need a list of lists as the final result, converting back is trivial:

    result_list = list(map(list, deduped))
    

    But it's probably more logical to leave it all as sets as long as possible. I can only think of one reason that you might need this, and that's compatibility with existing code/libraries.
