Question
Let's say I have a list that looks like:
[1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
Now I want to group the indices of the same elements, so the result should look like:
[[0, 9], [1, 2], [3], [4], [5, 6], [7], [8]]
How do I do this in an efficient way? I'd like to avoid using loops, so any implementations using numpy/pandas functions are great.
Answer 1:
Using pandas GroupBy.apply, this is pretty straightforward: use your data to group on a Series of indices. A nice bonus here is that you get to keep the order of your indices.
data = [1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
pd.Series(range(len(data))).groupby(data, sort=False).apply(list).tolist()
# [[0, 9], [1, 2], [3], [4], [5, 6], [7], [8]]
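Since the question also mentions numpy, here is a minimal numpy-only sketch of the same grouping (an assumption on my part, not part of the original answer); note that the groups come out in ascending value order rather than first-appearance order:
import numpy as np

data = np.array([1, 2, 2, 5, 8, 3, 3, 9, 0, 1])
order = np.argsort(data, kind="stable")            # indices sorted by value
breaks = np.flatnonzero(np.diff(data[order])) + 1  # positions where the sorted value changes
groups = [g.tolist() for g in np.split(order, breaks)]
# [[8], [0, 9], [1, 2], [5, 6], [3], [4], [7]]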
Answer 2:
You can use a collections.defaultdict to group indices:
from collections import defaultdict
lst = [1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
d = defaultdict(list)
for i, x in enumerate(lst):
    d[x].append(i)
print(list(d.values()))
# [[0, 9], [1, 2], [3], [4], [5, 6], [7], [8]]
This also maintains the order in which indices were added, without sorting.
Answer 3:
This solution is a modification of hash counting, but instead of counting, it stores the indices at which each value is found.
arr = [1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
d = dict()
for i, v in enumerate(arr):
    d[v] = d.get(v, [])  # use an if-statement to avoid doing this too often
    d[v].append(i)
print(list(d.values()))
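The if-statement hinted at in the comment above would look roughly like this (a sketch of the same idea, not part of the original answer; dict.setdefault is another common shortcut for this pattern):
arr = [1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
d = dict()
for i, v in enumerate(arr):
    if v not in d:   # create the list only once per value
        d[v] = []
    d[v].append(i)
print(list(d.values()))
# [[0, 9], [1, 2], [3], [4], [5, 6], [7], [8]]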
Answer 4:
Not sure why you'd want to "avoid loops": there's no way of knowing that the functions you're calling aren't using loops internally, and they add the overhead of a function call on top.
Also, after grouping you lose the information about what each group represents, so putting the output in a dict seems to make more sense.
This does that:
from itertools import groupby
l = [1, 2, 2, 5, 8, 3, 3, 9, 0, 1]
result = {
    key: [item[0] for item in group]
    for key, group in groupby(sorted(enumerate(l), key=lambda x: x[1]), lambda x: x[1])
}
print(result)
Output:
{0: [8], 1: [0, 9], 2: [1, 2], 3: [5, 6], 5: [3], 8: [4], 9: [7]}
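If the list-of-lists shape from the question is still wanted, the dict values can be taken afterwards (a small usage note added here, not part of the original answer; the groups appear in ascending key order because the input was sorted before grouping):
print(list(result.values()))
# [[8], [0, 9], [1, 2], [5, 6], [3], [4], [7]]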
Source: https://stackoverflow.com/questions/56641420/efficient-way-to-group-indices-of-the-same-elements-in-a-list