Grouping indices of unique elements in numpy

伪装坚强ぢ 2021-01-17 17:15

I have many large (>100,000,000 elements) lists of integers that contain many duplicates. I want to get the indices at which each element occurs. Currently I am doing something li…

5 Answers
  •  一个人的身影
    2021-01-17 17:38

    This is very similar to what was asked here, so what follows is an adaptation of my answer there. The simplest way to vectorize this is to use sorting. The following code borrows a lot from the implementation of np.unique for the upcoming version 1.9, which includes unique item counting functionality, see here:

    >>> a = np.array([1, 2, 6, 4, 2, 3, 2])
    >>> sort_idx = np.argsort(a)
    >>> a_sorted = a[sort_idx]
    >>> unq_first = np.concatenate(([True], a_sorted[1:] != a_sorted[:-1]))
    >>> unq_items = a_sorted[unq_first]
    >>> unq_count = np.diff(np.nonzero(unq_first)[0])
    

    and now:

    >>> unq_items
    array([1, 2, 3, 4, 6])
    >>> unq_count
    array([1, 3, 1, 1], dtype=int64)
    

    To get the positional indices for each value, we simply do:

    >>> unq_idx = np.split(sort_idx, np.cumsum(unq_count))
    >>> unq_idx
    [array([0], dtype=int64), array([1, 4, 6], dtype=int64), array([5], dtype=int64),
     array([3], dtype=int64), array([2], dtype=int64)]
    

    And you can now construct your dictionary zipping unq_items and unq_idx.
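    Putting the steps together, a minimal sketch of building that dictionary (repeating the code above and zipping the results; the name `index_map` is just for illustration):

    ```python
    import numpy as np

    a = np.array([1, 2, 6, 4, 2, 3, 2])
    sort_idx = np.argsort(a)
    a_sorted = a[sort_idx]

    # True at the first occurrence of each distinct value in the sorted array.
    unq_first = np.concatenate(([True], a_sorted[1:] != a_sorted[:-1]))
    unq_items = a_sorted[unq_first]
    unq_count = np.diff(np.nonzero(unq_first)[0])

    # Split the argsort indices into one array of positions per unique value.
    unq_idx = np.split(sort_idx, np.cumsum(unq_count))

    # Map each unique value to the array of indices where it occurs in `a`.
    index_map = {item: idx for item, idx in zip(unq_items, unq_idx)}
    ```

    For example, `index_map[2]` is `array([1, 4, 6])`, the three positions of the value 2 in `a`.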

    Note that unq_count doesn't count the occurrences of the last unique item, because that is not needed to split the index array. If you wanted to have all the values you could do:

    >>> unq_count = np.diff(np.concatenate(np.nonzero(unq_first) + ([a.size],)))
    >>> unq_idx = np.split(sort_idx, np.cumsum(unq_count[:-1]))
    
