Numpy concatenate is slow: any alternative approach?

Asked 2020-12-16 00:48 by 眼角桃花 · 5 answers · 1045 views

I am running the following code:

import numpy

for i in range(1000):
    My_Array = numpy.concatenate((My_Array, New_Rows[i]), axis=0)

The above code is slow. Is there a faster alternative?

5 Answers
  • 2020-12-16 01:09

    Maybe create an empty array of the correct size up front and then populate it? If you have a list of arrays with the same dimensions, you could do:

    import numpy as np

    # l is your list of equally-shaped numpy arrays
    arr = np.zeros((len(l),) + l[0].shape)
    for i, v in enumerate(l):
        arr[i] = v
    

    This works much faster for me, as it requires only one memory allocation.

  • 2020-12-16 01:10

    I think @thebeancounter 's solution is the way to go. If you do not know the exact size of your numpy array ahead of time, you can also take an approach similar to how the vector class is implemented in C++.

    To be more specific: wrap the numpy ndarray in a new class that starts with a default capacity larger than your current needs. When the array is almost fully populated, copy it into a larger one.
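    A minimal sketch of that idea, assuming 2D data with a fixed number of columns (the class and method names are illustrative, not from any library):

```python
import numpy as np

class GrowableArray:
    """Row-appendable 2D array that doubles its capacity when full (vector-style)."""

    def __init__(self, ncols, capacity=16):
        self._data = np.empty((capacity, ncols))
        self._size = 0  # number of rows actually populated

    def append(self, row):
        if self._size == len(self._data):
            # Capacity exhausted: allocate a buffer twice as large and copy once.
            bigger = np.empty((2 * len(self._data), self._data.shape[1]))
            bigger[:self._size] = self._data
            self._data = bigger
        self._data[self._size] = row
        self._size += 1

    def view(self):
        # A view of only the populated rows; no copy is made.
        return self._data[:self._size]

g = GrowableArray(3)
for i in range(100):
    g.append([i, i, i])
print(g.view().shape)  # (100, 3)
```

    Because growth is geometric, each element is copied only O(1) times on average, instead of once per append as in the repeated-concatenate loop.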

  • 2020-12-16 01:21

    This is basically what happens in any algorithm based on arrays.

    Each time you change the size of the array, it needs to be resized and every element needs to be copied. This is happening here too. (Some implementations reserve empty slots, e.g. doubling the internal memory with each resize.)

    • If you have all your data at np.array creation time, just add it all at once (memory will be allocated only once then!)
    • If not, collect it in something like a linked list (allowing O(1) append operations), then read it into your np.array at once (again, only one memory allocation).

    This is not so much a numpy-specific topic as one about data structures.
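    The second bullet can be sketched with a plain Python list as the O(1)-append collection (the names here are illustrative):

```python
import numpy as np

# Hypothetical stand-in for rows that arrive one at a time.
new_rows = (np.arange(5) + i for i in range(1000))

collected = []             # list.append is amortized O(1)
for row in new_rows:
    collected.append(row)

result = np.vstack(collected)  # one allocation, one copy at the end
print(result.shape)  # (1000, 5)
```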

    Edit: as this quite vague answer got some upvotes, I feel the need to make clear that my linked-list approach is just one possible example. As indicated in the comments, Python's lists are more array-like (and definitely not linked lists). But the core fact is: list.append() in Python is fast (amortized O(1)), while that's not true for numpy arrays! There is also a small section about the internals in the docs:

    How are lists implemented?

    Python’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array’s length in a list head structure.

    This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.

    When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.


  • 2020-12-16 01:24

    It depends on what New_Rows[i] is, and what kind of array you want. If you start with lists (or 1d arrays) that you want to join end to end (to make one long 1d array), just concatenate them all at once. concatenate takes a list of any length, not just two items.

     np.concatenate(New_Rows, axis=0)
    

    or maybe use an intermediate list comprehension (for more flexibility):

     np.concatenate([row for row in New_Rows])
    

    or, closer to your example:

     np.concatenate([New_Rows[i] for i in range(1000)])
    

    But if New_Rows elements are all the same length, and you want a 2d array, one New_Rows value per row, np.array does a nice job:

     np.array(New_Rows)
     np.array([i for i in New_Rows])
     np.array([New_Rows[i] for i in range(1000)])
    

    np.array is designed primarily to build an array from a list of lists.

    np.concatenate can also build in 2d, but the inputs need to be 2d to start with. np.vstack and np.stack can take care of that. But all those stack functions use some sort of list comprehension followed by concatenate.

    In general it is better/faster to iterate or append with lists, and apply np.array (or concatenate) just once. Appending to a list is fast; much faster than making a new array each time.
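    A rough way to see the difference for yourself (exact timings vary by machine; the array shapes here are arbitrary):

```python
import timeit
import numpy as np

rows = [np.arange(100) for _ in range(1000)]

def concat_in_loop():
    # Reallocates and copies the whole array on every iteration.
    out = np.empty((0, 100))
    for r in rows:
        out = np.concatenate((out, r[None, :]), axis=0)
    return out

def append_then_array():
    # Cheap list appends, one array construction at the end.
    collected = []
    for r in rows:
        collected.append(r)
    return np.array(collected)

t_loop = timeit.timeit(concat_in_loop, number=3)
t_list = timeit.timeit(append_then_array, number=3)
print(t_loop > t_list)  # repeated concatenate is typically far slower
```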

  • 2020-12-16 01:32

    Assume you have a large list of 2D numpy arrays, all with the same number of columns but different numbers of rows, like this:

    x = [numpy_array1(r_1, c),......,numpy_arrayN(r_n, c)]

    You can concatenate them pairwise like this:

    while len(x) != 1:
        if len(x) == 2:
            x = np.concatenate((x[0], x[1]))
            break
        for i in range(0, len(x), 2):
            if (i + 1) < len(x):
                x[i] = np.concatenate((x[i], x[i + 1]))
            # an odd trailing element is left as-is; the x[::2] slice keeps it
    
        x = x[::2]
    