h5py: Correct way to slice array datasets

北恋 2020-12-24 08:14

I'm a bit confused here:

As far as I have understood, h5py's .value method reads an entire dataset and dumps it into an array, which is slow and discouraged.

3 Answers
  •  天命终不由人
    2020-12-24 08:55

    For fast slicing with h5py, stick to the "plain-vanilla" slice notation:

    file['test'][0:300000]
    

    or, for example, reading every other element:

    file['test'][0:300000:2]
    

    Simple slicing (slice objects and single integer indices) should be very fast, as it translates directly into HDF5 hyperslab selections.

    The expression file['test'][range(300000)] invokes h5py's version of "fancy indexing", namely, indexing via an explicit list of indices. There's no native way to do this in HDF5, so h5py implements a (slower) method in Python, which unfortunately has abysmal performance when the lists are > 1000 elements. Likewise for file['test'][np.arange(300000)], which is interpreted in the same way.

    See also:

    [1] http://docs.h5py.org/en/latest/high/dataset.html#fancy-indexing

    [2] https://github.com/h5py/h5py/issues/293
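The difference between the two indexing paths above can be sketched with a small, self-contained script. The file name and sizes here are illustrative; `driver="core"` with `backing_store=False` keeps the HDF5 file in memory so nothing is written to disk:

```python
import numpy as np
import h5py

# Illustrative in-memory HDF5 file (driver="core" keeps it in RAM).
with h5py.File("demo.h5", "w", driver="core", backing_store=False) as f:
    dset = f.create_dataset("test", data=np.arange(1_000_000))

    # Fast: plain slices translate directly to HDF5 hyperslab selections.
    a = dset[0:300000]      # contiguous block
    b = dset[0:300000:2]    # strided selection, still a single hyperslab

    # Slower path: an explicit list of indices triggers h5py's
    # Python-level "fancy indexing" fallback (indices must be increasing).
    c = dset[list(range(1000))]

print(a.shape, b.shape, c.shape)
```

Both paths return ordinary NumPy arrays; the difference is purely in how the selection is communicated to the HDF5 library, which is why the slice forms scale so much better for large selections.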
