Loading every nth element with numpy.fromfile [duplicate]


Question


I want to create a numpy array from a binary file using np.fromfile. The file contains a 3D array, and I'm only concerned with a certain cell in each frame.

x = np.fromfile(file, dtype='int32', count=width*height*frames)
vals = x[5::width*height]

The code above works in principle, but my file is very large and reading it all into x causes memory errors. Is there a way to use fromfile so that only vals is read in the first place?
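
One alternative worth noting: np.memmap gives a lazily mapped view of the file that can be sliced without reading everything into memory. A minimal sketch, assuming the file holds raw int32 values; the dimensions and file name below are placeholders:

import numpy as np

width, height, frames = 64, 48, 1000  # placeholder dimensions
file = 'data.bin'                     # placeholder path

# memmap maps the file into virtual memory instead of reading it,
# so only the pages touched by the slice are actually loaded.
mm = np.memmap(file, dtype='int32', mode='r', shape=(width*height*frames,))
vals = np.array(mm[5::width*height])  # copy out just the wanted cells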


Answer 1:


This may be horribly inefficient but it works:

import numpy as np

def read_in_chunks(fn, offset, step, steps_per_chunk, dtype=np.int32):
    # Read every `step`-th element starting at `offset`, one chunk at a
    # time, so at most `steps_per_chunk * step` elements are in memory.
    out = []
    with open(fn, 'rb') as fd:
        while True:
            chunk = (np.fromfile(fd, dtype=dtype, count=steps_per_chunk*step)
                     [offset::step])
            if chunk.size == 0:
                break
            out.append(chunk)
    return np.r_[tuple(out)]

x = np.arange(100000)
x.tofile('test.bin')                            # write some test data
b = read_in_chunks('test.bin', 2, 100, 6, int)  # every 100th element, from index 2
print(b)
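
As a quick sanity check, the result should agree with a naive full read of the small test file:

expected = np.fromfile('test.bin', dtype=int)[2::100]
assert np.array_equal(b, expected)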

Update:

Here's a version that uses seek to skip over the unwanted data. It works for me, but is largely untested.

def skip_load(fn, offset, step, dtype=np.float64, n=10**100):
    # Seek directly to every `step`-th element starting at `offset`,
    # reading at most `n` elements (the default is effectively unbounded).
    elsize = np.dtype(dtype).itemsize
    step *= elsize    # convert element counts to byte counts
    offset *= elsize
    fd = open(fn, 'rb') if isinstance(fn, str) else fn
    out = []
    pos = fd.tell()
    # Smallest byte position >= pos that is congruent to offset modulo step.
    target = ((pos - offset - 1) // step + 1) * step + offset
    fd.seek(target)
    while n > 0:
        if fd.tell() != target:
            # The seek did not land where expected; return what we have.
            return np.frombuffer(b"".join(out), dtype=dtype)
        out.append(fd.read(elsize))
        n -= 1
        if len(out[-1]) < elsize:
            # Hit end of file mid-element; drop the partial read.
            return np.frombuffer(b"".join(out[:-1]), dtype=dtype)
        target += step
        fd.seek(target)
    return np.frombuffer(b"".join(out), dtype=dtype)
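
Reusing the test file from above, this should agree with the chunked reader:

b2 = skip_load('test.bin', 2, 100, dtype=int)
print(b2[:5])                 # [  2 102 202 302 402]
assert np.array_equal(b, b2)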


Source: https://stackoverflow.com/questions/42424837/loading-every-nth-element-with-numpy-fromfile
