I have a binary file that has a record structure of 400 24-bit signed big-endian integers followed by a 16-bit signed big-endian short. What I want to do is this:
I had a file of about a terabyte of four-channel 24-bit PCM.
I certainly didn't want to touch any more of it than the part I needed at a time, so what I did was like this:
import numpy as np
from numpy.lib.stride_tricks import as_strided
rawdatamap = np.memmap('4ch24bit800GBdatafile.pcm', dtype=np.dtype('u1'), mode='r')
# in case of a truncated frame at the end
usablebytes = rawdatamap.shape[0] - rawdatamap.shape[0] % 12
frames = usablebytes // 12
rawbytes = rawdatamap[:usablebytes]
# 12 bytes per frame, 3 bytes per channel: each 32-bit read picks up
# one 24-bit sample plus one junk byte from the neighbouring sample
realdata = as_strided(rawbytes.view(np.int32), strides=(12, 3), shape=(frames, 4))
someusefulpart = realdata[hugeoffset:hugeoffset + smallerthanram] & 0x00ffffff
This pulls a copy out of the file that is only smallerthanram frames long (the & produces a new in-memory array), so you never touch more of the file than you need.
Note the byte mask! You need it to chop off the most significant byte of each 32-bit word, which is junk: with little-endian data it is the first byte of the next sample.
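To see the whole trick end to end, here is a hypothetical miniature version of it on a small synthetic in-memory buffer instead of a memmap (the sizes and values are made up for the demo; the view/as_strided/mask steps are the same):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# 3 frames x 4 channels of little-endian 24-bit samples, packed into bytes.
samples = np.arange(12, dtype=np.uint32) * 0x010203
# 36 payload bytes plus a little padding so the final 32-bit read stays in bounds.
raw = np.zeros(40, dtype=np.uint8)
raw[0:36:3] = (samples & 0xff).astype(np.uint8)
raw[1:36:3] = ((samples >> 8) & 0xff).astype(np.uint8)
raw[2:36:3] = ((samples >> 16) & 0xff).astype(np.uint8)

frames = 3
# Same strides trick: 12 bytes per frame, 3 bytes per channel.
realdata = as_strided(raw.view(np.dtype('<u4')), strides=(12, 3), shape=(frames, 4))
decoded = realdata & 0x00ffffff   # mask off the junk top byte
print(decoded.reshape(-1))        # recovers the original samples
```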
You could also apply it to a single datum like this:
scaled_ch2_datum_at_framenum = scalefactor * (realdata[framenum, 1] & 0x00ffffff) - shiftoffset
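One caveat: the mask gives you the *unsigned* 24-bit value, but the samples are signed. A sketch of the extra sign-extension step (the helper name is my own): shift the 24-bit value up into the top of an int32, then arithmetic-shift it back down.

```python
import numpy as np

def sign_extend_24(masked):
    """masked: array of unsigned 24-bit values (0 .. 2**24 - 1)."""
    # Left shift puts bit 23 into the int32 sign bit; the arithmetic
    # right shift then replicates it back down through the top byte.
    return (masked.astype(np.int32) << 8) >> 8

vals = np.array([0x000001, 0x7fffff, 0x800000, 0xffffff], dtype=np.uint32)
print(sign_extend_24(vals))   # 1, 8388607, -8388608, -1
```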
It's a bit messy, but as good as it gets for now.
You will probably need a 64-bit system to do this, since the whole memmap has to fit into the process's address space.
NB. This is for little-endian data. To handle big-endian data, you'd use a big-endian dtype in the view (e.g. np.dtype('>u4')) and replace ...&0x00ffffff
with (...&0xffffff00)>>8. The parentheses are needed because >> binds more tightly than & in Python, and here the junk byte is the *least* significant one (again the first byte of the next sample).
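A hypothetical two-sample sketch of that big-endian variant (synthetic buffer and values of my own choosing), showing the mask-then-shift in action:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

raw = np.zeros(16, dtype=np.uint8)   # padded so the 32-bit reads stay in bounds
raw[0:3] = [0x12, 0x34, 0x56]        # one big-endian 24-bit sample
raw[3:6] = [0xab, 0xcd, 0xef]        # the next sample (supplies the junk byte)

# Walk the big-endian uint32 view in 3-byte steps, one word per sample.
words = as_strided(raw.view(np.dtype('>u4')), strides=(3,), shape=(2,))
decoded = (words & 0xffffff00) >> 8  # parentheses: >> binds tighter than &
print([hex(v) for v in decoded])     # ['0x123456', '0xabcdef']
```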