On a Linux desktop (RHEL4) I want to extract a range of bytes (typically less than 1000) from within a large file (>1 GB). I know the offset into the file and the size of the extract. How can I do this?
Try dd:
dd skip=102567 count=253 if=input.binary of=output.binary bs=1
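For reuse this can be wrapped in a tiny shell function; a minimal sketch (the function name extract_bytes is mine, not from the answer). With bs=1, dd counts both skip= and count= in single bytes, which is what makes the byte-exact addressing work:

# Write $3 bytes, starting at byte offset $2 of file $1, to stdout.
extract_bytes() {
    dd if="$1" skip="$2" count="$3" bs=1 2>/dev/null
}

extract_bytes input.binary 102567 253 > output.binary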
head -c + tail -c
Not sure how it compares to dd in efficiency, but it is fun:
printf "123456789" | tail -c+2 | head -c3
picks 3 bytes, starting at the 2nd one:
234
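Applied to the question's numbers it would look like the line below; tail -c +N is 1-indexed, so a 0-based byte offset has to be bumped by one:

tail -c "+$((102567 + 1))" input.binary | head -c 253 > output.binary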
See also: https://stackoverflow.com/a/1272995/895245
The dd command can do all of this. Look at the skip parameter (skips blocks of the input before reading) and/or the seek parameter (skips blocks of the output before writing) as part of the call.
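A hedged sketch of the difference between the two (the file names are illustrative): skip= positions the read within the input, while seek= positions the write within the output:

# skip: read 253 bytes starting 102567 bytes into the input
dd if=input.binary of=output.binary bs=1 skip=102567 count=253
# seek: write those bytes back at the same offset of another file,
# without truncating what follows (conv=notrunc)
dd if=output.binary of=target.binary bs=1 seek=102567 conv=notrunc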
This is an old question, but I'd like to add another version of the dd command that is better suited to large chunks of bytes:
dd if=input.binary of=output.binary skip=$offset count=$bytes iflag=skip_bytes,count_bytes
where $offset and $bytes are numbers in byte units.
The difference from Thomas's accepted answer is that bs=1 does not appear here. bs=1 sets the input and output block size to 1 byte, which makes it terribly slow when the number of bytes to extract is large.
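A concrete call with the numbers from the question. One caveat worth flagging: skip_bytes and count_bytes are GNU extensions (added, if I remember right, around coreutils 8.16), so the much older dd shipped with RHEL4 almost certainly lacks them:

offset=102567
bytes=253
dd if=input.binary of=output.binary skip="$offset" count="$bytes" iflag=skip_bytes,count_bytes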
Even faster
dd bs=<req len> count=1 skip=<req offset> if=input.binary of=output.binary
Note that skip= is counted here in blocks of bs bytes, so this reads from the right place only when the byte offset is an exact multiple of the requested length.
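A sketch of the arithmetic this implies, assuming an offset that divides evenly (the numbers are made up for illustration):

offset=102400   # byte offset; must be a multiple of the extract length here
length=1024     # bytes to extract
dd bs="$length" count=1 skip=$((offset / length)) if=input.binary of=output.binary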