How can I tail a zipped file without reading its entire contents?

Posted by 廉价感情 on 2020-01-20 02:56:26

Question


I want to emulate the functionality of gzcat | tail -n.

This would be helpful when dealing with huge files (a few GB or so). Can I tail the last few lines of such a file without reading it from the beginning? I suspect this isn't possible, since I'd guess that with gzip the encoding at any point depends on all the preceding text.

But I'd still like to hear whether anyone has tried something similar, perhaps by looking into a compression algorithm that could provide such a feature.


Answer 1:


No, you can't. gzip works on a stream and continually adapts its internal coding to what the stream has contained so far; that adaptivity is how it achieves its high compression ratio.

Without knowing the contents of the stream before a certain point, it's impossible to know how to decompress from that point on.

Any format that lets you decompress arbitrary parts of the data has to pay for it at compression time, for example by resetting its state at block boundaries or recording an index.
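With a plain gzip file, then, the best you can do is stream it: every byte still gets decompressed, but memory use stays constant because tail keeps only the last n lines. A minimal sketch (the file name and sizes here are made up for illustration):

```shell
# Create a sample compressed log (a stand-in for a multi-GB file).
seq 1 100000 | gzip > big.log.gz

# Stream-decompress; tail buffers only the last 10 lines, so memory
# stays constant, but the whole stream must still be decoded.
gzip -dc big.log.gz | tail -n 10
```

This is exactly `gzcat | tail -n`: O(file) time, but no temporary decompressed file on disk.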




Answer 2:


BGZF is the block-compressed gzip variant used for the indexed BAM files that Samtools creates; BGZF files are randomly accessible.

http://samtools.sourceforge.net/




Answer 3:


If you have control over what goes into the file in the first place, and the container is anything like a ZIP file, you could store chunks of a predetermined size as members with numerically increasing names and then decompress just the last chunk/member.




Answer 4:


If it's an option, then bzip2 might be a better compression algorithm to use for this purpose.

Bzip2 uses a block compression scheme. If you take a chunk from the end of your file that you are sure is large enough to contain the entire last block, you can recover its contents with bzip2recover.

The block size is chosen when the file is written: the compression options -1 (or --fast) through -9 (or --best) correspond to block sizes of 100 kB to 900 kB, and the default is 900 kB.

The bzip2 command-line tools don't give you a friendly way to do this in a pipeline, but then, given that bzip2 is block- rather than stream-oriented, perhaps that's not surprising.
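The recovery trick can be sketched as follows, assuming the bzip2recover tool that ships with bzip2 is installed; the 200 kB tail size is an assumption that comfortably covers the last 100 kB block at -1:

```shell
# Sample data compressed with the smallest (100 kB) block size.
seq 1 100000 | bzip2 -1 > big.log.bz2

# Take enough raw bytes from the end to cover the last block, let
# bzip2recover salvage every complete block it finds, then tail them.
tail -c 200000 big.log.bz2 > tailpart.bz2
bzip2recover tailpart.bz2             # writes rec00001tailpart.bz2, ...
bzcat rec*tailpart.bz2 | tail -n 10
```

bzip2recover simply scans for block boundaries and wraps each complete block in its own stream, so any partial block at the start of the tail slice is discarded and the rest decompresses normally.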




Answer 5:


zindex creates and queries an index on a compressed, line-based text file in a time- and space-efficient way.

https://github.com/mattgodbolt/zindex




Answer 6:


An example of a fully gzip-compatible pseudo-random access format is dictzip:

"For compression, the file is divided up into 'chunks' of data, each chunk is less than 64 kB. [...]

To perform random access on the data, the offset and length of the data are provided to library routines. These routines determine the chunk in which the desired data begins, and decompress that chunk. Consecutive chunks are decompressed as necessary."




Answer 7:


Well, you can do it if you first create an index for each file ...

I've developed a command-line tool that creates indexes for gzip files, allowing very quick random access inside them, and it can build the index interleaved with other actions (extract, tail, continuous tail, etc.): https://github.com/circulosmeos/gztool

You can simply tail (-t) and the index will be created automatically: if you do the same again in the future it will be much quicker, and even the first time it takes about as long as gunzip | tail:

$ gztool -t my_file.gz


Source: https://stackoverflow.com/questions/1183001/how-can-i-tail-a-zipped-file-without-reading-its-entire-contents
