Is there a \"canonical\" way of doing that? I\'ve been using head -n | tail -1
which does the trick, but I\'ve been wondering if there\'s a Bash tool that speci
All the above answers directly answer the question. But here's a less direct solution, built on a potentially more important idea, to provoke thought.
Since line lengths are arbitrary, all the bytes of the file before the nth line have to be read. If you have a huge file, or need to repeat this lookup many times, that cost adds up, and you should seriously consider whether you should be storing your data in a different way in the first place.
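To make the cost concrete, here is a minimal sketch of the naive approach (huge.txt and the line numbers are hypothetical); every lookup rereads the file from the beginning:

# Each head -n call scans from byte 0 up to line n, so k lookups
# over an N-line file approach O(k * N) work in total.
for n in 1000 2000000 5000000; do
    head -n "$n" huge.txt | tail -1
done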
The real solution is to have an index, e.g. at the start of the file, recording the byte positions where the lines begin. You could use a database format, or just add a table at the start of the file. Alternatively, create a separate index file to accompany your large text file.
e.g. you might create a list of byte offsets at which each line starts (the +1 is because tail -c +N is 1-based, and each newline adds one byte; LC_ALL=C makes awk's length() count bytes rather than characters):

LC_ALL=C awk '{ print c + 1; c += length($0) + 1 }' file.txt > file.idx
then read with tail, which actually seeks directly to the appropriate point in the file!
e.g. to get line 1000:

tail -c +$(awk 'NR==1000{print;exit}' file.idx) file.txt | head -1
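Putting the two steps together, here is a minimal sketch; the helper name getline_indexed and the FILE.idx naming convention are my own, not a standard tool:

# Print line N of FILE, building the byte-offset index FILE.idx on first use.
getline_indexed() {
    local file=$1 n=$2
    # Build the index once; each idx line holds the 1-based start offset of that line.
    [ -f "$file.idx" ] || LC_ALL=C awk '{ print c + 1; c += length($0) + 1 }' "$file" > "$file.idx"
    # Look up the offset of line n, then seek straight to it with tail -c.
    tail -c +"$(awk -v n="$n" 'NR == n { print; exit }' "$file.idx")" "$file" | head -1
}

getline_indexed file.txt 1000

The obvious trade-off is that the index must be rebuilt whenever the file changes, so this pays off only when the file is read far more often than it is written.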