How to sort a 3G bytes access log file?

Submitted by 自闭症网瘾萝莉.ら on 2020-01-16 19:15:02

Question


Hi all: I have a 3GB Tomcat access log named urls, where each line is a URL. I want to count the occurrences of each URL and sort the URLs by those counts. I did it this way:

awk '{print $0}' urls | sort | uniq -c | sort -nr >> output

But it is taking a really long time to finish; it has already been running for 30 minutes and is still going. The log file looks like this:

/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/loan/recent_apply_info?passportId=Y20151206000011745
/loan/recent_apply_info?passportId=Y20160331000000423
/open_api/borrow_business/get_apply_by_user
...

Is there any other way to process and sort a 3GB file? Thanks in advance!


Answer 1:


I'm not sure why you're using awk at the moment - `awk '{print $0}'` just passes every line through unchanged, so it's not doing anything useful.

I would suggest using something like this:

awk '{ ++urls[$0] } END { for (i in urls) print urls[i], i }' urls | sort -nr

This builds up a count of each URL and then sorts the output.
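To see it in action, the pipeline can be tried end to end on the sample lines from the question (the file name urls.sample is just for illustration):

```shell
# Build a small sample log containing the lines shown in the question
cat > urls.sample <<'EOF'
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/loan/recent_apply_info?passportId=Y20151206000011745
/loan/recent_apply_info?passportId=Y20160331000000423
/open_api/borrow_business/get_apply_by_user
EOF

# Count each URL in a single awk pass, then sort by count descending;
# the first output line is: 5 /open_api/borrow_business/get_apply_by_user
awk '{ ++urls[$0] } END { for (i in urls) print urls[i], i }' urls.sample | sort -nr
```

Because awk does the counting in one pass over the file, only the (much smaller) set of distinct URLs ever reaches sort, instead of all 3GB of lines.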




Answer 2:


I generated a sample file of 3,200,000 lines, amounting to 3GB, using Perl like this:

perl -e 'for($i=0;$i<3200000;$i++){printf "%d, %s\n",int rand 1000, "0"x1000}' > BigBoy

I then tried sorting it in one step, and then splitting it into 2, 4, and 8 parts, sorting the parts in parallel, and merging the results.

This resulted, on my machine at least, in a very significant speedup.

Here is the script. The filename is hard-coded as BigBoy but could easily be changed; the number of parts to split the file into must be supplied as a parameter.

#!/bin/bash -xv
################################################################################
# Sort large file by parts and merge result
#
# Generate sample large (3GB with 3,200,000 lines) file with:
# perl -e 'for($i=0;$i<3200000;$i++){printf "%d, %s\n",int rand 1000, "0"x1000}' > BigBoy
################################################################################
file=BigBoy
N=${1:-1}                               # number of parts to split into; defaults to 1
echo "$N"
if [ "$N" -eq 1 ]; then
   # Straightforward single-process sort
   sort -n "$file" > "sorted.$N"
else
   rm -f sortedparts-* parts-* 2> /dev/null
   tlines=$(wc -l < "$file")            # total number of lines
   echo "$tlines"
   ((plines=tlines/N))                  # lines per part
   echo "$plines"
   split -l "$plines" "$file" parts-
   for f in parts-*; do
      sort -n "$f" > "sortedparts-$f" & # sort each part in the background
   done
   wait                                 # wait for all background sorts to finish
   sort -n -m sortedparts-* > "sorted.$N"  # merge the pre-sorted parts
fi

Needless to say, the resulting sorted files are identical :-)
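Worth adding, assuming you have GNU sort (coreutils 8.6 or newer): sort can parallelize internally, so the explicit split-and-merge above can sometimes be replaced by a single invocation that raises the thread count and memory buffer, with the C locale for faster byte-wise comparisons:

```shell
# Assumes GNU sort; BigBoy is the same sample file generated above.
# --parallel sets the number of sorting threads, -S the in-memory buffer,
# and LC_ALL=C makes comparisons plain byte-wise.
LC_ALL=C sort -n --parallel=8 -S 2G BigBoy > sorted.gnu
```

In my experience this is often competitive with the manual split-and-merge, but measure on your own data and hardware.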



Source: https://stackoverflow.com/questions/36834994/how-to-sort-a-3g-bytes-access-log-file
