Using wget and cron to download webpages


Question


OK, so I know I can use:

wget -r <website> > <file>

to get a webpage and save it. My question is, how would I use cron and wget to get a webpage on an hourly, or even per-minute, basis, save the pages into a folder, zip and tarball it, and keep adding to it for review at a later date?

I know I can do this manually; my goal is basically to download it every 10-20 minutes for roughly 4 hours (it doesn't matter if it goes longer), collect everything in a nice directory, then zip said directory to conserve space and check it later in the day.


Answer 1:


To edit the cron table:

crontab -e

You can add an entry like this

0,20,40 * * * *  wget -q -O ~/files/file-`date +\%m\%d\%y\%H\%M`.html URL

This downloads and saves the page every 20 minutes. Note that % has a special meaning inside a crontab entry and must be escaped as \%, and -O tells wget which file to write the page to.

Here is a small reference on crontab expressions so you can adjust the values.
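As a quick reference for the field layout (standard crontab syntax, shown here as a sketch rather than quoted from the answer):

# minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-7, Sunday is 0 or 7)
# for example, run a command every 20 minutes of every hour:
*/20 * * * *  /path/to/command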

To tar the files automatically, the crontab would be slightly more complex:

0,20,40 * * * *  wget -q -O ~/files`date +\%m\%d\%y`/file-`date +\%H\%M`.html URL
0 12 * * *       tar cvf ~/archive-`date +\%m\%d\%y`.tar ~/files`date +\%m\%d\%y`

This would do it at noon. If you want to do it at midnight, it's slightly more complicated because you need to tar the previous day's directory, but I think with this you'll get the idea.
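For the midnight case, a rough sketch (assuming GNU date, whose -d yesterday option is not part of the original answer) would archive the previous day's directory:

0 0 * * *  tar cvf ~/archive-`date -d yesterday +\%m\%d\%y`.tar ~/files`date -d yesterday +\%m\%d\%y`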




Answer 2:


Or without cron:

for i in `seq 1 10`; do wget -r http://google.de -P $(date +%k_%M) && sleep 600; done

This runs 10 times, once every 10 minutes.
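To cover the roughly four-hour window from the question, a variant of the same loop (just a sketch: it fetches a placeholder URL every 20 minutes, 12 times, into one dated directory borrowed from the first answer's naming) could be:

mkdir -p ~/files`date '+%m%d%y'`
for i in `seq 1 12`; do wget -q -O ~/files`date '+%m%d%y'`/file-`date '+%H%M'`.html URL; sleep 1200; done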

EDIT: Use zip like this

zip foo.zip file1 file2 allfile*.html
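To compress the whole dated directory from the question instead of individual files (a sketch reusing the ~/files<date> naming from the first answer; -r makes zip recurse into the directory):

zip -r archive-`date '+%m%d%y'`.zip ~/files`date '+%m%d%y'`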


Source: https://stackoverflow.com/questions/4210840/using-wget-and-cron-to-download-webpages
