Question
For example, when I open https://stackoverflow.com/ in a browser, the browser downloads not only the main page but also the images, JS, and CSS it references. But when I run curl https://stackoverflow.com/, only the main page's HTML is downloaded. Is there an option for curl or wget that downloads the images/JS/CSS as well? Or is there any other tool that can do this?
Answer 1:
wget -r (recursive retrieval) will save everything the page links to, including embedded images, CSS, and JS:

wget -r www.your-site.com
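If the goal is just to fetch one page together with everything needed to render it (rather than crawling the whole site), wget's page-requisites options are worth knowing. A sketch; the URLs are placeholders:

```shell
# Download a single page plus the assets needed to render it:
#   -p  (--page-requisites)  images, CSS, JS referenced by the page
#   -k  (--convert-links)    rewrite links so the saved copy works offline
#   -E  (--adjust-extension) save HTML/CSS files with proper extensions
wget -p -k -E https://stackoverflow.com/

# Recursive mirror, kept under control:
#   -r           recurse into linked pages
#   -l 2         limit recursion depth to 2
#   --no-parent  never ascend above the starting directory
wget -r -l 2 --no-parent -p -k https://www.example.com/
```

Note that some sites (Stack Overflow included) may block or rate-limit automated downloads, so results can vary.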
Source: https://stackoverflow.com/questions/48000468/how-to-download-links-in-the-html-of-a-url