I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as "full web pages", including styles and images.
Is there a way to do this without reading and saving each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page on a website, then something, at some level, is going to have to read every page on that website.
It's probably worth looking into the Linux app wget, which may do something like what you want.
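For instance, a mirroring run could look something like the following. These are standard wget flags, but treat this as a starting point rather than a definitive recipe, and note that example.com is just a placeholder for the site you actually want:

```
wget --mirror --convert-links --page-requisites --no-parent \
     --domains example.com https://example.com/
```

Here `--mirror` turns on recursive retrieval, `--page-requisites` pulls in the stylesheets and images needed to render each page, `--convert-links` rewrites links so the saved copy works offline, and `--no-parent` / `--domains` keep the crawl from wandering off.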
One word of warning: sites often have links out to other sites, which have links to still more sites, and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
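If you do end up writing your own spider, that condition can be as simple as comparing hostnames before enqueueing a link. Here's a minimal sketch in Python using only the standard library; the starting URL, page limit, and the save step are all placeholders you'd replace with your own logic:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=50):
    """Breadth-first crawl that never leaves the starting domain."""
    allowed_domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = deque([start_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to download
        # ... save `html` (plus its styles and images) to disk here ...
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # The "stop if different domain" condition:
            if urlparse(absolute).netloc != allowed_domain:
                continue
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)


if __name__ == "__main__":
    crawl("https://example.com/")  # placeholder starting point
```

The key line is the `netloc` comparison: any link whose hostname differs from the starting page's is silently dropped, so the crawl can't chase external links forever.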