I want to implement a Java method which takes a URL as input and stores the entire webpage, including CSS, images, and JS (all related resources), on my disk. I have used the Jsoup HTML parser.
This GitHub project does this, using Jsoup. There is no need to write it again if it already exists!
EDIT: I made an improved version of this class and added new features:
It can:
Extract URLs from linked or inline CSS (e.g. for background images) and download and save those too.
Download all the files (images, scripts, etc.) in multiple threads.
Report details about progress and errors.
Fetch HTML frames embedded in the document, including nested frames.
Some caveats:
Uses Jsoup and OkHttp, so you need those libraries.
GPL-licensed, for now anyway.
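If you want to see the core idea without pulling in the whole project, here is a minimal sketch: fetch the HTML, collect the `src`/`href` resource URLs, download each file next to the page, and rewrite the references so the local copy is self-contained. This uses only the JDK; the regex-based extraction, `localNameFor`, and `savePage` are simplified helpers of my own, not the linked project's API. A robust tool should parse the DOM with Jsoup instead of a regex.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PageSaver {

    // Map a resource URL to a safe local file name.
    static String localNameFor(String url) {
        String path = url.replaceFirst("[?#].*$", "");      // drop query/fragment
        String name = path.substring(path.lastIndexOf('/') + 1);
        if (name.isEmpty()) name = "index.html";
        return name.replaceAll("[^A-Za-z0-9._-]", "_");     // sanitize for the filesystem
    }

    // Naive extraction of src/href attribute values with a regex;
    // real code should use Jsoup's doc.select("img[src], link[href], script[src]").
    static List<String> extractResourceUrls(String html) {
        List<String> urls = new ArrayList<>();
        Matcher m = Pattern
            .compile("(?:src|href)\\s*=\\s*[\"']([^\"']+)[\"']", Pattern.CASE_INSENSITIVE)
            .matcher(html);
        while (m.find()) urls.add(m.group(1));
        return urls;
    }

    // Download the page and its resources, then rewrite the page
    // so it points at the local copies.
    public static void savePage(String pageUrl, Path outDir) throws Exception {
        Files.createDirectories(outDir);
        String html;
        try (InputStream in = new URL(pageUrl).openStream()) {
            html = new String(in.readAllBytes());
        }
        URI base = URI.create(pageUrl);
        for (String ref : extractResourceUrls(html)) {
            String abs = base.resolve(ref).toString();      // make the URL absolute
            String local = localNameFor(abs);
            try (InputStream in = new URL(abs).openStream()) {
                Files.write(outDir.resolve(local), in.readAllBytes());
            } catch (IOException e) {
                System.err.println("Failed: " + abs + " (" + e.getMessage() + ")");
                continue;                                   // skip broken resources
            }
            html = html.replace(ref, local);                // point the page at the local file
        }
        Files.writeString(outDir.resolve("index.html"), html);
    }

    public static void main(String[] args) throws Exception {
        savePage(args[0], Path.of("saved-page"));
    }
}
```

The sketch downloads sequentially and ignores CSS `url(...)` references and frames; the project above handles those, plus multithreaded downloads via an executor.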