Downloading an Entire Web Site with wget or HTTrack

If you ever need to download an entire website, perhaps for offline viewing, wget can do the job. For example:

$ wget \
     --recursive \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --restrict-file-names=windows \
     --domains website.org \
     --no-parent \
         www.website.org/tutorials/html/

This command recursively downloads www.website.org/tutorials/html/ along with everything each page needs to display.

The options are:

--recursive: follow links and download the whole directory tree.
--no-clobber: don't overwrite files that already exist, so an interrupted run can be repeated safely.
--page-requisites: also fetch the images, CSS, and other assets each page needs.
--html-extension: save files with an .html extension (newer wget versions call this --adjust-extension).
--convert-links: rewrite links in the downloaded pages so they point at the local copies.
--restrict-file-names=windows: restrict file names to characters that are valid on Windows.
--domains website.org: don't follow links outside website.org.
--no-parent: don't ascend above the starting directory, so only /tutorials/html/ is downloaded.

Source: http://bit.ly/14dEluH

Is that all? No! There are a number of other option combinations you can use:

# Single page plus the assets it needs, with links converted for local viewing:
wget -p -k http://www.example.com/
# Mirror only PDF and JPG files from the given path:
wget -A pdf,jpg -m -p -E -k -K -np http://site/path/
# Full mirror of everything below the given path:
wget -m -p -E -k -K -np http://site/path/
# Recursive download that ignores robots.txt, sends a browser user agent, and waits a random interval between requests:
wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/
# Mirror using the file names suggested by the server's Content-Disposition headers:
wget --user-agent=Mozilla --content-disposition --mirror --convert-links -E -K -p http://example.com/

But is that all? Are those the best options? No!

The best solution I have come across is HTTrack: it's fast, and it organizes the downloaded content with local URLs, so it's the best tool for cloning an online website for offline use.
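
As a rough sketch, a mirror with the HTTrack command-line client looks something like this. The URL and the output directory ~/mirror are placeholders, and the "+*.example.com/*" filter is an assumed scan rule that keeps the crawl on the same domain:

$ httrack "http://www.example.com/" -O ~/mirror "+*.example.com/*" -v

Once the crawl finishes, the output directory contains a browsable local copy starting from its index.html, and re-running the same command updates the existing mirror rather than starting over.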

© Heshan Wanigasooriya