You can either use wget to download the webpage, or save it from Firefox via Ctrl+S (Save Page As) to keep the HTML for offline reading.

[cc lang="bash" escaped="true" width="600"]

wget http://domain.com/website.html; # download webpage

# become root
sudo bash;
# or
su;

apt-get update; apt-get install lynx; # install the lynx text browser

# extract all links that contain mp3 or MP3
lynx -dump "website.html" | awk '/http/{print $2}' | grep -E 'mp3|MP3' > LinkList.txt;

# FORGET ABOUT: sed -n 's/.*href="\([^"]*\).*/\1/p' website.html
# IT WILL NOT EXTRACT ALL LINKS! (the greedy .* keeps only one href per line)
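
# if lynx is not available, grep -o is a possible alternative (a sketch,
# not the original method): -o prints every match on its own line,
# even when several hrefs sit on the same line
# note: relative links would still need the base URL prepended by hand
grep -oE 'href="[^"]*\.(mp3|MP3)"' website.html | cut -d'"' -f2 > LinkList.txt;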

wget -i LinkList.txt; # download all links in that list to the current directory

[/cc]
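
If lynx feels like overkill, wget alone can crawl the page and keep only the mp3 files in one pass. A minimal sketch (not from the original article; it assumes the mp3 links point at the same host, since wget does not follow links to other hosts unless -H is given):

[cc lang="bash" escaped="true" width="600"]

# -r recurse, -l1 only one level deep, -nd no directory tree,
# -A download only files matching the accept list
wget -r -l1 -nd -A mp3,MP3 http://domain.com/website.html;

[/cc]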

AWESOME! 🙂
