-Feed Crawler
-============
+HTTRUTA Feed Crawler Project
+============================
Download all links from a feed using httrack. This is the engine behind the
"Cache" feature of the Semantic Scuttle instance at https://links.sarava.org.
Place this script somewhere and set up a cronjob like this:
-`*/5 * * * * /var/sites/arquivo/httracker/httracker &> /dev/null`
+`*/5 * * * * /var/sites/arquivo/httruta/httracker > /dev/null 2>&1`
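
Note that cron runs jobs with `/bin/sh`, so the crontab line uses the portable
`> /dev/null 2>&1` redirection rather than bash's `&>`. For context, here is a
minimal sketch of the kind of loop such a script can run, assuming an RSS feed
whose item links sit in `<link>` elements; the feed URL, mirror directory, and
hashing scheme below are illustrative assumptions, not the script's real
configuration:

```sh
#!/bin/sh
# Sketch: fetch a feed, extract item links, mirror each one with httrack.
# FEED_URL and MIRROR_DIR are hypothetical placeholders.
FEED_URL="https://links.sarava.org/rss.php"
MIRROR_DIR="/var/sites/arquivo/mirror"

mkdir -p "$MIRROR_DIR"

curl -s "$FEED_URL" |
  grep -o '<link>[^<]*</link>' |   # also catches the channel-level <link>; fine for a sketch
  sed -e 's|</*link>||g' |
  while read -r url; do
    # One folder per link, keyed by a hash, so reruns skip finished mirrors.
    dir="$MIRROR_DIR/$(printf %s "$url" | md5sum | cut -d' ' -f1)"
    [ -d "$dir" ] && continue
    httrack "$url" -O "$dir"
  done
```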
TODO
----
- Include all sites already downloaded by scuttler.
- Support for other fetchers like youtube-dl and quvi (see the sketch after this list).
-- Rename project and repository to "httruta".
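
As a hypothetical sketch of the fetcher TODO above (none of this is
implemented yet), the script could dispatch on the URL before falling back to
httrack; the patterns and the `fetch` name are illustrative only:

```sh
# Hypothetical fetcher dispatch: pick a downloader by URL pattern,
# defaulting to httrack for ordinary web pages.
fetch() {
  url="$1"; dir="$2"
  case "$url" in
    *youtube.com/watch*|*vimeo.com/*)
      # Media pages that httrack cannot mirror go through youtube-dl.
      youtube-dl -o "$dir/%(title)s.%(ext)s" "$url" ;;
    *)
      httrack "$url" -O "$dir" ;;
  esac
}
```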