Crawl starts a depth-first traversal of the Web at the specified URLs, storing every JPEG image that matches the configured constraints. Crawl is fairly fast and can be terminated gracefully; after termination, it can be restarted exactly where it left off. It also keeps a persistent database so that multiple crawls do not revisit sites.
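The combination of depth-first traversal, a persistent visited database, and restartability described above can be sketched generically. This is not crawl's actual code or on-disk format; the in-memory `WEB` graph, the `crawl-state.json` file, and all function names are illustrative assumptions standing in for real HTTP fetching and link extraction.

```python
import json
import os

# Hypothetical in-memory "web": page -> outgoing links (stands in for HTTP).
WEB = {
    "http://a.example/": ["http://a.example/1", "http://a.example/2"],
    "http://a.example/1": ["http://a.example/2", "http://a.example/3"],
    "http://a.example/2": [],
    "http://a.example/3": [],
}

STATE = "crawl-state.json"  # persistent database: visited set + pending stack

def load_state(seeds):
    # Resume from the saved state if present, otherwise start from the seeds.
    if os.path.exists(STATE):
        with open(STATE) as f:
            s = json.load(f)
        return set(s["visited"]), s["stack"]
    return set(), list(seeds)

def save_state(visited, stack):
    with open(STATE, "w") as f:
        json.dump({"visited": sorted(visited), "stack": stack}, f)

def crawl(seeds, budget=None):
    """Depth-first crawl; stops after `budget` fetches, saving state so a
    later call resumes exactly where this one stopped."""
    visited, stack = load_state(seeds)
    fetched = []
    while stack and (budget is None or len(fetched) < budget):
        url = stack.pop()  # LIFO pop gives depth-first order
        if url in visited:
            continue
        visited.add(url)
        fetched.append(url)
        for link in WEB.get(url, []):  # a real crawler fetches and parses here
            if link not in visited:
                stack.append(link)
    save_state(visited, stack)
    return fetched

first = crawl(["http://a.example/"], budget=2)  # "terminated" after 2 pages
rest = crawl(["http://a.example/"])             # restarts at the same spot
os.remove(STATE)                                # clean up the state file
```

The second call ignores the seeds, loads the saved stack and visited set, and finishes the traversal without refetching any page from the first run.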
Tags: Internet, Web, Indexing/Search
Release Notes: Crawling is more reliable, and performance has improved for large crawls.
Release Notes: A complete rewrite for higher performance, including support for all media types, asynchronous DNS lookups, and an optional wait time between hosts. The configuration file specifies the permissible size of media objects per media type, among other options.
Release Notes: This release includes portability fixes and supports downloading media types other than just images.
Release Notes: This release fixes a bug where crawl would stop early after encountering errors on subsequent connections. The verbosity level and the number of concurrent connections are now tunable.
No changes have been submitted for this release.