Alkaline is a full-featured standalone search and index server. The spider is a fully remote indexing daemon which supports standard exclusion mechanisms such as robots.txt and "skip" meta tags, and allows multiple distinct configurations and search groups (searching many different sites from your server), including complex regexp indexing paths, authentication, filters for various document formats, XML-based online management and statistics, MRTG-compatible performance figures, and more.
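As a rough illustration of the exclusion standards such a spider must honor, the sketch below uses Python's standard urllib.robotparser to decide whether a URL may be fetched; the host and user-agent strings are placeholders, and this is not Alkaline's own code:

    # Check robots.txt before crawling, as a standards-compliant spider must.
    # (The "skip"/robots meta tags would additionally be checked in fetched HTML.)
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()  # fetch and parse the site's exclusion rules

    url = "http://www.example.com/private/page.html"
    if rp.can_fetch("ExampleSpider/1.0", url):
        print("allowed to index", url)
    else:
        print("excluded by robots.txt:", url)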
ASPseek is an Internet search engine written in C++ using the STL. It consists of an indexing robot, a search daemon, and a search frontend (CGI or Apache module). It can index as many as a few million URLs and search for words and phrases, use wildcards, and do Boolean searches. Search results can be limited to a given time period, site, or Web space (a set of sites), and sorted by relevance (PageRank is used) or date. It is optimized for multiple sites (threaded index, asynchronous DNS lookups, grouping of results by site, and Web spaces), but works just as well for searching a single site. Thanks to an optional Unicode storage mode, it can handle multiple languages/encodings at once, including multi-byte encodings such as Chinese. Other features include stopword and ispell support, a charset and language guesser, HTML templates for search results, excerpts, and query word highlighting.
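Its Boolean search can be pictured with a toy inverted index; the sketch below is illustrative only (ASPseek's real index is an on-disk C++ structure) and shows how AND and OR queries reduce to set operations on posting lists:

    from collections import defaultdict

    docs = {
        1: "search engine written in c++",
        2: "indexing robot and search daemon",
        3: "boolean search with wildcards",
    }

    # Inverted index: each word maps to the set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)

    print(index["search"] & index["boolean"])   # AND -> intersection: {3}
    print(index["robot"] | index["wildcards"])  # OR  -> union: {2, 3}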
DirList is a user directory system that runs as a CGI. It serves up user lists, searches on various user attributes, links to users' Web sites, and lets you define personalised user attributes, keeping everything synchronized automatically with the underlying operating system's user database at periodic intervals via cron.
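A cron-driven sync like that ultimately amounts to enumerating the operating system's account database, which on Unix is a single call; the sketch below shows the idea (the UID cutoff for skipping system accounts is an assumption):

    # Enumerate Unix accounts the way a periodic cron sync might,
    # pulling login name, UID, and GECOS (full name) for each user.
    import pwd

    for entry in pwd.getpwall():
        if entry.pw_uid >= 1000:  # skip system accounts; cutoff is an assumption
            print(entry.pw_name, entry.pw_uid, entry.pw_gecos)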
Greenstone is a complete digital library creation, management, and distribution package for Unix, Windows, and Mac OS X. Users create collections by gathering a set of input documents, specifying a configuration file, and running the build script. It provides full-text and fielded searching, browsable indexes, customised formatting, metadata extraction (acronyms, languages, etc.), a Z39.50 client, and many other features. It supports many input formats, its interface is configurable and multilingual, and collections can be distributed on the Web or on CD-ROM.
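One of the metadata extractors mentioned, acronym detection, can be approximated with a pattern match like the following; this is a crude stand-in for the idea, not Greenstone's actual extractor:

    # Find parenthesised acronyms and confirm them against the initials
    # of the immediately preceding words. Illustrative only.
    import re

    text = "Digital libraries on the World Wide Web (WWW) are searchable."
    for match in re.finditer(r"\(([A-Z]{2,})\)", text):
        acronym = match.group(1)
        preceding = text[:match.start()].split()[-len(acronym):]
        if [w[0].upper() for w in preceding] == list(acronym):
            print(acronym, "=", " ".join(preceding))  # WWW = World Wide Web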
ht://Check is a link checker derived from ht://Dig. It retrieves information over HTTP/1.1 and stores it in a MySQL database, so that after a "crawl" ht://Check can report broken links, anchors not found, and summaries of content types and HTTP status codes. ht://Check also performs accessibility checks in accordance with the principles of the University of Toronto's Open Accessibility Checks (OAC) project, allowing users to discover site-wide barriers such as images without proper alternatives, missing titles, etc. A PHP interface lets the user query and view the results directly via the Web.
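The accessibility side of this, images without proper alternatives, comes down to scanning fetched HTML for img tags that lack alt text; the sketch below uses Python's standard html.parser and is not ht://Check's actual code:

    # Flag <img> tags without an alt attribute, one of the OAC-style
    # barriers ht://Check reports site-wide.
    from html.parser import HTMLParser

    class AltChecker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                print("missing alt:", attrs.get("src", "?"))

    page = '<p><img src="logo.png"><img src="photo.jpg" alt="A photo"></p>'
    AltChecker().feed(page)  # reports logo.png only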
The ht://Dig system is a complete WWW indexing and searching system for a domain or intranet. It is not meant to replace Internet-wide search systems like Lycos, Infoseek, Google, and AltaVista; instead, it covers the search needs of a single company, campus, or even a particular sub-section of a Web site.
SWISH++ is a Unix-based file indexing and searching engine, typically used to index and search files on Web sites. It is based on SWISH-E, although SWISH++ is a complete rewrite; it is at least 10 times faster and can handle much larger numbers of files. Additionally, it has unique features such as selective non-indexing, on-the-fly filters, user-selectable stemming, and more.
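The on-the-fly filters follow a common pattern: a file's extension selects an external converter whose output is indexed instead of the raw file. The sketch below shows that dispatch; the converter commands are conventional Unix tools chosen for illustration, not SWISH++'s shipped configuration:

    # Map extensions to converters that emit indexable plain text.
    import subprocess

    FILTERS = {
        ".gz":  lambda p: subprocess.run(["gunzip", "-c", p], capture_output=True).stdout,
        ".pdf": lambda p: subprocess.run(["pdftotext", p, "-"], capture_output=True).stdout,
    }

    def indexable_text(path):
        for ext, convert in FILTERS.items():
            if path.endswith(ext):
                return convert(path)  # run the filter, index its stdout
        with open(path, "rb") as f:
            return f.read()  # no filter needed: index the file as-is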
MRML for KDE is a KDE MRML client which integrates content-based image queries into Konqueror/KDE. It consists of a KPart (which can be embedded into Konqueror, for example) and a kio-slave for communication with the MRML server. You can right-click on an image and choose "Search for similar images..." from the context menu, which performs a query on a server and presents a thumbnail view of the results. You can also refine queries by giving relevance feedback.
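Underneath such a client, relevance feedback typically reweights the query toward the images the user marked as relevant; the sketch below shows a Rocchio-style update on toy feature vectors, which illustrates the idea but is not the MRML wire protocol:

    # Pull the query vector toward liked images and away from disliked
    # ones. The alpha/beta/gamma weights are conventional defaults,
    # not values taken from MRML.
    def refine(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        def mean(vectors):
            if not vectors:
                return [0.0] * len(query)
            return [sum(xs) / len(vectors) for xs in zip(*vectors)]
        pos, neg = mean(relevant), mean(nonrelevant)
        return [alpha * q + beta * p - gamma * n
                for q, p, n in zip(query, pos, neg)]

    q = refine([0.5, 0.5], relevant=[[1.0, 0.0]], nonrelevant=[[0.0, 1.0]])
    print(q)  # nudged toward the liked image: [1.25, 0.35]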
Grub-client is a distributed crawling client used to build an infrastructure that provides URL update status information for Web pages on the Internet. Grub's distributed crawler network will enable Web sites, content providers, and individuals to notify others, in real time, that their content has changed. Clients are ranked by the number of URLs they crawl, both on their own machines and on other servers.
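The update status such a client reports can be derived by comparing a digest of the fetched content against the digest recorded on the previous visit; the sketch below illustrates that check with in-memory storage and a placeholder URL, and is not Grub's protocol:

    # Hash the fetched body and compare with the previous crawl's digest.
    import hashlib
    from urllib.request import urlopen

    last_seen = {}  # url -> hex digest from the previous crawl

    def has_changed(url):
        digest = hashlib.sha1(urlopen(url).read()).hexdigest()
        changed = last_seen.get(url) != digest
        last_seen[url] = digest
        return changed  # True on first visit or when the content changed

    print(has_changed("http://www.example.com/"))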