Babeldoc is a framework and set of applications for processing documents for business-to-business and other Internet/integration applications. It is primarily intended for text documents, especially XML, but supports a wide range of operations and data types. It has a sophisticated journaling system that supports replaying and reprocessing. Babeldoc is pipeline based and supports numerous ways to combine the pipeline stages in a dynamically reconfigurable fashion. It has a GUI and a Web-based console for document processing and monitoring, and comes with tools for the transformation of flat-file data to XML, archival, and cryptography. Additionally, it can scan various data sources based on sophisticated constraints.
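Babeldoc's actual pipeline API is not shown here, but the general pattern of composable, reconfigurable document-processing stages can be sketched as follows (all names are hypothetical, chosen for illustration):

```python
# Minimal sketch of a composable document-processing pipeline.
# Illustrative only -- this is not Babeldoc's actual API.

from typing import Callable

# A stage takes a document and returns a transformed document.
Stage = Callable[[str], str]

def make_pipeline(*stages: Stage) -> Stage:
    """Combine stages so that each stage's output feeds the next."""
    def run(document: str) -> str:
        for stage in stages:
            document = stage(document)
        return document
    return run

# Two toy stages: normalize whitespace, then wrap the text in a root XML element.
def normalize(doc: str) -> str:
    return " ".join(doc.split())

def wrap_xml(doc: str) -> str:
    return f"<doc>{doc}</doc>"

pipeline = make_pipeline(normalize, wrap_xml)
print(pipeline("  hello   world "))  # <doc>hello world</doc>
```

Because a combined pipeline is itself a stage, pipelines can be nested and recombined, which is the kind of dynamic reconfigurability the description refers to.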
auth_ldap is an LDAP authentication module for the Apache Web server. It has excellent performance due to its use of aggressive client-side caching algorithms, and it supports LDAP over SSL or TLS. It also features a mode that lets Microsoft FrontPage clients manage their Web permissions while still using LDAP for authentication.
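The module's caching internals are not documented here, but the general technique of caching successful binds to avoid a round trip to the LDAP server on every request can be sketched like this (a simplified illustration, not the module's actual C code):

```python
# Sketch of a time-bounded credential cache of the kind an LDAP auth
# module might keep on the client side. Illustrative only -- not
# auth_ldap's actual implementation.

import hashlib
import time

class BindCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, tuple[str, float]] = {}

    @staticmethod
    def _digest(password: str) -> str:
        # Cache only a hash of the password, never the cleartext.
        return hashlib.sha256(password.encode()).hexdigest()

    def store(self, user: str, password: str) -> None:
        """Record a successful LDAP bind so later requests can skip the server."""
        expires = time.monotonic() + self.ttl
        self._entries[user] = (self._digest(password), expires)

    def check(self, user: str, password: str) -> bool:
        """Return True if this user/password pair was recently validated."""
        entry = self._entries.get(user)
        if entry is None:
            return False
        digest, expires = entry
        if time.monotonic() > expires:
            del self._entries[user]  # expired; force a fresh LDAP bind
            return False
        return digest == self._digest(password)
```

A request handler would call `check()` first and fall back to a real LDAP bind (then `store()`) only on a cache miss, which is what makes repeated requests from the same client cheap.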
Rambutan is a set of end-user application software that assists a system analyst in the gathering and categorization of facts for a requirements specification document. In its current state, the product consists of two programs that perform similar functions. A handheld application is used to gather facts at the client's site, while a desktop application is used to edit and further refine the requirement statements in the analyst's office. Both applications allow the user to enter, modify, and display the data that make up a requirements specification document.
This C code module gives the Apache 1.3.x HTTP server the ability to correctly negotiate content types for XHTML documents. Most Web sites currently serve these as text/html, which gives maximum compatibility with older Web clients but is not recommended practice according to the W3C.
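The real module is Apache C code, but the core of the negotiation it performs can be sketched in a few lines: serve XHTML with its proper media type only to clients whose Accept header advertises support for it, and fall back to text/html otherwise. This simplified sketch ignores q-values and wildcard matching, which a full negotiator would also honor:

```python
# Sketch of XHTML content negotiation against an HTTP Accept header.
# Illustrative only; ignores q-values and "*/*" wildcards.

def choose_xhtml_type(accept_header: str) -> str:
    """Pick a Content-Type for an XHTML document from an Accept header."""
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    if "application/xhtml+xml" in accepted:
        return "application/xhtml+xml"
    # Older clients get the maximally compatible type.
    return "text/html"

print(choose_xhtml_type("text/html,application/xhtml+xml,application/xml;q=0.9"))
# application/xhtml+xml
print(choose_xhtml_type("text/html,*/*;q=0.1"))
# text/html
```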
Douglas Thrift's Search Engine is an indexing search engine for use on small Web sites such as personal or small business sites. It is designed to look and behave much like Google for end users, and its output is customizable. For indexing, it supports both the Robots Exclusion Protocol and the Robots META Tag.
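The two robots mechanisms mentioned above work at different levels: robots.txt (the Robots Exclusion Protocol) controls which URLs a crawler may fetch, while the Robots META Tag controls whether a fetched page may be indexed. A sketch of both checks, using Python's standard-library `urllib.robotparser` for the first and a deliberately simplified regex for the second (real HTML allows the attributes in either order):

```python
# Sketch of the two robots checks an indexer performs.
# Illustrative only -- not this search engine's actual code.

import re
from urllib import robotparser

def robots_meta_allows_indexing(html: str) -> bool:
    """Return False if a <meta name="robots"> tag forbids indexing.

    Simplified: assumes name comes before content and ignores other
    attribute orderings.
    """
    pattern = re.compile(
        r'<meta\s+name=["\']robots["\']\s+content=["\']([^"\']*)["\']',
        re.IGNORECASE)
    for match in pattern.finditer(html):
        directives = {d.strip().lower() for d in match.group(1).split(",")}
        if "noindex" in directives or "none" in directives:
            return False
    return True

# Robots Exclusion Protocol: parse robots.txt rules and test URLs.
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /private/"])
print(rp.can_fetch("*", "http://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "http://example.com/public.html"))        # True
```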