SCAN is a personal information retrieval framework that combines search, text analysis, tagging, and metadata functions for managing document collections. SCAN is component-based software that uses plugins for specific features; the basic platform can easily be extended with plugins for additional document formats and document location types.
An Gramadóir is a grammar-checking engine designed for the rapid development of grammar checkers for minority languages and other languages with limited computational resources. Rule specifications are written in a simple syntax combining XML and regular expressions. Part-of-speech tagging can be learned from text corpora using statistical methods. It is currently implemented for Irish (Gaeilge).
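The rule-based approach can be pictured with a toy checker. The rule format below is invented for illustration and is not An Gramadóir's actual rule syntax, which pairs XML rule specifications with regular expressions:

```python
import re

# Each rule pairs a regular expression with a diagnostic message.
# (Illustrative only; An Gramadóir's real rules are XML specifications.)
RULES = [
    # Flag a doubled word, e.g. "the the".
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "Repeated word"),
]

def check(text):
    """Return (message, matched_text) pairs for every rule violation."""
    findings = []
    for pattern, message in RULES:
        for m in pattern.finditer(text):
            findings.append((message, m.group(0)))
    return findings

print(check("This is is a test."))  # → [('Repeated word', 'is is')]
```

A real engine of this kind layers many such rules, applies them to part-of-speech-tagged text rather than raw strings, and reports character offsets so an editor can highlight the error.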
Ellogon is a multilingual, cross-platform, general-purpose language engineering environment, developed to aid both researchers working in computational linguistics and companies that produce and deliver language engineering systems. As a language engineering platform, it offers an extensive set of facilities, including tools for processing and visualising textual/HTML/XML data and its associated linguistic information, support for lexical resources (such as creating and embedding lexicons), and tools for creating annotated corpora, accessing databases, comparing annotated data, and transforming linguistic information into vectors for use with various machine learning algorithms.
Transolution is a Computer Aided Translation (CAT) suite supporting the XLIFF standard. It provides the open source community with features and concepts that commercial offerings have used for years to improve translation efficiency and quality. The suite is modular to keep it flexible, and provides an XLIFF editor, a translation memory engine, and filters to convert different formats to and from XLIFF. The use of XLIFF means that almost any content can be localized as long as there is a filter for it (XML, SGML, PO, RTF, StarOffice/OpenOffice, etc.).
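For readers unfamiliar with the standard, a minimal XLIFF 1.2 document looks roughly like this (the file name and strings are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="hello.po" source-language="en" target-language="de"
        datatype="po">
    <body>
      <!-- One translatable segment: source text plus its translation -->
      <trans-unit id="1">
        <source>Hello, world!</source>
        <target>Hallo, Welt!</target>
      </trans-unit>
    </body>
  </file>
</xliff>
```

Because every format filter reduces content to this same list of trans-units, the editor and the translation memory never need to know anything about the original file format.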
The Computational Linguistics Toolset is a set of tools for computational linguistics. It contains reusable code for cleaning, splitting, refining, and taking samples from corpora (ICE, Penn, and a native format), for tagging them using the TnT tagger, for running permutation statistics on N-grams (useful for finding statistically significant syntactic differences between any two sets of tagged texts), and various examination tools. The tools themselves are well documented.
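The permutation-statistics idea can be sketched generically (an illustrative Python sketch, not the toolset's own code): pool the tagged sentences of both corpora, repeatedly reassign them to the two groups at random, and see how often the frequency difference of a given tag bigram is at least as large as the one actually observed.

```python
import random

def bigram_freq(sentences, target):
    """Relative frequency of `target` among all tag bigrams in `sentences`
    (each sentence is a list of POS tags)."""
    total = hits = 0
    for tags in sentences:
        for bg in zip(tags, tags[1:]):
            total += 1
            hits += (bg == target)
    return hits / total if total else 0.0

def permutation_test(corpus_a, corpus_b, target, n_perm=5000, seed=42):
    """Permutation p-value for the difference in relative frequency of one
    tag bigram between two corpora, permuting whole sentences between them."""
    rng = random.Random(seed)
    observed = abs(bigram_freq(corpus_a, target) - bigram_freq(corpus_b, target))
    pooled = corpus_a + corpus_b
    n_a = len(corpus_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(bigram_freq(pooled[:n_a], target)
                   - bigram_freq(pooled[n_a:], target))
        extreme += (diff >= observed)
    # Add-one smoothing keeps the estimated p-value strictly positive.
    return (extreme + 1) / (n_perm + 1)
```

A small p-value means the bigram's frequency difference between the two corpora is unlikely under random sentence assignment, i.e. a statistically significant syntactic difference.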
FramerD is a semi-structured object database integrated with a Scheme-based scripting language. The language supports multi-lingual programming (with pervasive Unicode), a stable module system for programming in the large, distributed applications (via an extensible RPC protocol), non-deterministic (PROLOG-like) evaluation for search and set operations, multi-threaded program execution, extensive tools for text and language analysis, built-in HTML/XML/MIME parsers, and intuitive (CGI- and FastCGI-based) Web scripting. The built-in object database robustly supports millions of objects and indexed access to them, both through disk files and networked servers.
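Non-deterministic evaluation in the PROLOG sense can be loosely illustrated in Python (FramerD itself uses a Scheme dialect; the names here are invented): an expression ranges over choice points, and the evaluator enumerates every combination that satisfies a constraint, as a backtracking search would.

```python
from itertools import product

def amb(*choices):
    """A choice point: non-deterministically 'returns' each alternative."""
    return choices

def solve():
    # Find all x, y with x + y == 5, exploring every combination of choices.
    for x, y in product(amb(1, 2, 3), amb(2, 3, 4)):
        if x + y == 5:
            yield (x, y)

print(list(solve()))  # → [(1, 4), (2, 3), (3, 2)]
```

In FramerD this style is built into the evaluator itself, so searches over the object database can be written as ordinary expressions whose failed branches are discarded automatically.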
OpenEphyra is a question answering (QA) system. It retrieves answers to natural language questions from the Web and other sources. OpenEphyra comes with implementations of algorithms that proved effective in Carnegie Mellon's Ephyra system, which participated in the TREC evaluations. It is platform independent and can be set up in just a few minutes. The goal of this project is to give researchers the opportunity to develop new QA techniques without worrying about the end-to-end system.