The GOBLIN project consists of three parts: a C++ class library for a wide range of graph optimization problems; GOSH, an extension of the Tcl/Tk scripting language to graph objects; and GOBLET, a graphical user interface to the library functions. GOBLET includes a graph editor and supports the standard graph layout methods.
Hanzim ("Hanzi Master") is an interactive visual dictionary for learning and exploring the relationships between Chinese radicals, characters, and compounds. It can display all characters sharing a given radical, phonetic component, or pronunciation, as well as all words containing a given character, along with their English meanings. All data is stored locally. Either simplified or traditional characters can be used.
Ding is a dictionary lookup program for the X Window System on Linux/Unix. It comes with a German-English dictionary of about 253,000 entries. It is based on Tk version >= 8.3 and uses the agrep or egrep tools for searching. In addition, Ding can search English dictionaries using dict(1) and check spelling using ispell(1). It has many configuration options, such as search preferences, interface language (English or German), and colors. It has history and help functions and comes with useful key and mouse bindings for quick and easy lookups.
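Ding's grep-based lookup can be approximated in a few lines. The sketch below (Python rather than Ding's actual Tcl code) assumes a dictionary base where each line pairs a German entry with its English translation, separated by `::`; the sample entries are illustrative, not taken from Ding's shipped dictionary:

```python
import re

def lookup(pattern, lines):
    """Return all dictionary lines matching the pattern,
    case-insensitively -- roughly what Ding delegates to egrep."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [line for line in lines if rx.search(line)]

# A tiny in-memory stand-in for a "German :: English" dictionary base.
sample = [
    "Hund {m} :: dog",
    "Katze {f} :: cat",
    "Haus {n} :: house",
]

print(lookup(r"\bhund\b", sample))  # matches despite the lowercase query
```

Using a regular-expression search rather than exact matching is what makes word-boundary and substring queries possible; agrep additionally allows approximate (fuzzy) matches, which plain `re` does not model here.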
The EnRus dictionary tools are Tcl/Tk scripts for reading a textual dictionary base (plain, or compressed with gzip or bzip2) and compiling new dictionary bases from plain text files. The package consists of a few Tcl console scripts and a Tk interface to them. It is configurable for different languages. The dictionary base may contain its own formatting and output procedures.
Ellogon is a multi-lingual, cross-platform, general-purpose language engineering environment, developed to aid both researchers in computational linguistics and companies that produce and deliver language engineering systems. As a language engineering platform, it offers an extensive set of facilities, including tools for processing and visualising textual/HTML/XML data and associated linguistic information, support for lexical resources (such as creating and embedding lexicons), and tools for creating annotated corpora, accessing databases, comparing annotated data, and transforming linguistic information into vectors for use with various machine learning algorithms.