LibBi is used for state-space modelling and Bayesian inference on high-performance computer hardware, including multi-core CPUs, many-core graphics processing units (GPUs), and distributed-memory clusters. The staple methods of LibBi are based on sequential Monte Carlo (SMC), also known as particle filtering. These methods include particle Markov chain Monte Carlo (PMCMC) and SMC2. Other methods include the extended Kalman filter and some parameter optimization routines. LibBi consists of a C++ template library and a parser and compiler, written in Perl, for its own modelling language.
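LibBi's own modelling language is not shown here; as a minimal illustration of the bootstrap particle filter underlying SMC, here is a sketch in Python (not LibBi code) for a made-up one-dimensional linear-Gaussian state-space model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (an assumption for illustration):
#   state:       x_t = 0.9 * x_{t-1} + w_t,  w_t ~ N(0, 1)
#   observation: y_t = x_t + v_t,            v_t ~ N(0, 0.25)
T, N = 50, 1000            # time steps, particles
phi, sx, sy = 0.9, 1.0, 0.5

# Simulate synthetic data from the model.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sx * rng.standard_normal()
y = x + sy * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight, resample.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = phi * particles + sx * rng.standard_normal(N)  # propagate through the dynamics
    logw = -0.5 * ((y[t] - particles) / sy) ** 2               # log-likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.dot(w, particles)                              # filtered posterior mean
    particles = rng.choice(particles, size=N, p=w)             # multinomial resampling
```

In LibBi the model would instead be written in its own modelling language and compiled to C++ for CPU or GPU execution; the loop above only conveys the propagate/weight/resample structure that PMCMC and SMC2 build upon.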
iNA is a computational tool for quantitative analysis of fluctuations in biochemical reaction networks. Such fluctuations, also known as intrinsic noise, arise from the stochastic nature of chemical reactions and cannot be ignored when some molecules are present in only very low copy numbers, as is the case in living cells. The SBML-based software computes statistical measures, such as means and standard deviations of concentrations, to a given accuracy using the analytical system size expansion. The results of iNA’s analysis can be checked against the computationally much more expensive stochastic simulation algorithm.
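To make the comparison concrete, here is a minimal sketch (not iNA code) of the stochastic simulation algorithm (Gillespie's direct method) for an assumed birth-death network, whose stationary copy-number distribution is Poisson with mean k/g, so the relative noise grows as copy numbers shrink:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical birth-death network: 0 -> X at rate k; X -> 0 at rate g*n.
k, g = 10.0, 1.0

def gillespie(t_end, n0=0):
    """Gillespie direct method: sample reaction times and channels exactly."""
    t, n = 0.0, n0
    times, states = [0.0], [n0]
    while t < t_end:
        a_birth, a_death = k, g * n          # reaction propensities
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)       # exponential waiting time
        n += 1 if rng.random() * a0 < a_birth else -1
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

times, states = gillespie(2000.0)
dt = np.diff(times)
mean = np.sum(states[:-1] * dt) / times[-1]  # time-weighted mean copy number
```

For this network the time-averaged mean should approach k/g = 10; iNA's system size expansion delivers such moments analytically, without the long simulated trajectories this method needs.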
WCSLIB is a C library, supplied with a full set of Fortran wrappers, which implements the World Coordinate System (WCS) standard in FITS (Flexible Image Transport System). It also includes a PGPLOT-based routine, PGSBOX, for drawing general curvilinear coordinate graticules, and a number of utility programs. The FITS WCS convention defines keywords and usage that describe astronomical coordinate systems in a FITS image header.
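As an illustration of the keywords the convention defines (the numerical values here are made up), a minimal celestial WCS description in a FITS image header might look like:

```
CTYPE1  = 'RA---TAN'           / Axis 1: right ascension, gnomonic projection
CTYPE2  = 'DEC--TAN'           / Axis 2: declination, gnomonic projection
CRPIX1  =                512.0 / Reference pixel on axis 1
CRPIX2  =                512.0 / Reference pixel on axis 2
CRVAL1  =              150.125 / RA at the reference pixel [deg]
CRVAL2  =              -30.500 / Dec at the reference pixel [deg]
CDELT1  =          -0.00027778 / Coordinate increment on axis 1 [deg/pixel]
CDELT2  =           0.00027778 / Coordinate increment on axis 2 [deg/pixel]
```

WCSLIB parses such headers and performs the pixel-to-world and world-to-pixel transformations they describe.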
The Common Pipeline Library provides a highly robust set of functions for manipulating signals and images. It is primarily intended for the building of VLT instrument pipelines, but is also useful for generic data handling. It includes a number of useful low-level data types, medium-level data access methods, standard implementations of commonly-used signal processing and data reduction tasks, and dynamic loading of "recipes" for data processing.
fundest is a C/C++ library for robust, non-linear fundamental matrix estimation. The fundamental matrix is a singular 3x3 matrix which relates corresponding points in two views according to the epipolar constraint. fundest computes an estimate which minimizes an appropriate non-linear cost function defined on matching points (currently either Sampson error or symmetric distance of points from their epipolar lines) and includes robust regression techniques for coping with outliers (i.e., mismatched point pairs).
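The Sampson error mentioned above is a standard first-order approximation to the geometric error of a point pair under the epipolar constraint x2ᵀ F x1 = 0. A self-contained sketch in Python (fundest itself is C/C++; this is only the cost function, not the library's API):

```python
import numpy as np

def sampson_error(F, x1, x2):
    """Sampson error of a correspondence (x1, x2) under fundamental matrix F.

    First-order approximation to the geometric error of the pair
    with respect to the epipolar constraint x2^T F x1 = 0.
    Points are given as 2-D image coordinates.
    """
    x1 = np.append(x1, 1.0)   # to homogeneous coordinates
    x2 = np.append(x2, 1.0)
    Fx1 = F @ x1              # epipolar line of x1 in the second view
    Ftx2 = F.T @ x2           # epipolar line of x2 in the first view
    num = (x2 @ Fx1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

# Example: F for a pure sideways translation (a rank-2 matrix), under which
# corresponding points lie on the same horizontal epipolar line.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
```

A matching pair such as (2, 3) and (5, 3) then yields zero error, while a vertically displaced pair yields a positive residual; a robust estimator like fundest minimizes such residuals while downweighting or rejecting outliers.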
The Scalable Assembler at Notre Dame (SAND) replaces the early stages of the Celera Assembler with scalable versions that can run on collections of commodity computers. By harnessing clusters, clouds, grids, or just random machines in your office, SAND can reduce many bioinformatics tasks from weeks or months down to minutes or hours.
Makeflow is a workflow engine for executing large complex applications on clusters, clouds, and grids. It can be used to drive several different distributed computing systems, including Condor, SGE, and the included Work Queue system. It does not require a distributed filesystem, so you can use it to harness whatever collection of machines you have available. It is typically used for scaling up data-intensive scientific applications to hundreds or thousands of cores.
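Makeflow workflows are written in a Make-like rule syntax: each rule names its output files, its input files, and the command that produces the outputs, and Makeflow dispatches independent rules in parallel. A sketch with hypothetical filenames and a hypothetical ./simulate program:

```
# Each rule: outputs : inputs, then a tab-indented command.
part1.out: input.dat simulate
	./simulate -i input.dat -p 1 -o part1.out

part2.out: input.dat simulate
	./simulate -i input.dat -p 2 -o part2.out

# Runs only after both parts are complete.
result.txt: part1.out part2.out
	cat part1.out part2.out > result.txt
```

Because every rule declares its files explicitly, Makeflow can ship them to workers itself, which is why no distributed filesystem is required.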
GPlates offers a novel combination of interactive plate-tectonic reconstructions, geographic information system (GIS) functionality, and raster data visualization. GPlates enables both the visualization and the manipulation of plate-tectonic reconstructions and associated geological, geophysical, and paleo-geographic data through geological time.