Release Notes: This release switches the build system to CMake. It adds fancy interactive demos and IPython notebooks that you can also run in the cloud (see the links for details). There are other new features and many internal improvements, bugfixes, and documentation improvements. In numbers, this release merges more than 2000 commits, changing almost 400000 lines in more than 7000 files, and increases the number of unit tests from 50 to 600.
Release Notes: This release contains over 800 commits since 2.0.0, with a large number of bugfixes, new features, and improvements that make Shogun more efficient, robust, and versatile. In particular, it adds an initial alpha version of a Perl modular interface, a linear-time MMD test on streaming data, a new structured output solver, and support for tapkee, a dimension reduction framework.
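As a rough illustration of the linear-time MMD test on streaming data mentioned above, the following sketch uses the modular Python interface. The data generator classes and the exact constructor signature are assumptions for illustration, not a verified recipe for this release.

    # Hedged sketch: compare two streaming data sources with the
    # linear-time MMD two-sample test. The generator classes and
    # constructor arguments are assumptions for illustration.
    from shogun.Features import MeanShiftDataGenerator
    from shogun.Kernel import GaussianKernel
    from shogun.Statistics import LinearTimeMMD

    gen_p = MeanShiftDataGenerator(0, 1)    # 1-dimensional stream, mean 0
    gen_q = MeanShiftDataGenerator(0.5, 1)  # stream with shifted mean

    # linear-time MMD over 1000 samples per distribution
    mmd = LinearTimeMMD(GaussianKernel(10, 1), gen_p, gen_q, 1000)
    statistic = mmd.compute_statistic()
    p_value = mmd.compute_p_value(statistic)

Because the statistic is computed in a single pass over the streams, the test scales to data that never fits into memory.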
Release Notes: This major update adds many improvements, new features, and bugfixes. It includes everything carried out before and during the Google Summer of Code 2012. Students implemented various new features such as structured output learning, Gaussian processes, latent variable SVM (with structured output learning), statistical tests in reproducing kernel Hilbert spaces, various multitask learning algorithms, and various usability improvements, to name a few.
Release Notes: This release introduces the concept of 'converters', which enable you to construct embeddings of arbitrary features. It also includes a few new dimension reduction techniques and significant performance improvements in the dimensionality reduction toolkit. Other improvements include a significant compilation speed-up, various bugfixes for modular interfaces and algorithms, and improved Cygwin, Mac OS X, and clang++ compatibility. GitHub Issues is now used for tracking bugs and issues.
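To give a feel for the converter concept, here is a minimal sketch in the modular Python interface. The class name LocallyLinearEmbedding and the set_target_dim/embed calls reflect my understanding of the interface and should be treated as assumptions.

    # Hedged sketch: embed 10-dimensional data into 2 dimensions with
    # one of the new converters. Class and method names are assumptions.
    from numpy.random import rand
    from shogun.Features import RealFeatures
    from shogun.Converter import LocallyLinearEmbedding

    features = RealFeatures(rand(10, 100))   # 100 points in 10 dimensions

    converter = LocallyLinearEmbedding()
    converter.set_target_dim(2)              # request a 2-dimensional embedding
    embedding = converter.embed(features)    # returns the embedded features

The point of the abstraction is that any converter can be swapped in here while the surrounding code stays unchanged.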
Release Notes: This release features interfaces to new languages including Java, C#, Ruby, and Lua, a model selection framework, many dimension reduction techniques, Gaussian Mixture Model estimation, and a full-fledged online learning framework.
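As an example of what the online learning framework looks like in practice, here is a heavily hedged sketch that trains an online linear SVM on streamed data. The class names (StreamingAsciiFile, StreamingSparseRealFeatures, OnlineLibLinear), their module locations, and the constructor arguments are all assumptions based on the framework described above.

    # Hedged sketch: train an online linear SVM on data streamed from
    # disk, so the full data set never has to fit in memory. Names and
    # signatures are assumptions for illustration.
    from shogun.Features import StreamingAsciiFile, StreamingSparseRealFeatures
    from shogun.Classifier import OnlineLibLinear

    stream = StreamingAsciiFile('train.sparse')
    # (file, examples are labelled, parser buffer size)
    features = StreamingSparseRealFeatures(stream, True, 1024)

    svm = OnlineLibLinear(1.0, features)     # regularization constant C = 1.0
    svm.train()                              # consumes the stream example by example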
Release Notes: This is a major new release with many internal as well as user-visible changes. First of all, it now includes a number of applications (in the applications folder), and all the data sets are now contained in a separate tarball. For the user, the most interesting and important feature is serialization support: any Shogun object can now be dumped to disk and loaded again later. Supported serialization formats include HDF5, ASCII, JSON, and XML, as well as Python pickle protocols 1 and 2.
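For the Python interface, serialization via pickle looks roughly like the sketch below; using RealFeatures as the example object is my choice for illustration, not something the release notes prescribe.

    # Hedged sketch: round-trip a Shogun object through Python pickle,
    # one of the supported serialization formats.
    import pickle
    from numpy.random import rand
    from shogun.Features import RealFeatures

    features = RealFeatures(rand(5, 20))     # 20 points in 5 dimensions

    with open('features.pkl', 'wb') as f:
        pickle.dump(features, f, protocol=2) # protocols 1 and 2 are supported

    with open('features.pkl', 'rb') as f:
        restored = pickle.load(f)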
Release Notes: This release contains several enhancements, cleanups, and bugfixes. A number of new string kernels and multiclass MKL were implemented. Support for python-dbg was added. Floats are now accepted as input for custom kernels, which can now exceed 4GB in size. The Python installation uses distutils now. Static linking has been fixed, as has the sparse linear kernel's add_to_normal function.
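A minimal sketch of feeding a precomputed single-precision kernel matrix to a custom kernel follows, assuming the CustomKernel class and its set_full_kernel_matrix_from_full method; treat the exact names as assumptions.

    # Hedged sketch: use a precomputed kernel matrix of single-precision
    # floats as a custom kernel. The method name is an assumption.
    from numpy import float32
    from numpy.random import rand
    from shogun.Kernel import CustomKernel

    K = rand(500, 500).astype(float32)       # floats are now accepted as input
    K = (K + K.T) / 2                        # a kernel matrix must be symmetric

    kernel = CustomKernel()
    kernel.set_full_kernel_matrix_from_full(K)

Accepting float32 input halves the memory needed for a precomputed matrix, which matters once kernels grow toward the new 4GB-plus sizes.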
Release Notes: This release adds support for direct reading and writing of ASCII, binary, and HDF5-based files. It implements a new multitask kernel normalizer, elastic-net-based MKL, and hashed features. A newer version of liblinear was integrated and the documentation was improved, among many other changes.
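The direct file I/O might look like the sketch below in the modular Python interface; the module locations and the AsciiFile/HDF5File class names are assumptions on my part.

    # Hedged sketch: load a dense real matrix straight from an ASCII
    # file and write it back out as HDF5. Class names and the
    # constructor/save signatures are assumptions.
    from shogun.Library import AsciiFile, HDF5File
    from shogun.Features import RealFeatures

    features = RealFeatures(AsciiFile('data.ascii'))  # read directly from disk
    features.save(HDF5File('data.h5', 'w'))           # write in HDF5 format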
Release Notes: Chinese documentation was added. StringFileFeatures were implemented: string features that do not have to fit in memory and are instead dynamically fetched from disk. In addition, one can now load compressed strings (using lzo, gzip, lzma, or bzip2 compression) that are decompressed only on access via the Decompress preprocessor. Many configure issues, as well as problems with custom kernels and svmlight-regression, should now be fixed.
Release Notes: With over 600 examples and polished tutorials, this release contains major documentation and website improvements. In addition, several bugs have been fixed and cleanups performed (including the removal of the confusing init_kernel command). Several new methods have been implemented, including a domain adaptation support vector machine and multiclass multiple kernel learning.