MVAPICH2 Releases

Release Notes: This release adds high-performance MPI communication from NVIDIA GPU device memory (to/from other devices and host memory) with IPC, collectives, and datatype support; CPU binding granularity at the socket and NUMA-node level; checkpoint-restart and run-through stabilization with Nemesis; suspend/resume; and enhanced integration with SLURM and PBS. Network Fault Resiliency (NFR) has been added.
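
The headline feature is that MPI calls can be given pointers to GPU device memory directly, without staging the data through host buffers by hand. The sketch below shows what that looks like from the application side; it is illustrative only (not taken from the MVAPICH2 sources), and the MV2_USE_CUDA=1 runtime parameter mentioned in the comment is an assumption that should be checked against the user guide for your version.

    /* Illustrative CUDA-aware point-to-point exchange: the buffers passed
     * to MPI live in GPU device memory.  Run with two ranks, e.g.
     *   mpirun_rsh -np 2 host1 host2 MV2_USE_CUDA=1 ./a.out
     * (MV2_USE_CUDA=1 is assumed; verify the parameter for your version.) */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;          /* ~1M ints */
        int *dbuf;                      /* device buffer */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&dbuf, n * sizeof(int));
        cudaMemset(dbuf, 0, n * sizeof(int));

        if (rank == 0)
            MPI_Send(dbuf, n, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* device pointer */
        else if (rank == 1)
            MPI_Recv(dbuf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }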

Release Notes: This release fixes a data validation issue in GPU transfers, tunes the CUDA block size to 256K for better performance, enhances error checking for CUDA library calls, and fixes an mpirun_rsh issue encountered when launching applications on certain Linux kernels.

  •  14 Nov 2011 22:48

Release Notes: This release adds iWARP interoperability between Intel NE020 and Chelsio T4 adapters, reduced buffer usage through space optimizations, and MPI communication from NVIDIA GPU device memory (including intra-node point-to-point communication for nodes with multiple GPU adapters and RDMA-based inter-node point-to-point communication to/from GPUs). It also includes optimizations for collectives and one-sided communication.

  •  22 May 2011 09:21

Release Notes: Enhancements include a Nemesis-based interface, process manager support, rail binding for processes in multi-rail configurations, message coalescing, large data transfers, dynamic process migration, fast process-level fault tolerance with checkpoint-restart, network-level fault tolerance with Automatic Path Migration, RDMA CM support, iWARP support, optimized collectives, multi-pathing, RDMA read- and write-based designs, polling- and blocking-based communication progress, multi-core optimized and scalable shared memory support, and LiMIC2-based kernel-level shared memory support.

Release Notes: The Shared-Memory-Nemesis interface was added, providing native shared memory support on multi-core platforms where communication is required only within a node. Support for 3D torus topology with appropriate SL settings was added. Quality of Service (QoS) support with multiple InfiniBand SL was added. GPU acceleration support was added. Fast Checkpoint-Restart support with an aggregation scheme was added. Fault tolerance support was improved. Dynamic detection of multiple InfiniBand adapters was implemented, and they are used by default in multi-rail configurations. Multithreading was enhanced. There were other enhancements and bugfixes.

Release Notes: This release adds message coalescing, hot-spot avoidance, application-initiated system-level checkpointing, APM support, multi-rail support for iWARP, RDMA read, and blocking support. Assorted bugfixes.

Release Notes: This release adds RDMA CM-based on-demand connection management for OpenFabrics Gen2-* interfaces, uDAPL on-demand connection management, message coalescing support to enable a reduction in per-queue-pair send queues, a Hot-Spot Avoidance Mechanism (HSAM) for alleviating network congestion in large-scale clusters, RDMA read utilization for increased overlap of computation and communication on OpenFabrics devices, and support for an OpenFabrics Gen2-iWARP interface and RDMA CM.
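
From the application's point of view, the RDMA-read based overlap of computation and communication mentioned above is exercised through nonblocking operations: once a large transfer is posted, the rendezvous data movement can progress while the host keeps computing. A minimal sketch under that assumption (buffer names and sizes are arbitrary and not from the MVAPICH2 sources):

    #include <mpi.h>
    #include <stdlib.h>

    /* Post a large nonblocking transfer, do useful work, then wait.
     * With an RDMA-read based rendezvous the data movement can proceed
     * while compute() runs on the host. */
    static void compute(double *x, int n)
    {
        for (int i = 0; i < n; i++)
            x[i] *= 2.0;
    }

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 22;
        double *msg  = calloc(n, sizeof(double));
        double *work = calloc(n, sizeof(double));
        MPI_Request req = MPI_REQUEST_NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            MPI_Isend(msg, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        else if (rank == 1)
            MPI_Irecv(msg, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

        compute(work, n);                   /* overlapped with the transfer */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        free(msg);
        free(work);
        MPI_Finalize();
        return 0;
    }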

  •  21 Oct 2007 08:03

Release Notes: Enhanced uDAPL initialization. Minor bugfixes and code cleanups.

Release Notes: New message coalescing, hot-spot avoidance, application-initiated system-level checkpointing, APM support, multi-rail support for iWARP, on-demand connection management for iWARP and uDAPL (including Solaris), RDMA read, and blocking support. The software was also updated to MPICH2 1.0.5p4.

Release Notes: This release added checkpoint/restart, OpenFabrics Gen2/iWARP, RDMA CM-based connection management support, shared memory optimizations for collective communication operations, and uDAPL support for the NetEffect 10GigE adapter.
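
The shared memory optimizations for collectives are transparent to applications: the collective calls themselves are unchanged, and only their intra-node phase is handled differently inside the library. A small illustration (not from the MVAPICH2 sources) of the kind of call that benefits:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = rank + 1;

        /* The intra-node portion of this reduction is what the shared-memory
         * collective optimizations target; the call itself is standard MPI. */
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %d\n", sum);

        MPI_Finalize();
        return 0;
    }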
