Workshop: Introduction - PAW-ATM: Parallel Applications Workshop - Alternatives to MPI
Abstract: The increasing complexity of heterogeneous and hierarchical parallel architectures and technologies has intensified the need for more effective parallel programming techniques. Traditional low-level approaches place a greater burden on application developers, who must combine distinct programming models (MPI, CUDA, OpenMP, etc.) to fully exploit the performance of a particular machine. The lack of a unifying parallel programming model that can fully leverage all available hardware technologies affects not only the portability and scalability of applications but also the overall productivity of software developers and the maintenance costs of HPC applications. In contrast, high-level parallel programming models have been developed to abstract implementation details away from the programmer, delegating them to the compiler, runtime system, and operating system. Such alternatives to traditional MPI+X programming include parallel programming languages (Chapel, Fortran, UPC, Julia), systems for large-scale data processing and analytics (Spark, TensorFlow, Dask), and frameworks and libraries that extend existing languages (Charm++, Unified Parallel C++ (UPC++), Coarray C++, HPX, Legion, Global Arrays). While there are tremendous differences between these approaches, all strive to support better programmer abstractions for concerns such as data parallelism, task parallelism, dynamic load balancing, and data placement across the memory hierarchy.
This workshop will bring together applications experts who will present concrete practical examples of using such alternatives to MPI in order to illustrate the benefits of high-level approaches to scalable programming.