Efficient Algorithms for Collective Operations with Notified Communication in Shared Windows
Abstract: Collective operations are commonly used in many parts of scientific applications. Especially in strong scaling scenarios, collective operations can negatively impact the overall application performance: while the load per rank decreases with increasing core counts, the time spent in, e.g., barrier operations increases logarithmically with the core count.
In this article, we develop novel algorithmic solutions for collective operations -- such as Allreduce and Allgather(V) -- by leveraging notified communication in shared windows. To this end, we have developed an extension of GASPI which enables all ranks participating in a shared window to observe the entire notified communication targeted at the window. By exploiting the benefits of this extension, we deliver high-performing implementations of Allreduce and Allgather(V) on Intel and Cray clusters. These implementations achieve 2x-4x performance improvements compared to the best performing MPI implementations for various data sizes.
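The key idea of the extension is that every rank sharing a window can observe all notifications targeted at that window, not only the ones addressed to itself. The following is a minimal toy sketch of that idea in Python, using threads as stand-ins for ranks in a shared window; the class and method names (`SharedWindow`, `write_notify`, `wait_all`) are illustrative assumptions, not the GASPI API:

```python
import threading

class SharedWindow:
    """Toy model of a shared window in which every participating rank can
    observe all notifications targeted at the window (the extension
    described above). Names are illustrative, not the GASPI API."""
    def __init__(self, nranks):
        self.buf = [None] * nranks        # one slot per rank
        self.notified = [False] * nranks  # notification flags, visible to all
        self.cond = threading.Condition()

    def write_notify(self, rank, value):
        # One-sided write into the window, paired with a notification.
        with self.cond:
            self.buf[rank] = value
            self.notified[rank] = True
            self.cond.notify_all()

    def wait_all(self):
        # Any rank sharing the window may wait on the full notification set.
        with self.cond:
            self.cond.wait_for(lambda: all(self.notified))

def allreduce_sum(win, rank, value, result):
    # Each rank contributes its value, waits for all notifications on the
    # shared window, then reduces locally from the shared buffer.
    win.write_notify(rank, value)
    win.wait_all()
    result[rank] = sum(win.buf)

nranks = 4
win = SharedWindow(nranks)
result = [None] * nranks
threads = [threading.Thread(target=allreduce_sum, args=(win, r, r + 1, result))
           for r in range(nranks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result)  # every rank arrives at the same sum: [10, 10, 10, 10]
```

In a real GASPI implementation the notifications arrive via RDMA rather than a condition variable, but the structure is the same: because all ranks in the shared window see the full notification set, no extra intra-node message passing is needed to complete the reduction.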
Published at PAW-ATM: Parallel Applications Workshop - Alternatives to MPI.