On Advanced Monte Carlo Methods for Linear Algebra on Advanced Accelerator Architectures
Authors: Vassil Alexandrov (ICREA, Barcelona Supercomputing Center)
Abstract: In this paper we present computational experiments with the Markov Chain Monte Carlo Matrix Inversion (MCMCMI) method on several generations of NVIDIA accelerators and two iterations of the Intel x86 architecture, and investigate their impact on the performance and scalability of the method. The method is used as a preconditioner, and iterative methods such as the generalized minimal residual method (GMRES) or the biconjugate gradient stabilized method (BiCGSTAB) are then used to solve the corresponding system of linear equations. Numerical experiments are carried out to highlight the benefits and deficiencies of both architecture types and to assess their overall usefulness in light of the scalability of the method.
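The paper itself targets GPU and x86 implementations, but the core idea behind Markov chain Monte Carlo matrix inversion can be illustrated with a minimal sketch. The sketch below is an assumption on my part, not the authors' code: it estimates a rough inverse of a diagonally dominant matrix A by writing A^{-1} = (sum_k C^k) D^{-1} with C = I - D^{-1}A, and sampling the entries of the Neumann series sum_k C^k with random walks over row indices. Such a rough inverse is the kind of object that can then serve as a preconditioner for GMRES or BiCGSTAB.

```python
import numpy as np

def mc_inverse(A, n_chains=2000, max_len=30, rng=None):
    # Monte Carlo estimate of A^{-1} via the Neumann series
    # A^{-1} = (sum_k C^k) D^{-1}, where D = diag(A) and
    # C = I - D^{-1} A. The series converges when the spectral
    # radius of C is below 1 (e.g. diagonally dominant A).
    # Entries of sum_k C^k are estimated by random walks whose
    # transition probabilities are proportional to |C_ij|.
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    d_inv = 1.0 / np.diag(A)
    C = np.eye(n) - d_inv[:, None] * A
    absC = np.abs(C)
    row_sums = absC.sum(axis=1)
    M = np.zeros((n, n))                # estimate of sum_k C^k
    for i in range(n):                  # one set of walks per row of A^{-1}
        for _ in range(n_chains):
            w, state = 1.0, i           # walk weight and current index
            M[i, state] += w            # k = 0 term (identity)
            for _ in range(max_len):
                cont = min(row_sums[state], 1.0)   # prob. of continuing
                if cont == 0.0 or rng.random() > cont:
                    break               # walk terminates here
                p = absC[state] / absC[state].sum()
                nxt = rng.choice(n, p=p)
                # Importance weight: divide by the actual transition
                # probability cont * p[nxt] so the estimate is unbiased.
                w *= C[state, nxt] / (p[nxt] * cont)
                state = nxt
                M[i, state] += w
    M /= n_chains
    return M * d_inv[None, :]           # right-multiply by D^{-1}
```

A usage sketch: for a small diagonally dominant system, `mc_inverse(A)` returns an approximate inverse whose product with A is close to the identity, and increasing `n_chains` reduces the stochastic error at the usual O(1/sqrt(N)) Monte Carlo rate. The hybrid scheme the abstract describes would use such an approximation only as a preconditioner, leaving the final accuracy to the iterative solver.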
Presented at the 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems.