Description

HPC simulations and big data applications face similar challenges when solving extreme-scale scientific problems: high algorithmic complexity and large memory footprints. CMOS and memory technology scaling have continuously mitigated these challenges, delivering exponential growth in processor performance and steady increases in memory speed and capacity, respectively. The free lunch may be over, but artificial intelligence now comes to the rescue, driven by the sheer volume of data to be crunched.
This has led the community to explore disruptive hardware and software solutions to overcome both challenges. One trend is to explore numerical approximations, e.g., mixed-/low-precision and hierarchical low-rank matrix computations, with hardware support for a variety of precisions.
The panelists investigate alternatives to today's state-of-the-art chips and numerical algorithms. The main idea is to trade accuracy for performance throughout the hardware and software ecosystems in the context of exascale scientific computing.
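The accuracy-for-performance trade-off behind mixed/low-precision computing can be illustrated with a minimal sketch (using NumPy, not any panelist's specific method): computing the same reduction in single precision halves the memory footprint and bandwidth relative to double precision, at the cost of a small, quantifiable loss of accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)  # data generated in double precision

# Reference result: store and accumulate in float64.
s64 = np.sum(x, dtype=np.float64)

# Approximation: store and accumulate in float32, which halves
# the memory footprint and bandwidth of the reduction.
x32 = x.astype(np.float32)
s32 = np.sum(x32, dtype=np.float32)

rel_err = abs(float(s32) - s64) / s64
print(f"float64 bytes: {x.nbytes}, float32 bytes: {x32.nbytes}")
print(f"relative error of float32 result: {rel_err:.2e}")
```

Whether such an approximation is acceptable depends on the application's accuracy requirements, which is precisely the trade-off the panel examines.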