Moderator: Sadaf R. Alam, Swiss National Supercomputing Centre
Panelists: Victoria Stodden, University of Illinois; Eli Dart, Energy Sciences Network; Dirk Pleiter, Juelich Supercomputing Centre; David Hancock, Indiana University; Stephen Poole, Los Alamos National Laboratory
Time: Tuesday, November 13th, 10:30am – 12pm
Scientific workflows today, especially those involving large-scale data sources, require an ecosystem of HPC, cloud computing, storage, and networking technologies. These technologies make it possible to address bigger challenges collectively through federated IT infrastructures. Several initiatives have emerged, both within nationally funded research infrastructures and among public cloud providers, to meet ever-increasing needs for computing and storage capability alongside accessibility and quality-of-service requirements such as interactivity and security. This panel brings together a diverse group of experts and practitioners to unravel myths, misconceptions, and misinformation around the growing service portfolios that combine HPC and X-as-a-service technologies. In particular, the panelists will reflect on solutions that deliver cost-to-performance efficiency across different quantitative and qualitative metrics: meeting the need for exascale computing while maintaining information security, achieving isolation and customization of services by adopting cloud technologies in HPC and vice versa, and finding opportunities to increase computational reproducibility for scientific workflows.