HPC

Session chair: Eli Dart
Track: Track 2
Date: Tuesday, 13 September 2022
Time: 14:00 - 15:30
Description: Data is becoming the main driving force behind industrial, research, and social progress. By 2025, an estimated 463 exabytes of data will be created daily worldwide [1]. The challenge of analysing such huge amounts of data is forcing significant advancements in computer processing capacity. High-performance computing (HPC) is the discipline of computer science in which supercomputers are used to solve complex scientific problems.
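To put that volume in perspective, the following minimal Python sketch (not part of the session materials; only the 463 EB/day figure is taken from [1]) converts the daily total into an average sustained data rate:

# Back-of-the-envelope: average data rate implied by 463 exabytes created per day.
# The 463 EB/day figure comes from [1]; the rest is plain unit conversion.
EXABYTE = 10**18                       # bytes (decimal SI prefix)
daily_volume = 463 * EXABYTE           # bytes created per day, worldwide
seconds_per_day = 24 * 60 * 60

rate = daily_volume / seconds_per_day  # bytes per second
print(f"Average sustained rate: {rate / 10**15:.1f} PB/s")  # roughly 5.4 petabytes per second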
Much of the current research and development in HPC is focused on exascale computing: working towards a system with a floating-point performance of at least 1 exaFLOP/s (i.e. 10^18, or a million million million, floating-point calculations per second). The first petascale (10^15 FLOPS) computer entered operation in 2008 [2]. In June 2020 the Japanese supercomputer Fugaku achieved 1.42 exaFLOPS in the HPL-AI benchmark. Some of the main barriers to building a useful exascale machine are hardware speed, energy consumption, fault tolerance, and application scalability.
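As a rough illustration of the scales involved (a sketch, not drawn from the session itself; the Fugaku figure is the one quoted above), the prefixes map directly to powers of ten:

# FLOPS scale arithmetic: petascale vs. exascale.
PETAFLOPS = 10**15   # petascale threshold: 1 PFLOP/s
EXAFLOPS = 10**18    # exascale threshold:  1 EFLOP/s

fugaku_hpl_ai = 1.42 * EXAFLOPS  # June 2020 HPL-AI result (mixed precision), not classic HPL

print(f"1 EFLOP/s = {EXAFLOPS // PETAFLOPS}x petascale")                         # 1000x
print(f"Fugaku HPL-AI relative to 1 EFLOP/s: {fugaku_hpl_ai / EXAFLOPS:.2f}x")   # 1.42x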

In this session, a group of HPC experts will present different aspects of large-scale HPC as well as the potential challenges of exascale computing. The implementation of large-scale HPC, quantum computing, and AI will also be presented. The presentations will be followed by a panel discussion on the utility and challenges of exascale computing in the Nordics.

[1] Desjardins, Jeff. "How Much Data Is Generated Each Day?" World Economic Forum, 17 April 2019. https://www.weforum.org/agenda/2019/04/how-much-data-is-generated-each-day-cf4bddf29f/
[2] National Research Council (U.S.) (2008). The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering. The National Academies. p. 11. ISBN 978-0-309-12485-0.