Can We Reach Exascale?
Speaker: Ole Widar Saastad
Track: Track 2
Session: HPC
Description: The short answer is yes; it has already been done, and it is only a matter of will-power.
However, what is Exascale? This subject contains more than the childish "my machine is bigger than yours". What do we want or expect from an Exascale system?
We can hope that these large systems, representing investments of many millions of Euro, can be used for valuable science. That in turn opens the question: what kind of science, and what kind of scientific problems, can such a system tackle?
Stepping back, what do we mean by Exascale? 10^18 64-bit (commonly referred to as double precision) floating point calculations per second. Note that so far nothing has been said about processing any data.
You want to process data, real data, and in large quantities? That is something totally different; it involves different types of memory and storage. At this point we start approaching the real science regime, as all scientific computations involve data in some way. That again opens the question: how do we measure performance? Also known as benchmarking.
There are several well-known methods for assessing the performance of a so-called supercomputer; many of the tests are in widespread use, and some are used in the ranking of the 500 fastest supercomputers in the world. Using the tests employed to rank the Top500 systems, we can start with the purely theoretical number of floating point calculations per second, found by simply multiplying cores, clock frequency, vector width, etc., and published as Rpeak. The next number, the one actually used to rank the systems, is Rmax: how fast the system can solve a dense system of linear equations. This is known as the HPL benchmark. While this benchmark does linear algebra, it performs far more computation per amount of data than is commonly the case in real-world scientific applications, and it has come under criticism. To meet this criticism a second benchmark has been introduced which is more in line with real scientific applications, namely HPCG, which solves a sparse system using the Conjugate Gradient method. HPCG is designed to exercise computational and data access patterns that more closely match a broad set of important applications.
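As a minimal sketch, the theoretical Rpeak described above is just a product of hardware parameters. The configuration below is a hypothetical, illustrative one, not the specification of any real system:

```python
def rpeak_flops(nodes: int, cores_per_node: int,
                clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: nodes x cores x clock x FLOPs issued per cycle.

    flops_per_cycle folds in the vector width and fused multiply-add:
    e.g. a core with two 512-bit FMA units on 64-bit operands can issue
    2 units * 8 lanes * 2 ops (multiply + add) = 32 FLOPs per cycle.
    """
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Hypothetical machine: 10,000 nodes, 128 cores each, 2.5 GHz,
# 32 double-precision FLOPs per core per cycle.
peak = rpeak_flops(nodes=10_000, cores_per_node=128,
                   clock_hz=2.5e9, flops_per_cycle=32)
print(f"Rpeak = {peak:.2e} FLOP/s")  # about 1.0e17, still a factor of ten short of Exascale
```

No real application reaches this number; Rmax (HPL) is already below it, and data-bound benchmarks such as HPCG typically achieve only a few percent of Rpeak.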
How do these benchmarks perform on the different systems, and how do they relate to Exascale?
While the majority of the systems on the Top500 list are supercharged by accelerators, in most cases Graphics Processing Units (GPUs), yielding very high HPL performance numbers, they are not easy to exploit using mainstream legacy scientific applications. Systems like Fugaku (2nd on the HPL Top500 list, 1st on the HPCG Top500 list) are pure CPU-based systems, demonstrating that while accelerators are powerful, they are not universal compute elements. Not all scientific codes lend themselves easily to acceleration.
Are scientific applications actually experiencing Exascale? We start with HPCG and selected scientific applications: how far are common scientific applications from Exascale?
The last metric one might use to assess the fastest (and maybe most useful) systems in the world is the Gordon Bell Prize, which is awarded annually for an outstanding achievement in high-performance computing. Tracking systems by means of the Gordon Bell Prize is another way of ranking them.
So we can reach Exascale, but does that give us the best system for science?