Technology challenges in exascale supercomputing
Speaker: Abdulrahman Azab
Track: Track 2
Session: HPC
Description: Exascale computing is a measure of computer performance, referring to systems able to perform at least one exaFLOP (10^18 floating-point operations per second). In June 2022, the world's first public exascale computer, Frontier, became the world's fastest supercomputer.
Exascale/pre-exascale computing should not be thought of merely as a huge number of floating-point units, so that HPC centres can compete over who has the largest supercomputer in "size". If an exascale-"sized" supercomputer cannot run an exascale "application", is it actually an exascale supercomputer, or simply a set of smaller supercomputers located in one room?
There are several technical challenges on the way to actually reaching exascale computing. One is how to develop exascale applications, i.e. applications with billion-way parallelism: one billion floating-point units, each performing one billion calculations per second. Do such applications actually exist, and if not, can existing large-scale applications be scaled to this level in the future?
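The "billion-way parallelism" figure can be checked with back-of-envelope arithmetic (an illustrative sketch, not from the talk itself):

```python
# Billion-way parallelism: one billion floating-point units,
# each performing one billion calculations per second.
units = 10**9            # floating-point units
flops_per_unit = 10**9   # operations per second per unit
total_flops = units * flops_per_unit

print(total_flops)                 # 1000000000000000000
print(total_flops == 10**18)       # True: exactly one exaFLOP
```

This is why "exascale" and "billion-way parallelism" are two views of the same number: 10^9 × 10^9 = 10^18.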
A second challenge is power consumption. A theoretical analysis by the Exascale Study Group showed that, with traditional technologies, a 1-exaFLOP system could consume more than 600 megawatts.
A third challenge is the "memory wall". If exascale/pre-exascale systems are supposed to be the "fastest", how do we keep the time and energy required to move data from memory into the compute units, and from the compute units out to storage, from exceeding the time and energy required to perform a floating-point operation on that data?
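The memory-wall trade-off is often reasoned about with a roofline-style model: attainable performance is capped by either compute peak or memory bandwidth times arithmetic intensity. A minimal sketch with hypothetical machine numbers (both constants are assumptions for illustration):

```python
# Roofline-style bound: attainable FLOP/s given a kernel's arithmetic
# intensity (FLOPs performed per byte moved from memory).
PEAK_FLOPS = 1e18        # assumed compute peak: 1 exaFLOP/s
MEM_BANDWIDTH = 1e16     # assumed aggregate memory bandwidth: 10 PB/s

def attainable_flops(arithmetic_intensity):
    """Performance is limited by whichever roof is lower."""
    return min(PEAK_FLOPS, arithmetic_intensity * MEM_BANDWIDTH)

# A kernel doing 1 FLOP per byte is memory-bound on this machine,
# reaching only 1% of peak:
print(attainable_flops(1.0))     # 1e+16
# It would need at least 100 FLOPs per byte moved to hit the compute peak:
print(attainable_flops(100.0))   # 1e+18
```

The "wall" is that arithmetic intensity is a property of the application, so hardware peak alone cannot close this gap.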
The presentation will address the above, in addition to other general and a few system-specific challenges, and how they are currently handled in Frontier and the EuroHPC petascale systems.
All talks
- Addressing the Skills Gap in Cybersecurity
- After 10 years of cloudification
- An invitation to Quantum networking
- Assurance!
- Automatic verification of MPLS networks
- Better learning environments for the students by working with standards and integrations?
- Borealis Crossing the North Pole - towards US and Asia
- CNaaS in Norway - lessons learned after three years of operations
- Cables as part of Arctic Ocean Observing System (AOOS)
- Campus Network as a Service in Sunet
- Can We Reach Exascale?
- Can we transition away from passwords?
- Closing Plenary Talk
- Cloud Strategy and Data Migration Efforts at UiT, The Arctic University of Norway
- Creating a national CERT: Adapting to scale
- Cyber Inspiration or Never Waste a Good Crisis
- Cybersecurity Collaboration - Bringing Together Colleges and Universities in Ontario, Canada
- Cybersecurity Risks Challenge Nordic NRENs on Security Governance
- Development of the SMART Repeater Sensor System
- EDSSI European digital student service infrastructure
- Education and TF-EDU
- Fiber Optic Sensing in the Arctic Utilizing DAS (Distributed Acoustic Sensing) Technology
- Fiber optic time transfer from UTC (SP) to a VLBI location utilizing Sunet
- Funet Campus Network as a Service - First implementations
- Get ready for the 800 GE reality
- Goodbye Iceland - Hello Norway
- How to Mitigate a Global Supply Crisis
- ISO 27K certification for the University of Iceland
- InAcademia, some Swedish insights
- Introduction to SeQUeNCe, a Customizable Discrete-Event Simulator of Quantum Networks
- It's all about Communications
- Large Scale Distributed Sensing Infrastructures for Earth Observation
- Machine Learning Foundations for Network Operations
- Microdep and the Zero Outage Vision
- Modernising Funet Network Monitoring
- Moving North: Arctic Development in Times of Geopolitical Changes
- Network technology needs for supporting genomics data management in Europe
- OCRE - Driving Innovation through the adoption of commercial Cloud and EO services
- Official Welcome to Iceland
- Opening Keynote: How HPC contributes to Victories in Professional Cycling
- Operating the NORDUnet Next Generation Network - Challenges and Where to Next?
- Parallel and Scalable Machine Learning
- QKD and Quantum Communication activities in NRENs
- Quantum Computing
- Quantum Internet Alliance: Towards a pan-European
- Quantum Network Testbed
- SSH certificates for a federated world
- SURF Automation
- SciStream: Architecture and Toolkit for Data Streaming between Federated Science Instrument
- SeamlessAccess: Value for researchers, SP's, iDP's and federations
- Securing Research Infrastructure
- Securing identity services on Linux
- Self sovereign identity use cases
- Short Tips to Using a PMP Service
- Software complexity is bad for security
- Technology challenges in exascale supercomputing
- The Cinia/Far North Digital Cable through the North West Passage
- The ESnet6 Approach to Network Orchestration and Automation
- The GÉANT Community eHealth Task Force wants you !
- The Importance of Preparedness: Keys to Successful Business Continuity Planning
- The NEA3R Collaboration: Expanding Support for Global Science
- The Network Automation eAcademy
- The economic value of submarine cables in the Arctic
- The future Eco-system of our Digital learning and NORDUnet
- The needs of science and society for time and frequency services over optical-fibre networks in Europe
- Threat Intel
- Time and frequency distribution in the GÉANT network
- TimeMap - tool for latency and jitter monitoring
- Welcome from the host
- When you hear hoofbeats think Unicorns not Horses: Unique Service Creation in a Commodity World
- Words from the CEO
- eduMEET - secure, private and affordable Video Conferencing for NRENs
- geteduroam will make eduroam easier for institution admins and end users
- perfSONAR 5.0