
Intel's next-gen supercomputer to usher in exascale era in 2021

Intel's new supercomputer, named Aurora, will be the first of a new generation of exascale systems when it boots up in 2021
Image: US Secretary of Energy Rick Perry (left) and Intel CEO Bob Swan

The next generation of supercomputers has an official start date. Intel and the US Department of Energy (DOE) are teaming up to deliver the world's first exascale supercomputer in 2021, giving a huge boost to many different fields of research. Named Aurora, the new system will be a thousand times more powerful than the petascale generation that began in 2008 and is still in wide use today.

Aurora will boast a performance of one exaflop, which is equal to one quintillion floating point operations per second. That will make it the first exascale supercomputer in the United States, and unless another competitor emerges from the shadows in the next two years, it's on track to be the first in the world.

By comparison, the most powerful supercomputer in operation today – Summit, located at Oak Ridge National Laboratory (ORNL) – runs at 200 petaflops. One exaflop is equal to 1,000 petaflops, making Aurora five times more powerful.
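For readers who want to check those numbers, here is a minimal Python sketch of the arithmetic; the figures are simply the ones quoted above, not official benchmark results.

# Back-of-the-envelope check of the performance figures quoted in the article.
EXAFLOP_IN_PETAFLOPS = 1_000              # 1 exaflop = 1,000 petaflops
AURORA_PETAFLOPS = 1 * EXAFLOP_IN_PETAFLOPS   # Aurora's target: 1 exaflop
SUMMIT_PETAFLOPS = 200                    # Summit's quoted performance

# One exaflop is one quintillion (10^18) floating point operations per second.
print(f"1 exaflop = {10**18:.0e} FLOPS")

# Aurora vs. Summit: 1,000 / 200 = 5x
print(f"Aurora is ~{AURORA_PETAFLOPS / SUMMIT_PETAFLOPS:.0f}x more powerful than Summit")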

The supercomputer itself will be built using upcoming generations of Intel tech, as well as components from sub-contractor Cray, Inc. The base will be Cray's supercomputer system code-named Shasta, made up of 200 cabinets joined by the Slingshot interconnect and running a version of the Shasta software stack. Housed in this frame will be Intel's Xe compute architecture running its oneAPI software, as well as future generations of Xeon Scalable processors and Optane DC Persistent Memory.

So what can all this power be used for? Aurora is designed to run both artificial intelligence workloads like deep learning and more traditional high-performance computing (HPC) simulations at the same time, which can be applied to problems that require crunching huge amounts of data. That includes things like weather forecasting, cosmology, mapping the human brain, developing new materials and discovering new drugs.

"Achieving exascale is imperative, not only to better the scientific community, but also to better the lives of everyday Americans," says Rick Perry, US Secretary of Energy. "Aurora and the next generation of exascale supercomputers will apply HPC and AI technologies to areas such as cancer research, climate modeling and veterans' health treatments. The innovative advancements that will be made with exascale will have an incredibly significant impact on our society."

Aurora is explained in detail in the video below.

Source: Intel

U.S. Department of Energy and Intel to Deliver First Exascale Supercomputer

4 comments
guzmanchinky
Deep Thought. The answer is simply 42! But seriously, are these machines getting closer and closer to true AI? And in another decade will this be seen as a dinosaur when the first quantum computer comes online with 1000x the power?
Daishi
@guzmanchinky It's an interesting thought. We don't really understand the brain yet. Cells used for memory are also used for computational power. People used to say humans only use 10% of their brain but that's widely understood to be a myth. Doing some googling I saw a 2002 Wired article that estimated the computational power of the brain at only 100 teraflops. Ray Kurzweil claimed in 2005 (in a book about the singularity) that it was something like 10^16 or 10 petaflops. There is an article here (https://aiimpacts.org/brain-performance-in-flops/) that says 10^18 or 1 exaflop is a median estimate (this Intel supercomputer is the first exaflop machine) and it could be as high as 10^25 or 10 yottaflops. Matching the computational power of the brain is just one component though. Building the software to replicate one is an enormously complex task that would likely require machine learning models we don't know how to create yet. Interestingly, some of the stuff I found was from 2008 and earlier, and we broke 1 petaflop for the first time in 2008, so this system represents a 1,000-fold increase in performance. The combined performance of all 500 of the fastest supercomputers in 2008 was only 16.95 Pflop/s.
guzmanchinky
Daishi, thanks for the reply! Sometimes I think we are close and then sometimes machines astonish me with their inability to do even the simplest tasks. 'Tis a bold world we live in today...
Catweazle
Awesome, just the job for climate simulation!
It will give the wrong answer much quicker.