Wickedly Fast Frontier Supercomputer Ushers in the Next Age of Computing

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned the fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds in excess of a quintillion calculations per second, a milestone computing has chased for 14 years.

That’s a big number, so before we move on, it’s worth putting it in more human terms.

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic problems, say addition or multiplication. Now ask everyone to solve one problem per second for four and a half years. By marshaling the math skills of the Earth’s population for half a decade, you’ve now solved over a quintillion problems.

Frontier can do the same work in a second, and keep it up indefinitely. A thousand years’ worth of arithmetic from everyone on Earth would take Frontier just under four minutes.
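If you want to sanity-check that analogy yourself, the arithmetic fits in a few lines of Python (the population figure and Frontier’s 1.102-exaflop speed come from this article; the rest is plain multiplication):

```python
# Back-of-the-envelope check of the pencil-and-paper analogy.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

population = 7.9e9         # people, each solving one problem per second
frontier_flops = 1.102e18  # Frontier's measured speed in FLOP/s

# Everyone on Earth working for 4.5 years:
problems = population * 4.5 * SECONDS_PER_YEAR
print(f"{problems:.2e} problems solved")  # ~1.12e18, just over a quintillion

# How long Frontier needs for 1,000 years of global arithmetic:
seconds = population * 1000 * SECONDS_PER_YEAR / frontier_flops
print(f"{seconds / 60:.1f} minutes")  # ~3.8 minutes, just under four
```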

This staggering speed is ushering in a new era known as exascale computing.

The Age of Exascale

The number of floating-point operations, or simple math problems, a computer solves per second is denoted FLOP/s, or colloquially “flops.” Progress is tracked in multiples of a thousand: a thousand flops equals a kiloflop, a million flops equals a megaflop, and so on.

The ASCI Red supercomputer was the first to record speeds of a trillion flops, or a teraflop, in 1997. (For perspective, the Xbox Series X game console now packs 12 teraflops.) Roadrunner first broke the petaflop barrier, a quadrillion flops, in 2008. Since then, the fastest computers have been measured in petaflops. Frontier is the first to officially notch speeds over an exaflop, 1.102 exaflops to be exact, or roughly 1,000 times faster than Roadrunner.
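As a quick sketch, here is that ladder of prefixes applied to the milestone machines above (speeds as reported; the final line is simple division):

```python
# Each prefix on the FLOP/s ladder is a factor of 1,000 bigger than the last.
milestones = {
    "ASCI Red (1997)":   1.0e12,    # first to a teraflop
    "Roadrunner (2008)": 1.0e15,    # first to a petaflop
    "Frontier (2022)":   1.102e18,  # first official exaflop machine
}

for name, flops in milestones.items():
    print(f"{name}: {flops:.3e} FLOP/s")

# Roughly 1,000x from Roadrunner to Frontier:
print(milestones["Frontier (2022)"] / milestones["Roadrunner (2008)"])  # ~1102
```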

Today’s supercomputers are far faster than older machines, but they still occupy entire rooms, with rows of cabinets laced with wires and chips. Frontier, in particular, is a liquid-cooled Cray system running 8.73 million AMD processing cores. In addition to being the fastest in the world, it’s also the second most efficient, outdone only by a test system made up of one of its cabinets, with a rating of 52.23 gigaflops/watt.
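Those two reported numbers also imply the machine’s rough power draw; dividing speed by efficiency gives an estimate (a back-of-the-envelope figure, not an official specification):

```python
# Rough power draw implied by speed and efficiency.
frontier_flops = 1.102e18  # FLOP/s
efficiency = 52.23e9       # FLOP/s per watt (52.23 gigaflops/watt)

watts = frontier_flops / efficiency
print(f"~{watts / 1e6:.1f} megawatts")  # ~21.1 MW
```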

So, What’s the Big Deal?

Most supercomputers are funded, built, and operated by government agencies. They are used by scientists to model physical systems, such as the climate or structure of the universe, but also by the military for research into nuclear weapons.

Supercomputers are now tailored to run the latest algorithms in artificial intelligence as well. Indeed, a few years ago, Top500 added a new lower-precision benchmark to measure supercomputing speed on AI applications. By that measure, Fugaku topped the list way back in 2020, and it set the most recent machine learning record at 2 exaflops. Frontier smashed that record with AI speeds of 6.86 exaflops.
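“Lower precision” here means doing arithmetic with fewer bits per number, a trade-off AI workloads tolerate well. Here’s a minimal NumPy sketch of the idea (illustrative only, and nothing like the actual benchmark code):

```python
import numpy as np

# The same million values stored at two precisions.
x64 = np.random.rand(1_000_000)  # float64: 8 bytes per value
x16 = x64.astype(np.float16)     # float16: 2 bytes per value

print(x64.nbytes, "bytes vs", x16.nbytes, "bytes")  # 4x less memory to move

# The cost is rounding error, on the order of 1e-4 for float16:
print(np.abs(x64 - x16.astype(np.float64)).max())
```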

As very large machine learning algorithms have emerged in recent years, private companies have begun to build their own machines alongside governments. Microsoft and OpenAI made headlines in 2020 with a machine they claimed was the fifth fastest in the world. In January, Meta said its upcoming RSC supercomputer would be the fastest at AI in the world, at 5 exaflops. (It appears they’ll now need a few more chips to match Frontier.)

Frontier and other private supercomputers will allow machine learning algorithms to push the boundaries further. Today’s state-of-the-art algorithms boast hundreds of billions of parameters, or internal connections, but future systems are likely to grow into the trillions.
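A back-of-the-envelope sketch suggests why trillion-parameter models need machines of this class: merely storing the weights takes terabytes (the two-bytes-per-parameter figure assumes half precision; real training also needs memory for activations and optimizer state):

```python
def weight_memory_tb(n_params: float, bytes_per_param: int = 2) -> float:
    """Terabytes needed just to hold the model weights."""
    return n_params * bytes_per_param / 1e12

print(weight_memory_tb(175e9))  # ~0.35 TB for a hundreds-of-billions model
print(weight_memory_tb(1e12))   # ~2 TB for a trillion-parameter model
```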

So, whether it’s AI or modeling, Frontier will allow researchers to advance technology and do cutting-edge science in even more detail and at greater speed.

Is Frontier Really the First Exascale Machine?

Exactly when supercomputing first broke the exaflop barrier partly depends on how you define it and what’s been measured.

Folding@home, a distributed system made up of a motley crew of volunteer machines, broke an exaflop early in the pandemic. But according to Top500 cofounder Jack Dongarra, Folding@home is a special system that is “embarrassingly parallel” and works only on problems with pieces that can be solved totally independently.

More to the point, rumors flew last year that China had as many as two exascale supercomputers operating in secret. Researchers published some details about the machines in papers late last year, but they have yet to be officially benchmarked by Top500. In an IEEE Spectrum interview last December, Dongarra speculated that if exascale machines exist in China, the government may be avoiding shining a light on them for fear of stoking geopolitical tensions that could drive the US to limit key technology exports.

So, it’s possible China beat the US to the exascale punch, but going by the Top500, a benchmark the supercomputing field has used to determine top dog since the early 1990s, Frontier still gets the official nod.

Next: Zettascale?

It took about 12 years to go from terascale to petascale and another 14 to reach exascale. The next big leap forward may well take as long or longer. The computing industry continues to make steady progress on chips, but the pace has slowed and each step has become more expensive. Moore’s Law isn’t dead, but it’s not as steady as it once was.

For supercomputers, the challenge goes beyond raw computing power. It might seem like you should be able to scale any system to hit any benchmark you like: just make it bigger. But scale demands efficiency too, or energy requirements spiral out of control. It’s also harder to write software that solves problems in parallel across ever-bigger systems.

The next 1,000-fold jump, known as zettascale, will require innovations in chips, the systems connecting them into supercomputers, and the software running on them. A team of Chinese researchers predicted we’ll hit zettascale computing in 2035. But of course, no one really knows for sure. Exascale, previously predicted to arrive by 2018 or 2020, showed up a few years late.

What is more certain is that the hunger for greater computing power is unlikely to diminish. Consumer applications, such as self-driving cars and mixed reality, and research applications, such as modeling and artificial intelligence, will require faster, more efficient computers. If necessity is the mother of invention, you can expect ever faster computers for a while longer.

Image Credit: Oak Ridge National Laboratory (ORNL)
