PNNL advancing the frontiers of computing: the march toward exascale

Researcher Mahantesh Halappanavar and his colleagues on the Exascale Computing Project are developing methods and techniques to efficiently implement key combinatorial algorithms.

We often hear how supercomputers are used to tackle incredibly complex problems. They have played an essential role in the design of automobiles and aircraft, including the Boeing 787 Dreamliner; the creation of new anti-cancer drugs; and discoveries related to the origin of the universe. Despite these amazing successes, and the astonishing advancements in computing over the past several decades, there are still limits to what today’s most powerful computers can do.

Computational scientists at the Department of Energy’s Pacific Northwest National Laboratory are collaborating with colleagues from around the world, in government, academia and industry, to overcome these limits. They are working together, under leadership from DOE, to take supercomputers to the next level of performance. PNNL researchers are among the experts helping design tomorrow’s supercomputers and write the software codes that will run on them.

This national quest is called the Exascale Computing Project, and it seeks to deliver an “exascale-capable” computer by 2021. Exascale means that the computer can perform one quintillion calculations per second — that’s a “1” followed by 18 zeroes. Such a computer would be roughly 10 times faster than the current record holder, a Chinese supercomputer.

To grasp the difficulty of this task, suppose that we gave every person in the United States a PC and asked them to harness the collective power of those 300 million machines to solve a single problem. If they could do this, they would be operating at the exascale. But how would you program so many machines? How would they store and share results? And how would they be powered? The team is focused on answering these questions — as well as ensuring the resulting system is reliable and efficient.
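The thought experiment above can be checked with some back-of-the-envelope arithmetic. The sketch below (illustrative figures only, not an official PNNL calculation) works out how fast each of those 300 million PCs would have to be for the collection to reach one exaFLOPS:

```python
# Back-of-the-envelope check of the exascale thought experiment.
# Figures are illustrative: 10**18 operations/second target,
# 300 million PCs (roughly one per U.S. resident).
EXASCALE_OPS_PER_SEC = 10**18      # one quintillion: a "1" with 18 zeroes
NUM_PCS = 300_000_000

# Per-machine rate needed if the work splits perfectly (it never does:
# communication and coordination overheads eat into this in practice).
ops_per_pc = EXASCALE_OPS_PER_SEC / NUM_PCS
print(f"Each PC must sustain about {ops_per_pc / 1e9:.1f} billion ops/sec")
```

Each machine would need to sustain a few billion operations per second, which is within reach of a modern PC — the hard part, as the article notes, is programming, coordinating, and powering them as one system.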

Researchers are looking at how to redesign and reinvent the hardware, system software and applications that would be used for an exascale computer. This requires a testbed for systems evaluation. PNNL is taking the lead here by providing a first-of-its-kind computing proving ground, much like test tracks for automobiles. It provides access to advanced instruments that measure performance, power, reliability and thermal effects. It also provides modeling and simulation tools that assess individual technologies as well as the impact of combining them into a complete system. Results are broadly shared within the high-performance computing community.

PNNL researchers also are involved in developing applications for these future computing systems. It might seem like a chicken-and-egg situation, but code development and system development are being done in parallel so they will be ready to work together as expected and desired.

For example, researchers are enhancing PNNL’s world-leading computational chemistry capability to take full advantage of exascale computing technologies. This will allow researchers to design new drugs, develop new energy sources and invent new materials.

In another project, PNNL power engineers and mathematicians are creating a code that will simulate the electric grid. This code will be used to improve grid operations, manage fluctuating renewable power sources and avoid costly blackouts.

At the same time, PNNL computer scientists are advancing the frontiers of machine learning to glean insight from supercomputer simulations. The computer “learns” on its own, without explicit programming, to find patterns of interest in the massive output generated by the supercomputer. This speeds discovery and frees the human for other tasks.

We often take technological innovation for granted, expecting each new laptop or cell phone to be faster than the last one. But the leap to exascale will require revolutionary — not evolutionary — advances in computing. The national labs are up to this challenge, and when we deliver, it will improve our nation’s economic competitiveness, advance scientific discovery and strengthen our national security.
