AI Business is part of the Informa Tech Division of Informa PLC


The Petaflop pool: How pooling our HPC and AI resources can tackle the world’s biggest challenges

by Spencer Lamb, KAO Data
High performance computing and artificial intelligence are rising to the challenges of Covid-19, but could the collective way we're using HPC and AI be aimed at other global issues, like climate change?

And do we need to change our thinking?

The speed at which the Covid-19 pandemic has swept through the modern, interconnected world has been unprecedented.

To fight the impact of the virus, researchers are turning to modern tools – including the most extensive collection of high performance computing (HPC) resources ever thrown at a single problem.

Throughout 2020 the HPC community has come together in response to one of the worst crises of the modern age, shrinking the time needed to answer complex questions surrounding the pandemic from weeks and months to hours.

In March, the US established the Covid-19 High Performance Computing Consortium, offering access to 600 petaflops of compute sourced from federal agencies, National Labs and tech vendors including IBM, AWS, Google, Microsoft and HPE.

In that same month, the EU launched Exscalate4CoV, bringing together supercomputing facilities from Italy, Spain and Germany, alongside large research centres, pharmaceutical companies and biological institutes from across Europe. The resulting infrastructure platform, which totals 120 petaflops, was used to discover that a common osteoporosis drug, Raloxifene, could be an effective treatment for Covid-19 patients with mildly symptomatic infection.

Even regular PC users pitched in: Stanford University’s Folding@home project saw an influx of new users, at its peak reaching 470 petaflops of performance in a distributed system consisting of hundreds of thousands of home machines – twice the peak performance of Summit, the world’s second most powerful supercomputer.

Now HPC resources are used to run predictive modeling on infection rates, create AI-based tools for patient triage that are trained on thousands of real-life cases, and of course, search for a way to protect us against the virus.
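Predictive modeling of infection rates typically starts from compartmental models of the SIR family. As a purely illustrative sketch – a minimal, toy SIR model with invented parameters, not the code any consortium actually runs – the core idea fits in a few lines:

```python
# Toy SIR (Susceptible-Infected-Recovered) epidemic model integrated
# with a simple forward-Euler step. Parameters are illustrative only;
# real HPC pipelines use calibrated, stochastic, spatially resolved models.

def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Return daily (S, I, R) samples for a population of s0 + i0 + r0."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    steps_per_day = round(1 / dt)
    history = [(s, i, r)]
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt   # contact-driven spread
            new_recoveries = gamma * i * dt          # recovery at rate gamma
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        history.append((s, i, r))
    return history

# Example: 1M people, one seed case, R0 = beta / gamma = 2.5
traj = simulate_sir(1_000_000, 1, 0, beta=0.25, gamma=0.1, days=180)
peak_infected = max(i for _, i, _ in traj)
```

The petaflops go into everything this sketch omits: fitting parameters to thousands of real cases, resolving geography and demographics, and running large ensembles of scenarios.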

Thanks in large part to this collaborative and focused application of HPC, Covid-19 will not remain a problem forever – sooner or later, a permanent solution will emerge. But what if we could apply the same focus, the same sense of urgency, and the same collective compute power of the world’s HPC systems to other problems facing humanity as a species – problems like climate change, which in the long run will impact even more people: arguably the entire world’s population, and all upcoming generations?

Climate change is complex to model

HPC is an essential tool for monitoring and studying the planet’s climate: from weather forecasting to biosphere modeling to tracking the evolution of natural resources, planet-scale simulations can help illustrate the dangers of climate change and its future consequences like no other tool possibly could. Unfortunately, producing accurate simulations on a planetary scale is a complex, time-consuming and expensive undertaking.

Say you want to model the terrestrial biosphere to understand the interactions between vegetation and climate. That will be 2 million machine hours on CURIE, one of Europe’s fastest research supercomputers.

Need an accurate weather forecast for the entire world, sliced into 72 vertical layers, like NASA does? That will take 8,400 Xeon Haswell cores running round the clock, with internal network speeds of at least 56 gigabits per second.
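Figures like these translate into wall-clock time with simple arithmetic – assuming, generously, perfect parallel scaling. The pairing below of the 2-million-hour budget with the 8,400-core system is illustrative only, not how the original studies ran:

```python
# Back-of-envelope: how long a fixed compute-hour budget takes to burn
# at a given level of parallelism. Assumes perfect scaling, which real
# climate codes never quite achieve.

def wall_clock_days(compute_hours, cores):
    """Days of continuous running to consume compute_hours across cores."""
    return compute_hours / cores / 24

# If the 2M machine-hour biosphere study ran on 8,400 cores:
days = wall_clock_days(2_000_000, 8_400)   # roughly 9.9 days
```

Ten-ish days of a national-scale machine, for one study – which is why allocations on these systems are fiercely contested.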

The UK government recently announced plans to spend £1.2 billion on a new supercomputer for the Met Office – expected to become the world’s most powerful weather and climate simulation system once it goes into operation. The first phase alone will deliver a six-fold increase in the agency’s compute capacity.

So with all this compute already focused on climate modeling, why is it so hard to simulate the weather? Frankly, it’s a matter of resolution – the amount of detail in any simulation is limited by the compute resources available to researchers – and thus, access to more HPC equals better-quality research data, more detailed analysis and more accurate predictions.
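The resolution point hides a brutal scaling law: in an idealized explicit atmospheric model, refining the horizontal grid multiplies the cell count quadratically, and the CFL stability condition shrinks the timestep in proportion, so compute cost grows roughly with the cube of the refinement factor. A back-of-envelope sketch (idealized; real models scale somewhat differently):

```python
# Rough cost scaling for an explicit atmospheric model: halving the
# horizontal grid spacing gives 2x cells in each horizontal direction
# (4x total) and, via the CFL stability condition, a 2x smaller
# timestep -- roughly 8x the compute for the same simulated period.

def relative_cost(refinement):
    """Cost multiplier for refining horizontal grid spacing by `refinement`x."""
    horizontal_cells = refinement ** 2   # finer in both x and y
    timesteps = refinement               # CFL: dt shrinks with dx
    return horizontal_cells * timesteps

# Going from a 100 km grid to a 12.5 km grid (8x refinement):
multiplier = relative_cost(8)   # 512x the compute
```

This is why "just run it at higher resolution" is never a small ask – and why pooled petaflops matter so much for climate work.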

Weather forecasts, for example, only need to model a few days in advance, whereas a climate simulation needs to estimate what could happen over several decades. This, however, is not the toughest challenge out there, and with a level of collaboration similar to that now seen in bioinformatics, it’s achievable.

Today, one of the projects running on the aforementioned Summit supercomputer – over 200 petaflops, housed at the Oak Ridge National Laboratory (ORNL) in Tennessee – is dedicated to improving the earth system model (ESM). This is a climate simulation that models the movement of carbon through the earth system alongside data on plant ecology, land use, ocean chemistry and atmospheric CO2.
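The carbon-cycle side of an ESM is, at heart, bookkeeping of fluxes between reservoirs. As a toy illustration only – a two-box atmosphere–ocean exchange with invented rate constants and stocks, bearing no relation to the actual ORNL code – the core accounting looks like this:

```python
# Toy two-box carbon model: atmosphere <-> surface-ocean exchange plus a
# constant emission flux into the atmosphere. Rate constants and initial
# stocks (in GtC) are invented for illustration, not tuned values.

def step(atm, ocean, emissions, k_ao=0.02, k_oa=0.018, dt=1.0):
    """Advance both reservoirs by dt years; returns new (atm, ocean)."""
    flux_down = k_ao * atm * dt      # atmosphere -> ocean uptake
    flux_up = k_oa * ocean * dt      # ocean -> atmosphere outgassing
    atm_new = atm + emissions * dt - flux_down + flux_up
    ocean_new = ocean + flux_down - flux_up
    return atm_new, ocean_new

atm, ocean = 850.0, 900.0            # illustrative starting stocks
for _ in range(100):                 # 100 years at 10 GtC/yr of emissions
    atm, ocean = step(atm, ocean, emissions=10.0)
```

A real ESM replaces each box with a resolved 3-D grid and each rate constant with process-level chemistry and biology – which is where the hundreds of petaflops go.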

The primary reason ORNL wants a better-quality simulation is that it could help establish long-term climate trends that shape food production, driving dramatic change throughout the industry.

So, what would happen if, post Covid-19, we pooled our HPC resources across North America and Europe again and aimed them at the climate change conundrum? Could micro-level, meter-by-meter impacts be simulated? Could new inventions and innovations be realized? Could we get to the point where we convince the flat-earth brigade that catastrophic climate change is indeed happening?

Obviously it’s hard to answer any of those questions while we’re looking at them hypothetically, but to provide a comparison: in February of this year, NOAA proudly announced it was tripling its operational weather and climate supercomputing capacity to 40 petaflops.

As impressive as that is, it amounts to less than 7% of the petaflops the HPC consortium is currently aiming at Covid-19. Surely, with a problem as complex and as all-consuming as global warming, we should be putting our foot harder on the pedal?

At the moment the world’s focus is rightly on the immediate threat of Covid-19, and one of the outcomes of the fight against the disease has been a rediscovery of the value of global collaboration – and of how, with a laser-sharp focus on a singular issue, supercomputing can deliver incredible outcomes.

As things progress over the coming months, let’s hope that this is a lesson we can apply in other areas. Once the battle against the pandemic is won, maybe it’s time to bring climate change to the top of the global HPC petaflop agenda.


Spencer Lamb is VP of Sales and Marketing at KAO Data, a data center campus located near London.

 
