OAK RIDGE, TN – Summit, the world’s most powerful supercomputer, has set a world record for the fastest machine-learning computation, achieved during a climate change research project run by the US government, it was reported last week.
Summit, which occupies an area equivalent to two tennis courts, utilised more than 27,000 GPUs to run deep-learning algorithms at a rate of a billion billion operations per second, also known as an exaflop. Government scientists trained Summit’s algorithms to tackle climate change by detecting weather patterns, such as cyclones, in climate simulations used to generate three-hour atmospheric forecasts. The project aims to show how large-scale AI could improve century-long climate predictions and extract key insights from them.
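As a toy illustration of the kind of pattern detection described above — not the project’s actual deep-learning pipeline — a minimal sketch might flag candidate cyclone locations as deep local minima in a simulated surface-pressure field. All names, grid sizes, and threshold values here are hypothetical:

```python
import numpy as np

def detect_low_pressure_centers(field, threshold):
    """Flag grid cells that are local minima below a pressure threshold,
    a crude stand-in for the cyclone detection the article describes."""
    h, w = field.shape
    centers = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = field[i - 1:i + 2, j - 1:j + 2]
            # A candidate must be deeper than the threshold and the
            # minimum of its 3x3 neighbourhood.
            if field[i, j] < threshold and field[i, j] == window.min():
                centers.append((i, j))
    return centers

# Synthetic "pressure map" (hPa): uniform field with one deep low at (10, 15).
grid = np.full((32, 32), 1010.0)
grid[10, 15] = 960.0
print(detect_low_pressure_centers(grid, threshold=980.0))  # → [(10, 15)]
```

In practice, hand-built rules like this are exactly the “limited software” the researchers contrast with deep learning, which learns such detection criteria from labelled examples instead.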
“Imagine you have a YouTube movie that runs for 100 years. There’s no way to find all the cats and dogs in it by hand,” says Prabhat of Lawrence Berkeley National Laboratory. The software typically used to automate this process is limited, whereas the Summit case study shows that machine learning can perform better and could even help predict storm impacts such as flooding or property damage.
The project also demonstrates that additional computing power will ultimately be necessary for machine learning to fuel future breakthroughs. “We didn’t know until we did it that it could be done at this scale,” says Rajat Monga, an engineering director at Google.
Along with a number of other Googlers, Monga adapted the company’s open-source TensorFlow machine-learning software to Summit’s giant scale. He says that the work of adapting TensorFlow will inform Google’s later efforts to expand its internal AI systems. Ultimately, the project demonstrates both the value of deep learning in the fight against the climate crisis and the challenges ahead for AI developers.