New MIT climate model uses machine learning to make better spatial predictions.


Climate models are a key technology in predicting the effects of climate change. By running simulations of Earth's climate, scientists and policymakers can predict conditions such as sea-level rise, flooding, and rising temperatures, and make decisions about how to respond appropriately. But current climate models struggle to provide this information quickly or cheaply enough to be useful at small scales, such as the size of a city.

Now, the authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a way to leverage machine learning to harness the benefits of existing climate models, while reducing the computational costs required to run them.

“It turns conventional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS), who co-authored the paper with EAPS postdoc Anamitra Saha.

Conventional wisdom

In climate modeling, downscaling is the process of using a global climate model with a coarser resolution to produce finer detail over smaller regions. Imagine a digital image: A global model is a large image of the world with a small number of pixels. To downscale, you zoom in on just the part of the image you want to see — Boston, for example. But since the original image was low resolution, the new version is blurry. It doesn't give enough detail to be particularly useful.

“If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling tries to add this information back in by filling in the missing pixels. “Information can be added in two ways: either it can come from theory, or it can come from data.”
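The image analogy above can be made concrete with a toy sketch (an illustration of the general problem, not the paper's method): naively upsampling a coarse grid produces more pixels but no new information, which is exactly the gap downscaling must fill with theory or data.

```python
import numpy as np

# A "global model" grid as a coarse 4x4 field of values (e.g. rainfall).
coarse = np.arange(16.0).reshape(4, 4)

# Nearest-neighbour "zoom": replicate each coarse cell into a 4x4 block,
# turning the 4x4 field into a 16x16 one.
fine = np.kron(coarse, np.ones((4, 4)))

print(fine.shape)            # (16, 16)
# 16x more pixels, but exactly the same information content: only the
# original 16 distinct values appear in the upsampled field.
print(np.unique(fine).size)  # 16
```

Any realistic detail in the fine grid has to come from somewhere else — the theory or data the article describes next.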


Traditional downscaling often involves the use of models based on physics (such as wind upwelling, cooling and condensation processes, or the topography of an area), supplemented with statistical data taken from historical observations. But this method is computationally taxing: it takes a lot of time and computing power to run, which also makes it expensive.

A bit of both

In their new paper, Saha and Ravela have found a way to incorporate the data in another way. They have used a technique in machine learning called adversarial learning. It uses two machines: one generates data to go into the image, while the other judges those samples by comparing them with real data. If the second machine decides an image is fake, the first machine must try again, until it convinces the second machine. The ultimate goal of the process is to create super-resolution data.
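The two-machine game described above can be sketched in a minimal form (a generic toy example of adversarial learning, not the paper's architecture): a one-parameter-pair generator tries to mimic "real" data drawn from a known distribution, while a logistic-regression discriminator tries to tell the two apart; each update of one network pressures the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

mu, s = 0.0, 1.0   # generator: maps noise z to mu + s*z
a, b = 0.0, 0.0    # discriminator: logistic regression sig(a*x + b)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, 64)   # "real" data: samples from N(4, 1)
    z = rng.normal(0.0, 1.0, 64)
    fake = mu + s * z                 # generator's attempt

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sig(a * real + b), sig(a * fake + b)
    a -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update: adjust (mu, s) to fool the discriminator,
    # i.e. minimise -log D(fake).
    d_fake = sig(a * fake + b)
    g_x = -(1.0 - d_fake) * a         # gradient of the loss w.r.t. fake
    mu -= lr * np.mean(g_x)
    s -= lr * np.mean(g_x * z)

# After the adversarial back-and-forth, the generator's mean has
# drifted from 0 toward the real data's mean.
print(mu)
```

In the paper's setting the "real" samples would be high-resolution climate fields rather than numbers, but the pressure-and-counter-pressure dynamic is the same.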

Using machine learning techniques like adversarial learning in climate modeling is not a new idea; where it currently struggles is in its inability to handle large amounts of fundamental physics, such as conservation laws. The researchers discovered that simplifying the physics and supplementing it with statistics from historical data was enough to produce the results they wanted.

“If you augment machine learning with some information from both statistics and simple physics, all of a sudden, it's magical,” says Ravela. He and Saha began estimating extreme precipitation amounts by removing the more complex physical equations and focusing on water vapor and land topography. They then developed generalized rainfall models for hilly Denver and flat Chicago alike, applying historical accounts to correct the output. “It gives us, at very low cost, something akin to physics. And it gives us the same speed as statistics, but at much higher resolution.”
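The "applying historical accounts to correct the output" step can be illustrated with one standard recipe for statistical correction — mean-variance rescaling against observations. This is a hedged sketch of the general idea, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Historical rainfall observations (mm) and the output of a simplified,
# systematically biased model (here, an under-prediction by 40%).
obs = rng.gamma(shape=2.0, scale=10.0, size=1000)
model = 0.6 * rng.gamma(shape=2.0, scale=10.0, size=1000)

def mean_variance_correct(x, obs, model):
    """Rescale model values so their mean and spread match observations."""
    return (x - model.mean()) / model.std() * obs.std() + obs.mean()

corrected = mean_variance_correct(model, obs, model)
print(abs(corrected.mean() - obs.mean()) < 1e-6)  # True: means now agree
```

The cheap model supplies the spatial pattern; the historical statistics pin its magnitude to reality — which is the division of labor the quote describes.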

Another unexpected benefit of the results was that very little training data was needed. “The realization that just a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not clear from the beginning,” says Saha. The model takes only a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

Quantifying risk quickly

Being able to run models quickly and often is a critical requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: by looking at how extreme weather events will affect the country, decisions about what crops should be grown or where populations should move can be made considering a very broad range of conditions and uncertainties, as quickly as possible.

“We cannot wait months or years to be able to quantify this risk,” he says. “You need to look further into the future and at a larger number of uncertainties to say what might be a good decision.”

While the current model only looks at extreme rainfall, training it to evaluate other important events, such as tropical cyclones, winds and temperature, is the next step in the project. With a more robust model, Ravela hopes to apply it to other locations such as Boston and Puerto Rico as part of the Climate Grand Challenges project.

“We're very excited about the methodology we've put together, as well as the potential applications it could lead to,” he says.

Reference: Saha A, Ravela S. Statistical-physical adversarial learning from data and models for downscaling rainfall extremes. J Adv Model Earth Syst. 2024;16(6):e2023MS003860. doi: 10.1029/2023MS003860

This article has been reprinted from the following material. Note: content may have been edited for length and style. For more information, please contact the cited source. Our press release publication policy can be accessed here.

