Sat. Sep 24th, 2022


A research team at Google has developed a deep neural network that can make fast, detailed rain forecasts.

The researchers say their results are a dramatic improvement over previous techniques in two important ways. One of them is speed. Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a forecast an hour into the future. By contrast, Google says its system can produce results in less than 10 minutes, including the time to collect data from sensors in the United States.

This fast turnaround reflects one of the main advantages of neural networks: while they take a long time to train, applying a trained network to new data requires far less time and computing power.

A second advantage: higher spatial resolution. Google’s system divides the United States into squares 1 km on a side. By contrast, Google notes that in conventional systems, “computational requirements limit spatial resolution to about 5 kilometers.”

Put these together and you could have a forecasting system that is much more useful for short-term decisions. If you’re thinking about a bike ride, for example, you could look up a minute-by-minute rain forecast for your particular route. A conventional forecast, by contrast, might only tell you that there’s a 30 percent chance of precipitation in your city over the next few hours.

This animation compares a real weather pattern (center) with a conventional forecast (left) and Google’s own forecast (right). Google’s prediction has significantly more detail in both time and space.


Google says its forecasts are more accurate than conventional weather forecasts, at least for periods of less than six hours.

“On these short timescales, evolution is dominated by two physical processes: advection for the cloud movement, and convection for cloud formation, both of which are significantly affected by the local terrain and geography,” Google writes.

Beyond that six-hour window, however, the approach starts to falter. For longer periods, conventional physics-based modeling still yields more accurate predictions, Google admits.

How Google’s Neural Network Works

Interestingly, Google’s model is “physics-free”: it is not based on any a priori knowledge of atmospheric physics. The software does not attempt to simulate atmospheric variables such as pressure, temperature or humidity. Instead, it treats precipitation maps as images and attempts to predict the next few images in the series based on previous snapshots.

It does this using convolutional neural networks, the same technology that allows computers to correctly label images. You can read our deep dive on CNNs here.

In particular, it uses a popular neural network architecture called a U-Net, which was first developed for medical image segmentation. The U-Net has several layers that downsample an image from its original 256-by-256 form, producing a lower-resolution image where each “pixel” represents a larger area of the original image. Google doesn’t spell out the exact parameters, but a typical U-Net might convert a 256-by-256 grid to a 128-by-128 grid, then to a 64-by-64 grid, and finally to a 32-by-32 grid. As the number of pixels decreases, the number of “channels” – variables that record data about each pixel – grows.
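The downsampling half of this process can be sketched in a few lines of NumPy. This is only an illustration of the shape bookkeeping – real U-Nets learn strided or pooled convolutions, and the channel counts here are made up – but it shows how resolution halves while channels grow at each step:

```python
import numpy as np

def downsample(x):
    """Halve spatial resolution with 2x2 average pooling, then
    double the channel count (illustrative stand-in for a learned
    downsampling convolution)."""
    h, w, c = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return np.concatenate([pooled, pooled], axis=-1)

x = np.random.rand(256, 256, 4)  # a 256x256 grid with 4 input channels
for _ in range(3):
    x = downsample(x)
    print(x.shape)
# (128, 128, 8) -> (64, 64, 16) -> (32, 32, 32)
```

Each pass trades spatial detail for a richer per-pixel description, which is exactly the trade the article describes.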

Experience has shown that this downsampling process helps a neural network identify high-level features of an image. Values within a neural network are never easy to interpret explicitly, but this 32-by-32 grid can implicitly capture important variables such as temperature or wind speed in any part of the image.

The second half of the U-Net then upsamples this compact representation back to 64-, 128-, and finally 256-pixel resolution. At each step, the network copies in the data from the corresponding downsampling step. The practical effect is that the final layer of the network has both the original full-resolution image and summary data reflecting the high-level features inferred by the network.
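The whole encode-then-decode shape, including the copied-over skip connections, can be sketched with plain array operations. Again this is shape bookkeeping only, with invented channel counts and no learned weights – the point is how each decoder step concatenates the matching encoder output back in:

```python
import numpy as np

def down(x):
    h, w, c = x.shape
    p = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return np.concatenate([p, p], axis=-1)          # halve resolution, double channels

def up(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)    # nearest-neighbour upsampling

x = np.random.rand(256, 256, 4)
skips = []
for _ in range(3):                                  # encoder: 256 -> 128 -> 64 -> 32
    skips.append(x)
    x = down(x)

for skip in reversed(skips):                        # decoder: 32 -> 64 -> 128 -> 256
    x = up(x)
    x = np.concatenate([x, skip], axis=-1)          # skip connection: copy encoder data

print(x.shape[:2])  # back to (256, 256)
```

The final array is full-resolution again, but its channels now mix fine detail from the skip connections with the summary features computed at low resolution.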

To make a weather forecast, the network takes an hour’s worth of previous precipitation maps as input. Each map is a “channel” in the input image, just as a conventional image has red, green, and blue channels. The network then attempts to output a series of precipitation maps showing precipitation over the next hour.
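Concretely, stacking the past hour of radar maps into channels looks like this. The 5-minute interval and frame count are assumptions for illustration – Google doesn’t specify them here – but the stacking itself is exactly the red-green-blue analogy from the text:

```python
import numpy as np

# Assume one radar snapshot every 5 minutes: one hour of history
# becomes a 12-channel input "image" (each channel is one map).
past_frames = [np.random.rand(256, 256) for _ in range(12)]
model_input = np.stack(past_frames, axis=-1)
print(model_input.shape)  # (256, 256, 12)
```

The model’s output has the same structure: a stack of maps, one channel per future time step.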

Like any neural network, this one is trained with real world examples. Thousands of real weather patterns from the past are fed into the network, and the training software adjusts the network’s many parameters to better approximate the correct results for each training example. After repeating this process millions of times, the network gets pretty good at approximating future rainfall patterns for data it hasn’t seen before.
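That training loop – predict, measure the error, nudge the parameters, repeat – can be shown with a deliberately tiny stand-in model. Here a single scalar weight is fitted by gradient descent on a mean-squared error, on synthetic data; a real network does the same thing with millions of convolutional weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training set": the next frame is a scaled copy of the
# previous one, with true scale 0.8. The model must recover that scale.
true_w = 0.8
prev = rng.random((100, 16, 16))          # 100 example frames
nxt = true_w * prev                       # ground-truth next frames

w = 0.0                                   # the model's single parameter
lr = 0.1
for step in range(200):
    pred = w * prev                       # forward pass
    grad = 2 * np.mean((pred - nxt) * prev)   # d(MSE)/dw
    w -= lr * grad                        # adjust parameter toward lower error

print(round(w, 3))  # converges toward 0.8
```

After enough repetitions the parameter settles near the value that best reproduces the training targets, which is the essence of what the article describes at vastly larger scale.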

By akfire1
