Is It Going To Rain Tomorrow? A Quick Introduction To Weather Forecasts

Is it going to rain tomorrow?

Smartphones have made answering such questions trivial. While this is how most people interact with weather forecasts, I had never thought about how those forecasts were made in the first place. With all the press and hype around Artificial Intelligence and Machine Learning, one might assume that weather forecasts rely on some sort of AI to predict the weather. However, I was very surprised to find out that artificial intelligence techniques are rarely, if ever, used by the official forecasting stations, which made me wonder: how do they do it?

From 1922 to the present

Forecasting the weather has a long history, but the methods used today (based on calculation) date back to the 1920s, when the British scientist L.F. Richardson published his book Weather Prediction by Numerical Process. He showed how the differential equations describing atmospheric motion could be approximated by dynamical update rules of the form:

Future value = Initial value + Change

Such a formulation allows us to take actual measurements and then extrapolate those values into the future in small increments of time by repeating the process over and over. Richardson himself tried to forecast the weather based on these equations, but his predictions were far off due to poor data quality. With the development of computers and better measurements, predictions based on Richardson's work have improved.
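To make the "future value = initial value + change" idea concrete, here is a minimal, purely illustrative sketch rather than any real forecasting code: it steps a single toy variable forward in time with a simple Euler update, where `rate_of_change` is a made-up stand-in for the physics that a real model would compute from the dynamical equations.

```python
import math

def rate_of_change(temperature, hour):
    # Toy stand-in for the physics: a daily heating/cooling cycle
    # plus relaxation toward a 15 °C baseline. A real model would
    # evaluate the atmospheric equations here instead.
    return 2.0 * math.sin(2 * math.pi * hour / 24) - 0.1 * (temperature - 15.0)

def forecast(initial_temperature, hours, dt=0.25):
    """Repeat 'future value = initial value + change * dt' in small steps."""
    t, temperature = 0.0, initial_temperature
    while t < hours:
        temperature += rate_of_change(temperature, t) * dt  # one Euler step
        t += dt
    return temperature

print(forecast(initial_temperature=12.0, hours=24))  # a toy 24-hour "forecast"
```

The repetition of small steps is the whole trick: each step only needs the current state and the rate of change, exactly as in Richardson's scheme.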

Cubic grids used in many forecasting models today. Source

Fast-forward to the present day and we find a variety of forecasting models, such as the American model, known as the GFS model, and the European model, known as the ECMWF model. Let's start with some basic concepts first.

In a common data representation scheme called the grid representation, the atmosphere is divided into cubic grid cells (see the image above), and the dynamical equations are used to calculate the flow and values of a number of variables inside those cells, such as temperature, wind, pressure, humidity, and precipitation, among many others. Regardless of how the data are represented, those equations only approximate the physical processes and laws of the atmosphere. Once we get the initial measurements of the weather variables from dedicated data collection systems (see below), the models are used to extrapolate the values into the future.
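As a rough illustration of the grid idea (a sketch with made-up numbers and a single variable, not a real model state), you can picture the model state as an array of values on a latitude × longitude × level grid that gets updated cell by cell at every time step:

```python
import numpy as np

# Toy model state: temperature on a (latitude, longitude, level) grid.
n_lat, n_lon, n_lev = 18, 36, 10                      # made-up grid dimensions
temperature = 15 + 10 * np.random.rand(n_lat, n_lon, n_lev)

def step(temp, wind_cells_per_step=1, diffusion=0.1):
    """One illustrative update: shift values eastward, then smooth slightly.

    A real model would instead evaluate the full dynamical equations
    (momentum, continuity, thermodynamics, moisture) in every cell.
    """
    advected = np.roll(temp, shift=wind_cells_per_step, axis=1)   # zonal transport
    neighbors = (np.roll(advected, 1, axis=0) + np.roll(advected, -1, axis=0)) / 2
    return (1 - diffusion) * advected + diffusion * neighbors

for _ in range(24):               # extrapolate the gridded state step by step
    temperature = step(temperature)
```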

The equations that describe the physics of the atmosphere. I am putting this image here just in case you are wondering what equations I am talking about. Are you happy now?

The core element shared between those models is the observational data. Petabytes of data are collected every day to make weather forecasts. The vast majority of those data (around 95%) come from satellite observations. The rest come from surface observations at weather stations, from weather balloons and radars, and from special sensors attached to many commercial airplanes, among many other sources.

Model resolution and local models

Different spatial resolutions: 2.5km (left), 7.5km (middle) and 80km (right). Source.

Models differ in their resolutions. When you think about natural geographical features, you might notice that some areas are more homogeneous than others. Having a low-resolution model (with large grid cells) is usually fine in those cases. The problem arises when those large cells are used in areas where the weather dynamics and variables are not constant, which introduces a big source of error because the initial observations are not set correctly; this is why rain or thunderstorms are not always accurately predicted in advance. "Well, why not just use the smallest possible grid size?", you might ask. The answer lies in the limits of the computational resources we have. Smaller grid sizes mean more computations and more time to come up with the next forecast. The American model has a lower resolution (grid centers are 13 km apart) while the European model has a higher resolution (grid centers are 9 km apart).
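To see why "just use the smallest grid" does not work, here is a back-of-the-envelope sketch (the scaling argument is standard, but the numbers are only illustrative): dividing the grid spacing by some factor multiplies the number of cells by that factor cubed, and since the time step usually has to shrink along with the spacing, the total work grows roughly with the fourth power of the refinement.

```python
def relative_cost(refinement_factor):
    """Rough cost scaling when grid spacing is divided by `refinement_factor`.

    Cells grow as factor**3 (three spatial dimensions) and the number of
    time steps grows roughly as factor (stability limits the step size),
    so total work scales approximately like factor**4.
    """
    return refinement_factor ** 4

# Going from 13 km spacing (GFS-like) to 9 km spacing (ECMWF-like):
print(relative_cost(13 / 9))   # roughly 4x more work, all else being equal
```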

But this is not the end of the story: there are also regional/local models that work on a smaller scale, at the level of specific regions. In fact, many advanced countries have their own local models to benefit from the finer resolution.

Some examples of local weather models along with their resolutions in km. Each of those models serves a specific region or a country. Check the source for details. Source.

Computational time

Forecasts are pointless if they are not provided frequently enough. However, the computations take considerable time, and there is a very important trade-off between model complexity and forecast frequency (speed).

The European model provides forecasts only 2 times a day, each taking about 6 hours to calculate, while the American model provides forecasts 4 times a day, each taking about 3.5 hours. Both models provide forecasts extending 10 to 30+ days into the future.

As you can imagine, all models become less accurate the further into the future they predict, due to the chaotic nature of the weather and the so-called butterfly effect: small perturbations lead to significant changes.
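The butterfly effect is easy to demonstrate with the classic Lorenz-63 system, the toy model Edward Lorenz used while studying atmospheric convection. This is a sketch of the phenomenon only, not of any operational model: two runs that differ by one part in a million in their starting point end up in completely different places.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz-63 equations.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)      # perturbed by one part in a million

for _ in range(3000):         # integrate both runs forward in time
    a = lorenz_step(*a)
    b = lorenz_step(*b)

print(a)   # the two trajectories have diverged completely
print(b)
```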

The Model Wars

How accurate are those forecasts? It may be hard, and beyond the scope of this post, to fully answer this question. Instead, I will only highlight a well-known comparison between the GFS and ECMWF models, that is, between the American and the European model (although other models/methods may be superior to both).

America vs. Europe. I don’t know who won but the models offer very different predictions. Source.

Most of the time, those differences are not crucial. Sometimes, however, they can be the difference between life and death. The European model is praised for seeing Hurricane Sandy coming 8 days in advance, while the American model predicted that it would veer off into the ocean.

The diverging forecasts of Hurricane Sandy 5 days before it finally hit the east coast. The European model (the red line) predicted the correct course while the American model (the blue line) was off, way off… WAAAY OFF. Source.

Hurricane Sandy ended up the fifth-costliest hurricane in U.S. history, with 233 fatalities and $68.7 billion worth of damage. The American model, however, actually does better than the European model in some other areas, such as predicting Hurricane Dorian in 2019 before the European model did. These back-and-forth comparisons between models, however, seem very unhealthy to me, as they distract from the reasons why one model predicts something better than the others.

Most importantly, a new class of methods has been increasingly adopted in weather forecasting: model ensembles. The idea is simple: instead of relying on a single prediction for a very chaotic process, make slightly perturbed forecasts or use different models, then develop a way to combine those into a single, representative forecast. The idea of model ensembling is very popular in the data science community, and it is probably the closest point of interaction between data science/statistical modeling and weather forecasting models. The good news is that ensemble methods seem to give more accurate predictions.
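In data-science terms, this amounts to averaging over perturbed runs. Here is a minimal sketch of the idea (reusing a toy forecast function like the one sketched earlier, not a real ensemble prediction system): perturb the initial conditions slightly, run each member, and summarize the central estimate and the spread.

```python
import random

def toy_forecast(initial_temperature):
    # Stand-in for a full model run (see the earlier Euler sketch).
    return initial_temperature + random.gauss(1.5, 1.0)

def ensemble_forecast(initial_temperature, n_members=50, perturbation=0.5):
    """Run many slightly perturbed forecasts and combine them."""
    members = [toy_forecast(initial_temperature + random.gauss(0, perturbation))
               for _ in range(n_members)]
    mean = sum(members) / n_members
    spread = (sum((m - mean) ** 2 for m in members) / n_members) ** 0.5
    return mean, spread          # central estimate plus an uncertainty measure

mean, spread = ensemble_forecast(initial_temperature=12.0)
print(f"forecast: {mean:.1f} °C ± {spread:.1f} °C")
```

The spread is itself useful information: a wide spread tells the forecaster that the atmosphere is in a regime where small errors grow quickly.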

However good and accurate model predictions are, they all seem to fall short when it comes to extreme weather. Meteorologists think that extreme weather is becoming harder to predict due to climate change, and that historical data may not be as useful. While those trends are definitely worrying, researchers are finding ways to tame the chaos and help governments make informed decisions.

The presentation in this post is meant to highlight the big points in weather forecasting, and that meant having to skip a lot of necessary details. The differences between weather models extend beyond model resolution: models make different decisions about what data to collect, how to combine data with model forecasts (known as data assimilation), and which numerical methods to use to calculate the variables of interest. Furthermore, the differences also stem from the purposes each of those models serves: some models are built to predict global climate trends, some are built to model hurricanes, others are meant to model the oceanic climate, and so on. Take a look at this partial list and you will be blown away by how many models there are, and it will make you wonder which model we should trust.
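To give a flavor of what data assimilation means, here is a deliberately simplified scalar sketch (operational systems use far more elaborate variational and ensemble methods): the model's prior forecast of a variable is blended with a new observation, weighting each by how much it is trusted.

```python
def assimilate(forecast, forecast_variance, observation, observation_variance):
    """Blend a model forecast with an observation using error-variance weights.

    The gain gives more weight to whichever source has the smaller error
    variance; the result (the 'analysis') seeds the next model run.
    """
    gain = forecast_variance / (forecast_variance + observation_variance)
    analysis = forecast + gain * (observation - forecast)
    analysis_variance = (1 - gain) * forecast_variance
    return analysis, analysis_variance

# Model says 14.0 °C with error variance 4; a station reports 12.5 °C with variance 1.
print(assimilate(14.0, 4.0, 12.5, 1.0))   # ≈ (12.8, 0.8)
```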

Whichever model we eventually trust, we should at least appreciate the history, science, and engineering behind the answer to the very simple question: is it going to rain tomorrow?
