Dr. Hannah Christensen of the University of Oxford argues that we are getting better at predicting the weather because computers are getting faster and the mathematics is getting smarter. In this article she explains how meteorologists are beginning to use stochastic processes - mathematical techniques long used in the financial industry.
Using stochastic methods, today's seven-day weather forecasts are as good as three-day forecasts were two decades ago.
In 2017 the UK Met Office launched its new £97 million Cray XC40 supercomputer, which has significantly improved the accuracy and detail of weather forecasts.
How does it do that? I study weather prediction in the
Department of Atmospheric, Oceanic and Planetary Physics at the University of Oxford, and improving forecasts is not just a matter of using more computing power - although that certainly helps - but also of using it in more sophisticated ways.
Let's look back at how forecasting used to be done, because it has changed enormously over the past few decades.
Until the 1960s, forecasts were based on keeping records of observations and searching those records for patterns, or analogues. The idea was simple: if you keep weather records for long enough, the forecaster has a (relatively) easy job - find a day in the archive when the atmosphere looked roughly the same as it does today, and present how the atmosphere evolved from that starting point as this week's forecast.
But it did not work well. The reason is chaos, or the butterfly effect: how the weather develops over days or weeks is highly sensitive to small details of the state of the atmosphere, and those details can be far too fine to be captured by satellite observations and weather balloons.
Forecasting by analogy was a poor approach, but it was the only option, because the alternative - using equations to build a mathematical model - was impractical before electronic computers arrived.
The English mathematician
Lewis Fry Richardson was the first to try to forecast the weather with a mathematical model, during the First World War. But he ran into a serious problem: to compute a forecast just six hours ahead, he had to solve partial differential equations by hand, which took him about six weeks - and the result was wildly inaccurate.
But Richardson's idea was right, and it now underpins every computer simulation of the atmosphere.
A modern weather forecast begins with mathematics - with the equations that describe the evolution of the atmosphere:

First, we have the Navier-Stokes equation - in fact three equations, describing conservation of momentum in each of the three coordinate directions. The Earth's rotation is accounted for by working in a rotating reference frame: the second term on the right-hand side represents the Coriolis force, and the third the centrifugal force. The equation is particularly hard to solve because the advective derivative D/Dt hides some very nasty nonlinear terms in the velocity u.
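The original article shows this equation as a figure; a standard textbook form consistent with the description above (pressure gradient, then Coriolis, then centrifugal force, plus gravity and friction) is

$$\frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\nabla p \;-\; 2\,\boldsymbol{\Omega}\times\mathbf{u} \;-\; \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}) \;+\; \mathbf{g} \;+\; \mathbf{F}, \qquad \frac{D}{Dt} \equiv \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla,$$

where u is the wind velocity, ρ the density, p the pressure, Ω the Earth's angular velocity, r the position vector, g gravity and F friction. The troublesome nonlinearity lives in the u·∇u part of the advective derivative.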
Then we have the continuity equation: whatever flows into a volume must either flow back out of it, or the density inside the volume must increase.
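In symbols - again a standard form rather than a reproduction of the original figure:

$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0.$$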
Thirdly, we have the thermodynamic energy equation, where Q is the diabatic heating rate. And finally, we have the equation of state for the atmosphere.
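One common way of writing these two, consistent with the notation above, is

$$c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} = Q, \qquad p = \rho R T,$$

where T is the temperature, c_p the specific heat of air at constant pressure, and R the gas constant for dry air.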
So what do we do with these equations?
The first step is to discretise the equations of motion. We cannot compute exactly how every little gust of wind will swirl, and in fact we do not need to. So we divide the atmosphere into small boxes - in a weather model they might be 10x10 km horizontally and anywhere from a few hundred metres to several kilometres deep. Within each box we treat the atmosphere as uniform, with a single number for the average temperature, one for the humidity, one for the wind speed, and so on. And then the problem becomes obvious: what about the processes that happen on smaller scales?
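To make that concrete, here is a deliberately crude sketch in Python (not taken from any operational model) of what "one average number per box, stepped forward in time" means in practice: a coarse grid of boxes, one temperature each, carried along by the wind.

```python
import numpy as np

# A toy illustration of discretisation (this is not the Met Office model!):
# every grid box is represented by a single average value per field,
# and those averages are stepped forward in time.

nx, ny = 36, 18        # a very coarse horizontal grid of boxes
dx = 10_000.0          # horizontal box size in metres (10 km)
dt = 60.0              # time step in seconds

# one number per box for each field
T = 280.0 + 10.0 * np.random.rand(ny, nx)   # average temperature, K
u = 5.0 * np.ones((ny, nx))                 # zonal wind, m/s
v = np.zeros((ny, nx))                      # meridional wind, m/s

def advect(field, u, v, dx, dt):
    """One upwind finite-difference step of the advection term -u . grad(field),
    assuming non-negative winds and a periodic domain."""
    dfdx = (field - np.roll(field, 1, axis=1)) / dx
    dfdy = (field - np.roll(field, 1, axis=0)) / dx
    return field - dt * (u * dfdx + v * dfdy)

for _ in range(60):   # integrate one hour forward
    T = advect(T, u, v, dx, dt)

print("Domain-mean temperature after one hour:", round(T.mean(), 2))
```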
Such processes - clouds, for example - still matter for the forecast, so they must be taken into account. Not only do they influence how the larger scales develop, they also produce the weather that matters to those of us on the ground, such as rain and strong gusts of wind.
We represent these processes using approximate equations, known as
parametrization schemes. These approximations and simplifications are a major source of error in weather forecasts.
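As an illustration only - the scheme below is hypothetical and far cruder than anything used operationally - a parametrization might say: whatever moisture a box holds above saturation rains out and warms the box through latent heat release.

```python
import numpy as np

L_V = 2.5e6   # latent heat of vaporisation, J/kg
C_P = 1004.0  # specific heat of air at constant pressure, J/(kg K)

def condensation_scheme(T, q, q_sat):
    """A drastically simplified, hypothetical parametrization: any moisture in
    a grid box above saturation immediately rains out, warming the box through
    latent heat release. Real schemes are far more elaborate - and the gap
    between approximation and reality is where forecast error creeps in."""
    excess = np.maximum(q - q_sat, 0.0)   # super-saturated moisture, kg/kg
    rain = excess                         # all of it falls out in this toy scheme
    T_new = T + (L_V / C_P) * excess      # latent heating of the box
    q_new = q - excess
    return T_new, q_new, rain

T = np.array([285.0, 290.0])          # box-average temperatures, K
q = np.array([0.012, 0.008])          # specific humidity, kg/kg
q_sat = np.array([0.010, 0.010])      # saturation humidity (held fixed here)

print(condensation_scheme(T, q, q_sat))
```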
Ideally, we would make our boxes as small as possible, include every small-scale process we can think of, and make those schemes as accurate as we can. But in the end we have to accept that a computer model will never be perfect. It will always be just a model.
So instead of attempting the impossible - predicting exactly what the weather will do next Tuesday - would it not be more useful to accept our limitations and issue a probabilistic forecast for the week ahead?
Instead of stating categorically that it will rain, we acknowledge the uncertainty in our forecast - perhaps the probability of rain is only 90%. To do this, we have to look critically at our model and work out exactly where the errors in the forecast come from.
That is what my research is about. I work on a new technique known as a
stochastic parametrization scheme. It uses random numbers (that is what "stochastic" means) to represent the uncertainty that unresolved small-scale processes introduce into the forecast. Instead of computing only the most likely clouds over Oxford, for example, we compute the effect of many different possible clouds on the larger-scale weather, to see how that changes the forecast. In other words, our parametrization schemes are now probabilistic.
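A minimal sketch of that idea, assuming a multiplicative-noise approach in the spirit of ECMWF's SPPT scheme (the real scheme uses random fields correlated in space and time, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def perturb_tendency(tendency, sigma=0.3):
    """Multiply a parametrized tendency by a random factor drawn around 1.
    This is a bare-bones version of the idea behind schemes such as ECMWF's
    SPPT (stochastically perturbed parametrization tendencies); the operational
    scheme uses spatially and temporally correlated random fields rather than
    independent numbers like these."""
    factor = 1.0 + sigma * rng.standard_normal(np.shape(tendency))
    return factor * np.asarray(tendency)

# e.g. a heating tendency (K per time step) produced by some parametrization scheme
heating = [0.02, 0.05, 0.00, 0.10]
print(perturb_tendency(heating))   # a different, equally plausible heating every call
```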
So now, instead of producing a single "most likely" forecast, we run an ensemble of forecasts for the week ahead. Each starts from slightly different but equally plausible initial conditions, consistent with our measurements of the atmosphere, and each uses different random numbers in the stochastic parametrization scheme, representing different plausible small-scale effects.
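Here is a toy version of such an ensemble, using the Lorenz '63 equations as a stand-in for the atmosphere rather than a real weather model:

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz '63 system - the classic chaotic toy model,
    standing in here for a real atmospheric model."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

best_guess = np.array([1.0, 1.0, 20.0])   # our best estimate of today's state
n_members, n_steps = 50, 1000

finals = []
for _ in range(n_members):
    # each ensemble member starts from a slightly different, equally plausible state
    state = best_guess + 0.01 * rng.standard_normal(3)
    for _ in range(n_steps):
        state = lorenz63_step(state)
    finals.append(state[0])

finals = np.array(finals)
print(f"ensemble mean {finals.mean():.2f}, spread {finals.std():.2f}")
# small spread: a predictable situation; large spread: an unpredictable one
```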
There is nothing new about using stochastic processes to represent uncertainty - they are commonplace in financial modelling, for example - but their use in weather forecasting is only now gaining momentum, even though meteorologists were among the first to describe chaotic systems.
An interesting feature emerges: some weather patterns are highly predictable. Errors in the measured initial conditions and in the model's simplifications have little effect on how the situation develops, and the forecasts in our ensemble stay close to one another.
A good example is a blocking anticyclone - a high-pressure system that can sit over Scandinavia for days or even weeks, drawing in cold air from the north and deflecting storms to the south of Britain. Those bitterly cold but sunny winter days? That's its work.
In other cases the uncertainties grow, the forecasts for the coming week diverge sharply, and we know the atmosphere is in a highly unpredictable state. That information is valuable in itself! A striking example is the notorious
Great Storm of 1987. Michael Fish [a well-known Met Office forecaster who presented BBC weather forecasts for many years - translator's note] was not to blame for the forecast failing to come true - that evening the atmosphere was simply in a very unpredictable state.
The Great Storm of 1987, as forecast 66 hours ahead by a modern probabilistic forecasting system. Top left: the observations, an intense low-pressure system with very strong winds; to its right, the single most likely forecast - the kind of thing Michael Fish would have seen. The remaining fifty panels, equally likely forecasts from today's probabilistic system, show just how uncertain the situation was.
As our computers get bigger and better (and our observations improve), our forecasts improve too.
The graph below shows the skill of the "most likely" forecast produced at the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading (I work with their computer simulations; their supercomputer is one of the largest in the country). You can see the accuracy of the forecasts improving over time: a seven-day forecast made today is as accurate as a five-day forecast was twenty years ago.

We can also measure the quality of our probabilistic forecasts - this is not some cunning way of dodging responsibility ("well, we only said sunshine was possible"). The reliability of the forecast probabilities can be assessed statistically, and we are in fact seeing rapid improvement in probabilistic forecast skill over the past decade or so: today's seven-day forecast is as good as the three-day forecast was twenty years ago.
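One standard statistical measure of this kind is the Brier score; a minimal sketch (the example numbers are invented):

```python
import numpy as np

def brier_score(forecast_prob, outcome):
    """Brier score: the mean squared difference between the forecast probability
    of an event and whether it actually happened (1 = yes, 0 = no).
    Lower is better; 0 would be a perfect probabilistic forecast."""
    forecast_prob = np.asarray(forecast_prob, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    return float(np.mean((forecast_prob - outcome) ** 2))

# e.g. ten days of "probability of rain" forecasts and whether it actually rained
p_rain = [0.9, 0.2, 0.7, 0.1, 0.5, 0.8, 0.3, 0.9, 0.05, 0.6]
rained = [1,   0,   1,   0,   1,   1,   0,   1,   0,    0]

print(f"Brier score: {brier_score(p_rain, rained):.3f}")
```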
The skill of probabilistic weather forecasts over the past two decades. Green: the 7-day forecast; red: the 5-day forecast; blue: the 3-day forecast.
But ultimately the problem of limited computing power never goes away. It is great that the Met Office has a new supercomputer, but that simply raises the question of how best to use the extra resources.
We can never be certain what the future will bring, including next week's weather. But by acknowledging this, and by trying to assess the uncertainty in our predictions honestly, we can give the public forecasts they can trust - and let people decide for themselves how to use that extra information.