How well have climate models done in the upper atmosphere?

If people who dismiss climate science ever point to factual data, you can almost bet that it is data from satellite measurements of temperatures in the upper atmosphere. At least until the record-breaking global heat in 2015 and 2016, some satellite data was amenable to the claim that global warming had magically ended in 1998.

That was always nonsense, relying on a cherry-picked starting year and ignoring ongoing corrections to the complex satellite measurements. That said, it’s certainly fair to compare the satellite records to climate models to see what we can learn.

In the early 2000s, a succession of La Niña years temporarily kept global temperatures slightly below the long-term trend. Climate model projections prepared for the 2013 Intergovernmental Panel on Climate Change report, which relied on projected future forcing scenarios from around 2000 to 2005 onward, ran slightly warmer than the satellite data. Is that just because of the La Niña conditions in the Pacific Ocean, or are the models off in some way?

To find out, a group of researchers led by Ben Santer of Lawrence Livermore National Laboratory conducted a careful analysis of those models and several satellite records of the temperature in the upper troposphere, about 5 to 10 kilometers (3 to 6 miles) above the surface.

Projections

To understand this story, we need to understand how the IPCC’s climate model projections come about. Many different climate models were run under different climate “forcing” scenarios. Forcings are things like greenhouse gas emissions, volcanic eruptions, and solar activity, all of which affect the total amount of energy entering and leaving the Earth’s climate system. Each individual model run contains simulated natural variability from year to year, but averaging all the model runs together leaves a smooth line. Even if some simulations include a strong El Niño in 2016, for example, they are canceled out in that year by other simulations with La Niña conditions.

The smooth line is the “signal”, but at the cost of getting rid of the “noise” that makes each year unique. In other words, it is the long-term trend rather than a prediction of the exact global temperature for a given year.
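A minimal sketch of that averaging idea, using made-up numbers rather than any real model output: each synthetic “run” shares the same long-term trend but has its own random year-to-year variability, and averaging many runs shrinks the noise while leaving the trend.

```python
# Illustrative sketch only (not the paper's code or data): why averaging
# many model runs produces a smooth "signal" line.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2017)
trend = 0.02 * (years - years[0])   # hypothetical steady warming trend
n_runs = 100                        # hypothetical ensemble size

# Each run: the shared trend plus its own simulated natural variability
runs = trend + rng.normal(0.0, 0.15, size=(n_runs, years.size))

ensemble_mean = runs.mean(axis=0)   # the smooth multi-run average

print("scatter of one run about the trend:",
      round(float((runs[0] - trend).std()), 3))
print("scatter of the ensemble mean:",
      round(float((ensemble_mean - trend).std()), 3))  # ~10x smaller
```

With 100 runs, the year-to-year wiggles in the average are roughly a tenth as large as in any single run, which is why the ensemble mean looks like a smooth line rather than a realistic single year-by-year history.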

Satellite temperature data in red, compared to the smooth average of model simulations in black.

Since real-world temperatures behave like a single model simulation, no one should expect the wobbly data to perfectly match a smooth projection line (not that this stops some people). So to compare the models with real-world data, you have to average over several years. To avoid cherry-picking start and end points, the researchers used moving windows of multiple lengths: they calculated 10-year trends, sliding the window forward one year at a time, and repeated the exercise for window lengths up to 18 years.
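Here is a rough sketch of what that kind of overlapping-window trend comparison could look like. The series below are placeholders, not the satellite or model data, and the exact method in the paper may differ.

```python
# Sketch of overlapping-window trend comparison (assumed approach, not the
# paper's code): compute least-squares trends for every 10- to 18-year
# window, for both a stand-in "observed" series and a "model mean" series.
import numpy as np

def rolling_trends(years, temps, window):
    """Least-squares trend (degrees C per year) for each overlapping window."""
    trends = []
    for start in range(len(years) - window + 1):
        slope = np.polyfit(years[start:start + window],
                           temps[start:start + window], 1)[0]
        trends.append(slope)
    return np.array(trends)

# Hypothetical annual series standing in for observations and the model mean
years = np.arange(1979, 2017)
obs = 0.015 * (years - 1979) + np.random.default_rng(1).normal(0, 0.1, years.size)
model_mean = 0.02 * (years - 1979)

for window in range(10, 19):
    gap = rolling_trends(years, model_mean, window) - rolling_trends(years, obs, window)
    print(f"{window}-yr windows: mean model-minus-obs trend = {gap.mean():+.3f} C/yr")
```

Because every possible start year contributes a window, no single cherry-picked interval can dominate the comparison.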

That comparison showed no real difference in the 1980s and 1990s, but it did show a small but significant gap in the 2000s. To find out why, the researchers looked carefully at that gap. If natural variability were to blame, the satellite data should jump above the model line about as often as it falls below it. There’s also no reason for the second half of the time period to look different from the first half.

But the model mean is warmer than the satellite data much more often than it is cooler, suggesting the difference isn’t random, and this is only true for the second half of the record. The researchers calculate that there’s less than a 10 percent chance the mismatch is due to natural variability alone. In other words, the model average has been running a little warm in recent years.
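One simple way to formalize “warmer much more often than cooler” is a binomial sign test, sketched below. The paper’s actual statistics are more involved, and the 14-out-of-18 split used here is invented purely for illustration.

```python
# Sketch of a binomial sign test (an illustrative stand-in, not the paper's
# method): how likely is it that the model mean sits above the observations
# this often if above/below were a 50/50 coin flip each year?
from math import comb

def sign_test_p(n_above, n_total):
    """Two-sided probability of a split at least this lopsided under a 50/50 null."""
    k = max(n_above, n_total - n_above)
    p_one_sided = sum(comb(n_total, i) for i in range(k, n_total + 1)) / 2 ** n_total
    return min(1.0, 2 * p_one_sided)

# Hypothetical example: model mean warmer than observations in 14 of 18 years
print(round(sign_test_p(14, 18), 3))  # ~0.03: unlikely to be pure chance
```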

Blame the volcanoes and the sun

Does this mean the problem is that the models are too sensitive to CO2, simulating too much warming in the upper troposphere? The researchers find no evidence for this. First of all, overly sensitive models would have run hot in the 1980s and 1990s as well. Large eruptions like El Chichón in 1982 and Pinatubo in 1991 also provide their own tests: the models didn’t overreact to the short-term cooling impact of those events. Finally, if you break down the individual climate models, the mismatch is no greater in the models with stronger CO2 sensitivity.

Instead, the researchers say the best explanation is a bit of natural variability plus a problem we already know about: some of the natural forcing scenarios (volcanoes, solar) used in those simulations turned out to be wrong. Volcanoes have put a bit more sunlight-reflecting sulfur into the atmosphere than the scenarios assumed, and solar activity has been a bit quieter; neither of these could have been predicted in advance. Put the two together and you get a slight cooling effect relative to the model projections for this period. Correcting these inputs has been shown to improve the match to surface temperatures, and the same would be true for the upper troposphere tracked by the satellite measurements.

The gray band is the surface temperature projection by models used for the latest IPCC report. The dotted lines show how that would change with accurate volcanic and solar inputs. Observed surface temperature data are shown in colors.

So ultimately the discrepancy for these upper-air temperatures is real, but the reason for it is pretty mundane, and it doesn’t change the amount of warming we expect to see as greenhouse gas emissions continue. As the researchers gently remind us, “While the scientific debate about the cause of short-term differences between modeled and observed rates of warming is likely to continue, this debate does not cast doubt on the reality of long-term anthropogenic warming.”

Nature Geoscience, 2017. DOI: 10.1038/ngeo2973 (About DOIs).

