Sat. Oct 1st, 2022
Ocean temperatures between 20 and 30°C before Hurricane Florence made landfall in North Carolina. (The dotted white line shows the threshold of temperatures warm enough to support a hurricane, around 28°C.)

It was the first scientific attempt of its kind to assess the impact of climate change on a hurricane before the storm had even made landfall. And the results (which we covered at the time) were remarkable, suggesting that in 2018 Hurricane Florence would drop 50 percent more rain and be roughly 80 kilometers wider due to a warmer world.

More rainfall would hardly come as a surprise. Studies of many previous tropical cyclones have shown that a warmer atmosphere, which holds more moisture, is expected to increase a storm’s precipitation totals. But 50 percent would be exceptional, as previous studies had found increases of between 6 and 38 percent, depending on the storm.

The scientists couldn’t explain that high number at the time, given that they had only a few days to run the forecast simulations and get them out the door. With the advantage of time, the scientists have now published a review of their pioneering effort. Unfortunately, it shows that mistakes were made.

The initial work was based on 10 simulations each of two versions of the world: the actual conditions at the time, and a counterfactual world in which the warming trend had been removed (in this case, a 0.75 °C decrease in ocean surface temperatures in the area). The difference between these “actual” and “counterfactual” runs was the influence attributed to climate change, with the spread among the 10 runs providing error bars.
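To make the arithmetic concrete, here’s a minimal sketch in Python of how such a conditional attribution estimate works. The rainfall values and the use of a simple standard error are illustrative assumptions, not the authors’ actual data or statistical method:

```python
import numpy as np

# Hypothetical maximum-rainfall totals (cm) from 10 "actual" runs and
# 10 "counterfactual" (warming-removed) runs -- made-up numbers.
actual = np.array([84.1, 86.0, 85.2, 83.9, 86.5, 84.8, 85.7, 85.0, 86.2, 84.6])
counterfactual = np.array([80.3, 81.8, 80.9, 79.6, 82.1, 80.5, 81.4, 80.0, 81.9, 80.7])

# The influence attributed to climate change is the difference in ensemble means.
influence = actual.mean() - counterfactual.mean()

# A rough error bar from the spread across runs (standard error of the difference).
stderr = np.sqrt(actual.var(ddof=1) / actual.size
                 + counterfactual.var(ddof=1) / counterfactual.size)

print(f"attributed rainfall change: {influence:.1f} ± {1.96 * stderr:.1f} cm")
```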

To revisit this, the researchers repeated the experiment, but with 100 simulations of each scenario. Sets of repeated simulations, called “ensembles,” are generated by varying some of the uncertain parameters in the model. The more combinations of parameters you run, the more you fill in the range of possible outcomes. This firms up the error bars and ensures that you don’t miss any part of what the model predicts.
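As a schematic example of the ensemble mechanics (with a stand-in toy function, since the real study runs a full numerical weather model), varying the uncertain inputs and rerunning fills out the range of outcomes:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_storm_model(sst_perturbation, initial_moisture):
    """Stand-in for a weather model: returns a rainfall total (cm).

    Purely illustrative -- the real experiments integrate a full
    forecast model for each ensemble member.
    """
    return 82.0 + 4.0 * sst_perturbation + 3.0 * initial_moisture

def run_ensemble(n_members):
    # Each member gets its own draw of the uncertain parameters.
    sst = rng.normal(0.0, 0.3, n_members)       # SST uncertainty (°C)
    moisture = rng.normal(0.0, 0.5, n_members)  # initial-condition uncertainty
    return toy_storm_model(sst, moisture)

for n in (10, 100):
    outcomes = run_ensemble(n)
    print(f"{n:3d} members: range {outcomes.min():.1f}-{outcomes.max():.1f} cm")
```

With 100 members, the sampled range of outcomes is noticeably fuller than with 10, which is the point of enlarging the ensemble.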

This allowed the researchers to compare the forecast simulations with what actually happened when Hurricane Florence dumped torrential rain on the Carolinas in September 2018. The “actual” forecast simulations did their job, matching the timing and location of landfall. The precipitation forecast was also good, with a maximum precipitation total averaging 85.3 centimeters (33.6 inches), compared to the 82.3 centimeters (32.4 inches) measured in the real world.

However, the researchers discovered a problem with the way their “counterfactual” simulations were originally set up. An error caused the ocean surface off the Carolinas to cool by an additional 1-3 °C beyond the intended 0.75 °C. That made for a much bigger contrast with today’s world, and it turns out that’s why the numbers they released in 2018 seemed so extreme.
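The paper’s code-level details aren’t spelled out here, but the general shape of such a setup error is easy to illustrate. Below is a hypothetical sketch: the counterfactual sea-surface temperatures are supposed to be a uniform 0.75 °C cooler, and an unintended extra regional adjustment over-cools part of the domain by a few degrees. The field values and the patch location are invented for illustration:

```python
import numpy as np

# Hypothetical SST field (°C) over a small model domain -- illustrative values.
sst_actual = np.full((4, 4), 29.0)

TARGET_COOLING = 0.75  # intended uniform warming-trend removal (°C)

# Intended counterfactual: subtract the warming trend everywhere.
sst_counterfactual = sst_actual - TARGET_COOLING

# The kind of mistake described: an extra adjustment gets applied on top of
# the intended cooling, over-cooling a region by an additional 1-3 °C.
extra_regional_cooling = np.zeros_like(sst_actual)
extra_regional_cooling[:2, :2] = 2.0  # hypothetical patch off the Carolinas
sst_buggy = sst_counterfactual - extra_regional_cooling

print("intended cooling:", (sst_actual - sst_counterfactual).max(), "°C")
print("buggy cooling:   ", (sst_actual - sst_buggy).max(), "°C")
```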

After fixing that flaw, their 100 “counterfactual” simulations show a much smaller impact of climate change. Instead of some 50 percent more rainfall being the result of a warmer world, the models actually show about five percent (±5). And instead of a storm that was 80 kilometers wider because of climate change, it was about nine kilometers (±6) wider.

Clear “Oops!” aside, there’s one more thing the researchers learned from this analysis. To test the impact of using just 10 simulations instead of 100, they ran the numbers on many random sets of 10. While the means were generally similar, the error bars on a set of 10 are much wider.

For example, the 95 percent confidence range for the extra storm size due to climate change is 3.1 to 15.3 kilometers when using all 100 simulations. Using only 10 simulations, that range grows to -8.6 to 28.5 kilometers (that is, some sets of 10 would predict the storm was actually smaller). So in this case, not having enough time to run more simulations means you’re stuck with uncomfortably large error bars.
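You can reproduce the flavor of that comparison with a quick resampling exercise. This sketch uses synthetic storm-size changes rather than the study’s data; it just shows why 10-member confidence intervals come out so much wider than 100-member ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-simulation estimates of the storm-size change (km) due to
# climate change -- 100 values centered near 9 km, purely illustrative.
size_change = rng.normal(9.0, 15.0, 100)

def ci95(samples):
    """95 percent confidence interval for the mean, from bootstrap
    resampled means."""
    means = [rng.choice(samples, size=samples.size, replace=True).mean()
             for _ in range(10_000)]
    return np.percentile(means, [2.5, 97.5])

lo, hi = ci95(size_change)
print(f"100 members: {lo:.1f} to {hi:.1f} km")

# The same interval using only a random subset of 10 simulations.
subset = rng.choice(size_change, size=10, replace=False)
lo10, hi10 = ci95(subset)
print(f" 10 members: {lo10:.1f} to {hi10:.1f} km")
```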

The researchers point out that every situation is a little different, and it’s not as simple as saying that X simulations are needed. More case studies may be needed to work out a recommended approach for these rapid assessments.

They also put a somewhat surprising happy face on their results. The researchers write:

We showed that a forecast-based attribution analysis using a conditional attribution framework makes it possible to communicate credibly based on sound scientific evidence. Expanding the ensemble size and analysis after the event showed the approach to be reasonable, albeit with some quantitative modification of the best estimates and a more rigorous evaluation of the significance of the analysis.

After all, the big mistake here was avoidable, though one made more likely by the rush. And while the error bars will be large, the method can at least say something meaningful. Whether it’s worth getting a less reliable answer faster is another question.

Science Advances, 2020. DOI: 10.1126/sciadv.aaw9253 (About DOIs).

Listing image by NASA EO

By akfire1
