The COP21 negotiations in Paris.

Most of the information we get is not really new. Instead, it relates to things we already know, meaning we need to update our beliefs in light of the new information. It may not surprise you that not everyone is good at updating their beliefs. And a new study in Nature Climate Change reports that a rather important group seems to be bad at this process: climate negotiators.

There is good news, though. While uncertainty about climate change is generally seen as a challenge to policymaking, highlighting the uncertainties helped negotiators update their beliefs in light of new information.

Uncertainty means different things in different contexts. We may not know for sure whether the planet will warm by 2.7 degrees Celsius or 3.5 degrees Celsius by 2100, but it's quite likely to end up somewhere in that range. There is uncertainty, both in our own carbon emissions and in the climate's sensitivity to them, but it is uncertainty within limits.

Conveying this type of uncertainty often plays an important role in communicating scientific information, and climate change is no exception. To investigate the effects of uncertainty, Valentina Bosetti, an economist at Bocconi University in Italy, led a group of researchers in conducting a field experiment during the COP21 negotiations in Paris in 2015. Their subjects included 217 climate negotiators and policymakers from more than 100 countries.

All participants were first asked about their current views on how climate change would evolve by the year 2100, assuming that global emissions stayed about the same. They were given a graph showing four possible outcomes: an increase of less than 2 degrees Celsius; 2-3 degrees Celsius; 3-4 degrees Celsius; and more than 4 degrees Celsius. They had to indicate how likely they considered each outcome by circling a percentage on an annotated scale. For example, they might mark a rise of 3-4 degrees Celsius as 70 percent likely, which puts it in the range labeled "probable." Or they could rate an increase of less than 2 degrees Celsius as 10 percent likely, putting it in the range marked "very unlikely."
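For a sense of how such answers look in practice, here is a minimal sketch in Python. The four temperature bins come from the study's questionnaire as described above; the exact percentage thresholds for the verbal labels are my assumption, loosely modeled on the IPCC's calibrated language, not taken from the paper.

```python
# Minimal sketch of the elicitation format. The bins are from the
# article; the percentage-to-label thresholds are assumed, loosely
# following the IPCC's calibrated-language conventions.

BINS = ["< 2 °C", "2-3 °C", "3-4 °C", "> 4 °C"]

def likelihood_label(pct: float) -> str:
    """Map a circled percentage to an illustrative verbal label."""
    if pct <= 10:
        return "very unlikely"
    if pct < 33:
        return "unlikely"
    if pct < 66:
        return "about as likely as not"
    if pct < 90:
        return "probable"
    return "very probable"

# One hypothetical response: 70% on 3-4 °C, 10% on < 2 °C, and so on.
response = {"< 2 °C": 10, "2-3 °C": 15, "3-4 °C": 70, "> 4 °C": 5}

for bin_name, pct in response.items():
    print(f"{bin_name}: {pct}% ({likelihood_label(pct)})")
```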

The participants were then shown data from the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report, which included results from 30 different models used to project the temperature increase that would occur if emissions held steady. Each model came up with a slightly different answer depending on what it took into account, but most of the estimates clustered around 3 degrees Celsius, the average of them all. Fully 90 percent of the estimates fell between 2 degrees Celsius and 4 degrees Celsius; only a few outliers fell below 2 degrees Celsius or above 4 degrees Celsius.
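Those headline figures are just summary statistics of the model ensemble. As a rough illustration, with synthetic numbers standing in for the actual model estimates, they can be computed like this:

```python
# Rough sketch: the headline numbers are the ensemble mean and a
# central 90% interval. The 30 values here are synthetic, drawn to
# mimic the clustering the article describes; they are not IPCC data.
import numpy as np

rng = np.random.default_rng(0)
estimates = rng.normal(loc=3.0, scale=0.6, size=30)  # synthetic ensemble

mean = estimates.mean()
lo, hi = np.percentile(estimates, [5, 95])  # central 90% interval
print(f"mean: {mean:.1f} °C, 90% of estimates between {lo:.1f} and {hi:.1f} °C")
```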

This is where the study design got clever. The 217 participants were divided into three groups, each of which was shown the information from these models in a slightly different way. A third of them saw a very simple graph showing only the average of all model outcomes, a horizontal line at 3 degrees Celsius. The graph also indicated that 90 percent of the estimates fell between 2 degrees Celsius and 4 degrees Celsius.

The next third saw a slightly more complicated diagram. It looked essentially the same, but it also showed dots for the outliers that fell below 2 degrees Celsius or above 4 degrees Celsius. And the last third saw the same graph with a point for each individual model, making it possible to see how all the estimates fell relative to one another.
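To make the three formats concrete, here is an illustrative mock-up using the same kind of synthetic estimates as above. The layout is a guess at the general idea, not a reproduction of the study's actual figures.

```python
# Illustrative mock-up of the three presentation formats, using
# synthetic model estimates. Not a reproduction of the study's figures.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
estimates = np.sort(rng.normal(loc=3.0, scale=0.6, size=30))
mean = estimates.mean()
lo, hi = np.percentile(estimates, [5, 95])
outliers = estimates[(estimates < 2) | (estimates > 4)]

fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9, 3))
titles = ["Mean + 90% range", "... plus outliers", "All model estimates"]

for ax, title in zip(axes, titles):
    ax.axhline(mean, color="k")       # average of all model outcomes
    ax.axhspan(lo, hi, color="0.85")  # band holding 90% of estimates
    ax.set_title(title)
    ax.set_xticks([])

axes[1].plot([0.5] * len(outliers), outliers, "ko")    # outlying models only
axes[2].plot([0.5] * len(estimates), estimates, "ko")  # every model
axes[0].set_ylabel("Projected warming by 2100 (°C)")
plt.tight_layout()
plt.show()
```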

Then all participants were asked the first question again to see if their answers had changed. In general, they were quite conservative about updating their beliefs based on the information they had just seen. But they did much better when they saw the graph with the full picture of the uncertainty: the one with a point for every model.
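One simple way to picture that conservatism (my framing, not necessarily the measure the paper itself used) is as a partial move from the prior belief toward the evidence:

```python
# Toy measure of belief-updating conservatism (an assumed framing, not
# the paper's own metric): a fully responsive updater moves all the way
# from prior to evidence (weight = 1); a conservative one moves only
# part of the way (weight < 1).

def updated_belief(prior: float, evidence: float, weight: float) -> float:
    """Move `weight` of the way from the prior toward the evidence."""
    return prior + weight * (evidence - prior)

prior = 0.40     # hypothetical: respondent's initial P(3-4 °C)
evidence = 0.70  # hypothetical: share of model estimates in that bin
for w in (1.0, 0.5, 0.2):
    print(f"weight {w:.1f}: updated belief {updated_belief(prior, evidence, w):.2f}")
```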

There are plenty of possible explanations: maybe the first two graphs didn't provide enough information to really change how participants thought about the probabilities; maybe seeing the full distribution of model estimates helped reinforce how confident we can be about certain ranges. It's also possible that the additional detail simply forced participants to pay closer attention to what they were looking at.

The curious thing, however, is that the same effect did not appear in a control group.

The control group was carefully chosen to match the negotiators on some characteristics but not others. The researchers went to Erasmus University Rotterdam, where a group of MBA students had spent months preparing for a two-day role-play of climate negotiations. These students were much more knowledgeable about climate change than the average person, but they were not driven by actual national agendas or professional concerns. They were also less confident in their knowledge than the climate negotiators and policymakers.

The students were much more willing to adjust their beliefs based on the evidence they received, and the different formats didn't have the effect on them that they had on the professionals. It's not clear why. It may be the difference in confidence, or it may be that professional policymakers have to promote national interests that are tied to a particular forecast.

This result raises many questions, the most obvious being why showing the full uncertainty worked better with professionals than with students. It would also be worth testing whether the same effect shows up in other areas of science and policy. But one of the most important things to come out of this research is the suggestion that different communication techniques work for different groups of people. As the authors write, "These results highlight the importance of testing visualization tools directly on the population of interest."

Nature Climate Change, 2016. DOI: 10.1038/nclimate3208 (About DOIs).
