
Using social media is a double-edged sword. On the one hand, it can connect us with far more people than we would otherwise interact with, which is great. But our choices about who we communicate with (often reinforced by a platform’s algorithms) narrow many of our social networks into echo chambers of people who think like us. In that mode, social interaction can harden tribal attitudes toward people outside our groups rather than break down barriers.
For researchers studying partisan divisions on various topics, there’s a lot to untangle here. When people work together in a group to make sense of information, everyone generally benefits. But the opposite may be true if we instead retreat to our mental fortresses, man the catapults, and prepare the boiling oil. So for a real problem like the partisan divide over climate change, how can we tell whether social networks help or hurt?
Douglas Guilbeault, Joshua Becker, and Damon Centola of the University of Pennsylvania designed an experiment to test whether simple cues can set off those mental defense mechanisms. Instead of gathering people in a room, where participants could size each other up in ways the researchers couldn’t control, they built a web interface used by 2,400 people recruited through Amazon’s Mechanical Turk service.
Sowing Social Networks
The participants were divided into groups of 40. Within those groups, the researchers seeded social networks with equal numbers of liberals and conservatives in each one; the control groups were drawn from the same balanced mix of political affiliations.
Everyone was shown a graph of NASA data on Arctic sea ice coverage from 1978 to 2013 and asked to forecast the value in 2025. The long-term trend is clearly downward, but the final few data points tick upward, an invitation to what’s called “endpoint bias”: focusing on the last few data points rather than the long-term trend.
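To make that concrete, here’s a quick sketch of how fitting a trend to only the last few points can flip a forecast from decline to expansion. The numbers are invented stand-ins, not the actual NASA series, and the size of the uptick and the five-point window are assumptions chosen purely for illustration.

```python
from statistics import linear_regression  # requires Python 3.10+

# Invented stand-in series, NOT the real NASA sea ice data: a steady
# long-term decline with a small uptick in the final couple of years.
years = list(range(1978, 2014))
ice = [10.0 - 0.05 * (y - 1978) for y in years]  # downward trend
ice[-2:] = [8.45, 8.55]                          # made-up recent uptick

def forecast(xs, ys, target_year):
    """Least-squares linear fit, extrapolated to target_year."""
    fit = linear_regression(xs, ys)
    return fit.slope * target_year + fit.intercept

full_record = forecast(years, ice, 2025)              # uses the whole trend
endpoint_only = forecast(years[-5:], ice[-5:], 2025)  # only the last 5 points

print(f"2013 value:                   {ice[-1]:.2f}")
print(f"2025 forecast (full record):  {full_record:.2f}  -> continued decline")
print(f"2025 forecast (last 5 years): {endpoint_only:.2f}  -> apparent expansion")
```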
After giving an initial forecast, everyone had two chances to revise their answer before it became final, and this is where the four groups differed. In the control group, there was no interaction at all between participants, an antisocial network. In the other three groups, individuals were arranged in a sort of grid, and each person was shown the average estimate of their four “neighbors” in the network while considering their revision. The idea is that if you answered that sea ice would expand by 2025 but your neighbors estimated a decrease, you might reconsider.
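For the curious, here’s a minimal sketch of that neighbor-averaging idea, not the researchers’ actual platform or data. The grid size, the spread of starting guesses, and the rule that each person moves halfway toward their neighbors’ average are all assumptions made for illustration; real participants chose for themselves how much to revise.

```python
import random

GRID = 6           # 6 x 6 = 36 people (the real groups had 40)
TRUE_TREND = -1.0  # stand-in for the correct "declining" answer
ROUNDS = 2         # two chances to revise, as in the experiment

def neighbors(r, c):
    """The four lattice neighbors, wrapping around at the edges."""
    return [((r - 1) % GRID, c), ((r + 1) % GRID, c),
            (r, (c - 1) % GRID), (r, (c + 1) % GRID)]

# Initial forecasts: noisy and, on average, too optimistic about sea ice,
# to mimic endpoint bias pulling some guesses toward "expansion."
random.seed(0)
estimates = {(r, c): random.gauss(-0.4, 1.5)
             for r in range(GRID) for c in range(GRID)}

def mean_error(est):
    return sum(abs(v - TRUE_TREND) for v in est.values()) / len(est)

print(f"average error before any revision: {mean_error(estimates):.2f}")

for _ in range(ROUNDS):
    revised = {}
    for (r, c), own in estimates.items():
        neighbor_avg = sum(estimates[n] for n in neighbors(r, c)) / 4
        # Move halfway toward the neighbors' average (an arbitrary weight
        # chosen for this sketch).
        revised[(r, c)] = 0.5 * own + 0.5 * neighbor_avg
    estimates = revised
    print(f"average error after a revision round: {mean_error(estimates):.2f}")
```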
One of the three experimental groups received only the average estimate of their network neighbors and nothing else. A second group was also shown their neighbors’ usernames and political affiliations; a conservative participant might think differently about the neighbors’ estimate if those neighbors are all liberals, for example. The third group saw no personal information about their neighbors, but a pair of logos appeared next to the estimate: one for the Democratic Party and one for the Republican Party. This simple visual cue has previously been shown to be enough to raise the tribal alarm, effectively reminding you that the information at hand can be politically contentious.

In the control group, both conservatives and liberals improved their answers slightly, moving toward the correct downward trend, simply because they had a few chances to second-guess themselves. But the group that got their neighbors’ estimates without knowing their politics improved significantly by the third and final attempt. And while conservatives were significantly more likely to get the answer wrong on their first try, that partisan gap disappeared by the end. Simply put, comparing notes helped more people get the right answer, regardless of their political druthers.
Partisan Influences
The other two groups did not do so well. Telling participants whether their neighbors were conservative or liberal kept the partisan divide alive; conservatives improved only slightly more than their counterparts in the control group. But surprisingly, the simple act of slapping donkey and elephant logos onto the screen had the strongest negative effect: the results of both conservatives and liberals were indistinguishable from the control group. Comparing notes accomplished nothing.

It’s not immediately clear why party logos were more harmful than party name tags. (Maybe knowing that your neighbors were three liberals and one conservative isn’t as bad as worrying that all of them might be liberals?) But the researchers say the general conclusion of the experiment is clear: bipartisan networks can break down barriers, but any reminder that an issue is “political” can spoil the whole thing.
That makes sense given the importance of cultural identity. The stakes aren’t very high when you’re asked to interpret a graph you may never have seen before, but the stakes of being seen as a traitor by your friends and family are much higher. Of course, the graph-interpretation task in this experiment isn’t a perfect stand-in for every conversation about climate science, but it does resemble interactions with specific pieces of scientific information, like encountering a NASA tweet.
Tribal partisanship in the US is a bit like a sleeping bear: you can get some business done if you tiptoe around it, but you’d better not poke it. Likewise, you might be open to learning something new from your neighbor on social media, but your guard may go up if their avatar is a smiling picture of the wrong political candidate staring you in the face.
PNAS, 2018. DOI: 10.1073/pnas.1722664115 (About DOIs).