
Autonomous vehicles are likely to become available in the near future. A new article published in Science raises a classic ethical question about how to program these cars: Should a car sacrifice its own passenger if it can save the lives of many pedestrians?
The article found that participants generally do want other people’s cars to be programmed this way, but they don’t want their own cars to work that way. It’s a potentially lethal form of “Not-In-My-Backyard” for our more automated future.
For this article, researchers conducted six online surveys between June and November 2015, recruiting participants through the Amazon Mechanical Turk platform. Each survey included approximately 200 to 450 participants.
Public good versus individual behavior
In the first study, 76 percent of participants said it was morally correct for a self-driving car to sacrifice one passenger to save 10 pedestrians. That’s an overwhelming preference for cars programmed in a utilitarian way, reducing overall casualties in an accident. These participants were not concerned that such programming would be too utilitarian.
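To make the utilitarian option concrete, here is a minimal sketch of what a casualty-minimizing decision rule could look like. This is purely illustrative: the Outcome type, the utilitarian_choice function, and the scenario encoding are assumptions made for the example, not code from the study or from any real vehicle software.

```python
# Illustrative sketch of a purely utilitarian decision rule: pick the
# outcome with the fewest total casualties, no matter who they are.
# The Outcome type and the scenario below are hypothetical examples,
# not code from the study or from any real vehicle.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths


def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Return the outcome that minimizes total casualties."""
    return min(outcomes, key=lambda o: o.total_deaths)


# The first survey's scenario: sacrifice one passenger or hit ten pedestrians.
choice = utilitarian_choice([
    Outcome("swerve into a barrier", passenger_deaths=1, pedestrian_deaths=0),
    Outcome("stay on course", passenger_deaths=0, pedestrian_deaths=10),
])
print(choice.description)  # -> swerve into a barrier
```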
The second study showed that participants did not think a car should have to sacrifice its passenger if doing so would save only one pedestrian; their willingness to sacrifice the passenger grew steadily as the number of pedestrians who could be saved grew. Participants supported the sacrifice even when asked to imagine themselves as the passenger. The effect was continuous, with no threshold at which people suddenly became willing to sacrifice the passenger.
However, this result changed when participants were asked to imagine a family member in the car. In that scenario, participants were not only less likely to support sacrificing the passenger, their support was also significantly lower than their support for self-sacrifice.
When the situation was made more concrete, attitudes changed. Respondents to a third survey said they would be less likely to buy a self-driving car if it was programmed to sacrifice its passenger to minimize casualties. Asked to consider buying a self-sacrificing car, respondents put the chance that they would buy one at 50 percent. Asked about a car that would sacrifice a family member to reduce the overall casualty count, that figure fell to 19 percent.
These participants still believed that programming cars to reduce overall casualties was the most moral choice; they just didn’t want this kind of car for themselves or their families.
Survey four used a more complex, algorithm-based ranking system to assess the same discrepancy between moral belief and buying behavior. It again showed that participants supported the existence of utilitarian, self-sacrificing, self-driving cars for the good of society. But like the participants in survey three, they didn’t want to own one themselves.
These studies demonstrate a well-known social phenomenon: people tend to favor the scenario that leads to the best overall societal outcome, but they often don’t want to apply that decision to themselves. When these conflicts between the public interest and individual choice arise, legislation can be used to ensure individual compliance; a classic example of this type of regulation is mandatory vaccination for schoolchildren.
An autonomous trolley
The remaining surveys looked at participants’ attitudes toward legislation that would mandate utilitarian self-sacrificing programming for autonomous cars. While participants still agreed that cars programmed to reduce total casualties through self-sacrifice were morally correct, they were reluctant to accept a law mandating this programming. They were also less likely to buy one of these cars if regulations enforced this programming.
This article shows that self-driving cars may be the latest incarnation of a classic ethical dilemma called the “trolley problem.” In it, people are asked whether they would divert a runaway trolley so that it hits a single person rather than plowing into an entire group; it’s a way of finding out what people are willing to sacrifice for the common good.
Research has shown that 80 to 90 percent of people typically choose to sacrifice one person to save many others. But the self-driving car version of the problem raises the stakes by forcing people to imagine that the person being sacrificed is themselves or a relative. At that point, the decision-making process begins to change.
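One way to picture why the answer flips is to imagine respondents implicitly giving extra weight to the lives of people they know. The sketch below extends the earlier utilitarian rule with a hypothetical passenger weight; the specific weight values are invented for illustration and are not estimates from the study.

```python
# Hypothetical illustration: the same casualty-minimizing rule, but with
# passenger lives given extra weight. The weights are invented numbers,
# not quantities measured in the study.
def weighted_cost(passenger_deaths: int, pedestrian_deaths: int,
                  passenger_weight: float) -> float:
    """Total 'cost' of an outcome when passenger lives carry extra weight."""
    return passenger_weight * passenger_deaths + pedestrian_deaths


# With no extra weight, sacrificing one passenger beats ten pedestrian
# deaths. If imagining a relative in the car effectively weights the
# passenger's life 15x, the preference flips to staying on course.
for weight in (1.0, 15.0):
    swerve = weighted_cost(1, 0, passenger_weight=weight)
    stay = weighted_cost(0, 10, passenger_weight=weight)
    print(f"weight={weight}: {'swerve' if swerve < stay else 'stay on course'}")
```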
As self-driving cars approach market availability, ethical questions about how these machines are programmed become increasingly relevant. The authors of this study suggest that three major groups will influence programming decisions: manufacturers, consumers, and legislators.
In the absence of legislation, manufacturers are free to make certain ethical decisions about how their vehicles are programmed, and consumers can influence that programming through their purchasing preferences. If these two forces alone are allowed to guide the market, self-sacrificing vehicles seem unlikely to be widely adopted, as consumers do not appear to want their own families put at risk for a greater societal benefit (even though they agree that this type of programming is morally correct). If demand is low, manufacturers may not make this type of car at all, which could significantly reduce the overall benefit of self-driving cars.
At the same time, the studies suggest that these cars may not see common use if this programming is mandated by law. This presents a tricky cost-benefit analysis that legislators and manufacturers will need to consider carefully as they prepare to make these cars available to consumers.
Science, 2016. DOI: 10.1126/science.aaf2654 (About DOIs).
Frame image by Ford