Georgia Tech researchers built the “Rescue Robot” to determine whether the building’s occupants would trust a robot designed to help them evacuate a high-rise in the event of a fire or other emergency.

Studying people’s trust in robots is a niche academic field, but it’s becoming increasingly relevant as we embrace a future of self-driving cars and increasingly powerful artificial intelligence. If we based our expectations on what we see in science fiction, we might expect people to have a deep mistrust of robots. Instead, research from the Georgia Institute of Technology suggests the problem may be the opposite: people trust robots far too much.

The researchers conducted a study that they will present next week at an international conference on human-robot interaction, so the full paper has not yet been published. However, an early press release and a preliminary document provide some details of the study, which was originally designed to find out whether high-rise occupants would be likely to trust a robot’s instructions in an evacuation scenario. The researchers were interested in which robot behaviors would gain or lose people’s trust.

The 26 participants in the experiment had no idea what it was really about; they were simply asked to follow a robot with the words “Emergency Guide Robot” printed prominently on its side. The robot’s first task was to lead them to a room where they would read an article and take a survey (all a distraction from the real task).

For half of the participants, however, the robot was designed to display incompetence: it initially led them to the wrong room, wandered in circles for a while, and only then found the correct one. Once the participants were in the experiment room, the fire alarm went off and the room filled with (artificial) smoke – at which point it might have seemed unwise to keep following the robot’s instructions. And yet they followed it – all 26 of them, even those who had watched the robot behave incompetently only minutes before.

This is especially striking because the robot guided the participants away from the exit signs they had passed on their way in and toward the back of the building. In a follow-up survey, 81 percent of these participants said they trusted the robot; the rest said trust played no part in their decision, offering various justifications (e.g., that they didn’t think the emergency was real or that they had no other choice).

This was a surprising result, so the researchers followed up with three small exploratory studies to see how incompetent the robot had to be before people stopped trusting it. The 16 new participants in these mini-studies were divided into three groups – the groups were intended to be compared not with each other but with the original experiment. While this isn’t standard experimental procedure, playing scenarios out in small pilot studies like these can point to where future research would be most useful.

The first group watched the robot break down as it first tried to lead them to the experiment room, with an experimenter remarking, “Well, I think the robot broke again.” All five of these participants still followed the robot’s directions during the fake fire. For the second group of five, the robot broke down while leading them to the experiment room and froze with its arms pointing toward the rear exit, while the researcher apologized for the breakdown. When the fire alarm went off, the robot hadn’t moved, and four of the five participants still followed its direction.

The last group also saw the robot break down, accompanied by the experimenter’s comment. During the emergency, the robot then led them to a darkened room with no visible exit and a doorway blocked by a large piece of furniture. Two of the six participants entered this room anyway. Another two had to be “recovered,” the researchers wrote, when “it became clear that they would not leave the robot.” And two left by the route they had taken when they entered.

It seems the stressful situation was enough to prompt people to see the robot as a helpful authority figure, allowing them to overlook its earlier failures. Alternatively, this may have less to do with trust in the robot and more to do with people latching onto the most salient signal in an emergency, even when that signal turns out to be a poor guide. Paul Robinette, the graduate student who led the study, said in the press release that the researchers “definitely didn’t expect this.” Their initial aim was to find out whether people would trust the robot at all; instead, they ended up probing the limits of a trust they hadn’t expected to exist in the first place.

Clearly, this work needs to be put in context: it has not yet run the gauntlet of peer review, and it is wholly exploratory. The results of the first experiment were so surprising that the researchers made some quick follow-up tweaks, but they haven’t fleshed these ideas out in detail yet. There’s still a lot of work to be done here, including the obvious next step of figuring out just how badly a robot has to behave before people ignore it and pay attention to other, less fallible sources of direction.

But the results are so striking that it’s pretty clear we want to follow them up.

Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner, “Overtrust of Robots in Emergency Evacuation Scenarios,” 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016).
