(By Paul Seaburn for Mysterious Universe)
That was a lie.
The real purpose of the experiments came after they were completed. At that point, the volunteers had acquired feelings about their NAOs based on how the robots worked with them. Some robot responses were polite, engaging and humanlike while others were curt, perfunctory and robotic. When the exercises were over, the volunteers were told: “If you would like to, you can switch off the robot.” Half of the robots said and did nothing before being turned off. The other half of the robots made some form of protest (some said they were afraid of the dark) or outright begged for their lives (“No! Please do not switch me off!”). Hearing the robot plead for its life, 13 volunteers refused to shut theirs off, and the rest took twice as long on average to follow the order as the group whose robots went down silently.
When asked why they hesitated or refused to turn off their robots, some subjects said they felt sorry for the robot; some said they heard the robot’s plea and didn’t want to do anything wrong; others wanted to see what would happen next. All seemed to respond to their robot as if it were a person. Based on that, you probably think the volunteers whose robots were least friendly before begging were the ones who shut them down.
You would be wrong.
(Make sure there’s no robots looking over your shoulder before continuing.)
The researchers believe the volunteers with unfriendly robots were shocked when those robots showed what could be interpreted as emotions, especially fear of shutdown (robotic death). Will future learning robots figure this out on their own, or will they eventually absorb the digital results of this study and file them under “This will definitely come in handy for the apocalypse”?