People Support Ethical Automated Cars That Prioritize The Lives Of Others -- Unless They're Riding In One
from the I'm-sorry-I-can't-do-that,-Dave dept
As self-driving cars have quickly shifted from the realm of science fiction to the real world, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? For example, should your automated vehicle be programmed to take your life in instances where onboard computers realize the alternative is the death of dozens of bus-riding school children? Of course, the debate technically isn't new; researchers at places like the University of Alabama at Birmingham have been contemplating "the trolley problem" for some time:

"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"

It's not an easy question to answer, and it obviously becomes thornier once you begin pondering what regulations are needed to govern the interconnected smart cars and smart cities of tomorrow. Should regulations focus on a utilitarian model, where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self-protective" model)? And would companies like Google, Volvo and others be more or less likely to support the former or the latter for liability reasons?
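To make the difference between the two models concrete, here's a minimal, purely illustrative sketch in Python. Nothing here reflects how any real AV is actually programmed; the Maneuver fields, function names, and casualty estimates are all invented for the example:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    """One hypothetical option in an unavoidable-collision scenario."""
    name: str
    occupant_deaths: float   # expected fatalities inside the vehicle
    bystander_deaths: float  # expected fatalities outside the vehicle

def choose_utilitarian(options: List[Maneuver]) -> Maneuver:
    # Utilitarian model: minimize total expected deaths,
    # regardless of whether they're occupants or bystanders.
    return min(options, key=lambda m: m.occupant_deaths + m.bystander_deaths)

def choose_self_protective(options: List[Maneuver]) -> Maneuver:
    # Self-protective model: minimize occupant deaths first,
    # breaking ties by total expected deaths.
    return min(options, key=lambda m: (m.occupant_deaths,
                                       m.occupant_deaths + m.bystander_deaths))

# The trolley-style scenario above: staying the course kills a bus
# full of children, swerving kills the vehicle's lone occupant.
options = [
    Maneuver("stay_on_course", occupant_deaths=0.0, bystander_deaths=20.0),
    Maneuver("swerve",         occupant_deaths=1.0, bystander_deaths=0.0),
]

print(choose_utilitarian(options).name)      # swerve
print(choose_self_protective(options).name)  # stay_on_course
```

The entire regulatory debate effectively comes down to which of those two ranking functions gets baked into the vehicle.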
Not too surprisingly, people often support the utilitarian "greater good" model -- unless it's their own life that's at stake. A new joint study by the Toulouse School of Economics, the University of Oregon, and MIT has found that while people generally praise the utilitarian model when asked, they'd be less likely to buy such an automated vehicle or to support regulations mandating that automated vehicles (AVs) be programmed in such a fashion:
"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."To further clarify, the surveys found that if both types of vehicles were on the market, most people surveyed would prefer you drive the utilitarian vehicle, while they continue driving self-protective models, suggesting the latter might sell better:
"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."This social dilemma sits at the root of designing and programming ethical autonomous machines. And while companies like Google are also weighing these considerations, if utilitarian regulations mean less profits and flat sales, it seems obvious which path the AV industry will prefer. That said, once you begin building smart cities where automation is embedded in every process from parking to routine delivery, would maximizing the safety of the greatest number of human lives take regulatory priority anyway? What would be the human cost in prioritizing one model over the other?
Granted, all of this is getting well ahead of ourselves. We'll also have to figure out how to adapt traffic law enforcement for the automated age, have broader conversations about whether consumers have the right to tinker with the cars they own, and resolve our apparent inability to adhere to even basic security standards when designing such "smart" vehicles. These are all questions we have significantly less time to answer than most people think.
Filed Under: ai, autonomous cars, ethical choices, trolley problem