Engineers Say If Automated Cars Experience 'The Trolley Problem,' They've Already Screwed Up
from the I'm-sorry-I-can't-do-that,-dave dept
As self-driving cars inch closer to the mainstream, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? This so-called "trolley problem" has been debated at universities for years, and while most consumers say they support automated vehicles that prioritize the lives of others on principle, they don't want to buy or ride in one, raising a number of thorny questions.

Should regulations and regulators push a utilitarian model, in which the vehicle is programmed to put the good of the overall public above that of the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self-protective" model)? And would companies like Google, Volvo and others put liability worries ahead of human lives when choosing between the two?
Fortunately for everybody, engineers at Alphabet's X division this week suggested that people should stop worrying about the scenario, arguing that if an automated vehicle has run into the trolley problem, somebody has already screwed up. According to X engineer Andrew Chatham, they've yet to run into anything close to that scenario despite millions of automated miles now logged:
"The main thing to keep in mind is that we have yet to encounter one of these problems,” he said. “In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up."That automated cars will never bump into such a scenario seems unlikely, but Chatham strongly implies that the entire trolley problem scenario has a relatively simple solution: don't hit things, period.
"It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he added. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer."It's still a question that needs asking, but with no obvious solution on the horizon, engineers appear to be focused on notably more mundane problems. For example one study suggests that while self-driving cars do get into twice the number of accidents of manually controlled vehicles, those accidents usually occur because the automated car was too careful -- and didn't bend the rules a little like a normal driver would (rear ended for being too cautious at a right on red, for example). As such, the current problem du jour isn't some fantastical scenario involving an on-board AI killing you to save a busload of crying toddlers, but how to get self-driving cars to drive more like the inconsistent, sometimes downright goofy, and error-prone human beings they hope to someday replace.
Filed Under: autonomous vehicles, ethical dilemma, self-driving cars, trolley problem