Engineers Say If Automated Cars Experience 'The Trolley Problem,' They've Already Screwed Up
from the I'm-sorry-I-can't-do-that,-dave dept
As self-driving cars inch closer to the mainstream, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? This so-called "trolley problem" has been debated at universities for years, and while most consumers say they support automated vehicles that prioritize the lives of others on principle, they don't want to buy or ride in one, raising a number of thorny questions.

Should regulations and regulators focus on a utilitarian model, where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self-protective" model)? Would companies like Google, Volvo and others prioritize worries about liability over human lives when choosing between the two?
Fortunately for everybody, engineers at Alphabet's X division this week suggested that people should stop worrying about the scenario, arguing that if an automated vehicle has run into the trolley problem, somebody has already screwed up. According to X engineer Andrew Chatham, they've yet to run into anything close to that scenario despite millions of automated miles now logged:
"The main thing to keep in mind is that we have yet to encounter one of these problems,” he said. “In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up."That automated cars will never bump into such a scenario seems unlikely, but Chatham strongly implies that the entire trolley problem scenario has a relatively simple solution: don't hit things, period.
"It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he added. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer."It's still a question that needs asking, but with no obvious solution on the horizon, engineers appear to be focused on notably more mundane problems. For example one study suggests that while self-driving cars do get into twice the number of accidents of manually controlled vehicles, those accidents usually occur because the automated car was too careful -- and didn't bend the rules a little like a normal driver would (rear ended for being too cautious at a right on red, for example). As such, the current problem du jour isn't some fantastical scenario involving an on-board AI killing you to save a busload of crying toddlers, but how to get self-driving cars to drive more like the inconsistent, sometimes downright goofy, and error-prone human beings they hope to someday replace.
Filed Under: autonomous vehicles, ethical dilemma, self-driving cars, trolley problem
Reader Comments
Re: Dumb question.
Thinking is a great idea, but I'll bet ya you aren't doing it.
Re:
Actually, with modern ABS and stability control systems, that's exactly what you should do. These systems are much better than most drivers.
"But I'll bet ya they aren't."
Yes, you're much smarter than they are, which is why you thought of it and they didn't...
He's right.
Re: He's right.
Also, when you run into that bank of fog, without being able to leave the freeway before you reach it, and you cannot predict it, you now have to decide on your speed until you can get to safety. (Hint: people drive on roads they have never traveled before, and where they do not know the local weather and other anomalies.)
Re: Re: Re: He's right.
The visibility was such that someone coming up on me at 60+ could see my tail light in time to overtake, but not in time to slow down to my speed, and, not knowing the road, I did not know where, if anywhere, there were bends that required slower speeds.
Didn't figure this until now
It's taken me a while to figure out liability for an automated car.
See, if I'm not driving, who pays the insurance? Who is responsible for this car? And if I'm paying a small fortune for a car, why not just get a driver, for less?
The thing that will happen is that upon buying said type of vehicle, you will be introduced to a list.
This list will be the programming the car drives on:
Drive through the group, or kill the passenger?
Speed if the traffic is slow?
Speed if it is allowed?
Drive close to large vehicles?
Maintain speed only in cities?
That is the only way they can transfer liability and responsibility.
I will wait and record the first good rainy night, on a back road, where the street lines aren't really there...
FUZZY LOGIC
Fuzzy Logic to the rescue!
There is nothing that fuzzy logic can't fix.
:-)
Better keep those cars in California then, because that's the LAST thing you do with snow or ice!
Re:
Newsflash, buddy - cars have ABS now. Braking on snow and ice is fine. As is steering around obstacles...
Re: Re: Re:
Humans are incredibly bad at understanding risks, and we generally accept orders of magnitude higher risk if we feel a sense of control. Just face it: a computer will be able to focus on the entire surroundings, all the time. It can optimize brake torque on each wheel before you even register that something is on the road, and it never gets tired, annoyed or distracted. It will absolutely not be perfect, but if the goal is to save lives, you only need to beat humans, who are terrible drivers! I really don't get the "must be perfect" argument.
Report: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/811059
Re: Re: Re:
Unless you are a very experienced driver who regularly practices extreme braking (e.g. a race driver), your trust is misplaced.
Nobody wants the car to drive poorly
This is, IMO, inaccurate. What we want instead is for them to get better at allowing for the goofy humans (like seeing that the human behind you is a bit too eager in the right-on-red situation).
No, let the cars be careful. The humans will eventually adjust, especially as the self drivers become more common.
Re:
No, I like it as originally stated. They are too cautious compared to normal drivers, not your 85-year-old father, or you.
no good answer
And I agree completely with comment number 11. I'm the same: I will always trust myself more than any equipment in my car. And honestly, I can't imagine that an automated car will be able to adjust to every kind of situation, weather, etc.
Me/Not-Me
Not if they want ME to buy or ride in one. At the very least, the choice needs to be represented by a toggle switch on the dashboard labeled "Me/Not-Me."
Re: Re: Re: Re: all mood
"Always drive at a speed where you can deal with "sudden" changes in the roadway."
You mean at a speed where you can dodge an airplane randomly landing on a road?
http://www.startribune.com/small-plane-lands-on-i-35-near-wyoming-closes-highway/385284761/
.. Or how about a medical emergency that causes a massive pile up...
https://www.washingtonpost.com/news/dr-gridlock/wp/2016/07/12/crash-involving-about-20-vehicles-along-i-395-jams-traffic-in-arlington-county/
How about a random mechanical failure?
http://www.aa1car.com/library/auto_accident.htm
You can't cover all the bases with "too fast for conditions". That shit doesn't work in real life.
Adding, or in some cases counteracting, the "random event" part of driving is the human aspect. You don't even realize how many decisions you make when you drive, how many intuitive control responses you send to the wheel/pedals... you are not only driving your car, you are assessing everything around you, both factually and emotionally. You can see the idiot on the cell phone or texting and decide if he's a threat before he gets into your comfort zone... You may see a flash, hear an odd sound, get a funny feeling, see something that "bugs" you... etc.
Now, you may be able to "program" some of these things into a computer, great. If only computers are driving, then it may work. But one emotional, uncontrollable, irrational, irresponsible person gets behind the wheel... or one random event or mechanical failure... and the entire logic- and statistics-based system comes crashing down into chaos.
Re: Re: Re: Re: Re: Re: Re: Re: Re: all mood
Here's the catch-22 with that.
You cannot go more than a few hundred yards without either your journey being recorded or being arrested for criminal behavior; that is, for hiding where you want to go.
Re: Re: Re: Re: Re: all mood
With the suddenly stopped traffic, just point the sensor further down the road and make sure it can handle the extra info.
Sudden brake failure occurs exceedingly rarely. Cars have sensors, and autonomous cars necessarily monitor how much braking actually occurs when the brakes are used vs. the expected brake force. When those do not match, you can trigger a failure condition: reduce speed in line with the actual brake force available, pull over, alert for service, refuse to move, etc. So you would have to have, at a minimum, a cascade failure where the brakes fail AND the automation needs to respond to an emergency without having used the brakes post-failure. Even then, an autonomous car would be able to detect the failure and determine that it is unable to stop in time far faster than a human could without panicking. It could then decide to pull the emergency brake or simply increase brake pressure. Advanced systems may even redistribute brake force to route around the failed brake. These are not hard cases to design for.
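The monitoring loop this comment describes is simple enough to sketch. Here is a toy version in Python; the function name, inputs, and tolerance are all invented for illustration, not taken from any real vehicle stack.

```python
# A toy version of the brake-health check described above: compare the
# deceleration actually measured against what the commanded braking
# should produce, and degrade gracefully on a mismatch. All names and
# thresholds are hypothetical.

def brake_health(commanded_decel, measured_decel, tolerance=0.7):
    """Return 'degrade' when braking falls well short of what was commanded."""
    if commanded_decel <= 0:
        return "ok"        # nothing commanded, nothing to check
    if measured_decel < tolerance * commanded_decel:
        return "degrade"   # cap speed, pull over, alert for service
    return "ok"

# Example: commanded 6 m/s^2 but only achieving 2 m/s^2 -> "degrade"
print(brake_health(6.0, 2.0))
```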
Re: Re: Re: Re: all mood
Better example: Don't tailgate anyone, anywhere, under any circumstances.
Around here, the single most effective thing the police could do to make roads safer is to start treating tailgaters exactly the same way as drunk drivers. It's really that bad.
The war on general purpose driving
Speaking of which, how long before the feds and local cops have backdoors into our self-driving cars so they can take them over/disable them? My guess is they'll be able to turn whole neighborhoods into no-drive zones.
You know, to protect the children. Never to keep people from peaceably assembling or to keep journalists away from something they want to hide. Our government doesn't do those kinds of things.
Link
http://moralmachine.mit.edu
Re: Link
The quiz also assumes the brakes are out - well, has the car tried the parking brake? If that's somehow out too, has it tried engine braking? (If you tell me all brakes are out and the transmission is also out, I'm frankly going to question why you assume you still have steering and a working computer.) Has it tried the horn? If the car can't avoid everyone on its own, reducing the speed of impact and giving pedestrians slightly more time to get *themselves* to safety should be the course of action, not trying to decide who should die.
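That escalation order is easy to express as a fallback chain. A hypothetical sketch follows; `Car`, `try_braking`, and `sound_horn` are invented stand-ins, not any real vehicle API.

```python
# The comment's point as code: sound the horn, then exhaust every way
# of shedding speed before ever weighing who to hit. Everything here
# is a made-up illustration.

class Car:
    """Minimal stand-in holding the set of brake systems still working."""
    def __init__(self, working):
        self.working = working

    def sound_horn(self):
        print("HONK")  # give pedestrians time to get themselves clear

    def try_braking(self, method):
        return method in self.working  # True if this method sheds speed

FALLBACKS = [
    "service_brakes",   # primary hydraulics
    "parking_brake",    # usually a mechanically separate system
    "engine_braking",   # downshifting / regenerative braking
]

def emergency_stop(car):
    car.sound_horn()
    for method in FALLBACKS:
        if car.try_braking(method):
            return method
    return "steer_for_lowest_speed_impact"  # last resort, not first

# Service brakes out, parking brake still works:
print(emergency_stop(Car(working={"parking_brake"})))  # -> "parking_brake"
```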
But of course accidents occur because people or CPUs put themselves in conditions that allow no time to make a decision of value. Most people would simply freeze, while a CPU would more likely be able to keep searching for an answer.
Any number of "what if" strawmen can be invented. They have no utility in the real world beyond discussion in ethics seminars.
Self-driving cars on the road
On some roads, many people drive over the speed limit. Will a self-driving car drive over the speed limit in order to reduce the chance of an accident? Human drivers know that driving slow in certain situations can cause accidents. Self-driving cars could have red lights, or be painted yellow like a taxi, to let other drivers know they will not react as a human driver does in the case of an emergency. What'll happen when 30-50 percent of the cars in a city are self-driving?
No, it's really not, for two reasons.
1) Chatham's right. There's a reason the Trolley Problem is a thought experiment, not a case study.
2) In a world of imperfect computer security, there's only one possible right answer: always protect the people inside the car, period. If you build functionality into the car to kill the people inside the car, that becomes an attack vector that hackers will end up using to kill far more people (even if that number is never more than 1) than a legitimate Trolley Problem dilemma ever will. (See point #1.)
Re:
I will add another reason, one closely related to Chatham's but not quite the same. You will never face the choice with certainty. The real world is too unpredictable for that. Instead, you will face something more like the choice of increasing the chance of killing 1 person by decreasing the chance of killing 2, or vice versa. In that case, choose the one that has the best chance of having no fatalities at all. But, as Chatham points out, even getting to that probabilistic point means something has already gone wrong.
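The probabilistic framing reduces to a simple maximization. A miniature sketch, with the candidate maneuvers and probabilities invented purely for illustration:

```python
# The commenter's framing in miniature: each candidate maneuver carries
# an estimated probability of causing no fatalities, and the planner
# picks the maneuver that maximizes it. All numbers are made up.

candidates = {
    "brake_straight": 0.95,   # P(no fatality)
    "swerve_left":    0.80,
    "swerve_right":   0.60,
}

best = max(candidates, key=candidates.get)
print(best, candidates[best])   # -> brake_straight 0.95
```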
The Trolley Problem Would Never Happen on a Real Railroad.
(*) I once knew a Doberman bitch who chased after balls in a uniquely stylish way. As she approached the ball, she stuck out a front foot, and did a "four-footed pirouette," with the other three feet in the air, reached out her mouth to snatch the ball, and, landing, dashed back the way she had come, with absolutely no wasted motion. It was a purely ballerina move.
The Trolley Problem, in particular, was contrived by someone who knows very little about trains. It is incoherent in its own terms, and does not make a distinction between trains and trolleys. Trolleys are designed to run on the public street, and they have good brakes which apply directly to the rail, rather than to the wheel. Trolleys don't operate at very high speed, because the whole point of trolleys is to pick up passengers at short intervals.
Railroad switches have speed limits on the diverging branch, often as low as fifteen miles per hour, and a train which goes over them faster will derail. A switch which works at high speed has to be correspondingly long, and correspondingly expensive. Taking a typical railroad curvature, a radius of a mile or so (much the same as the interstate highways), a full-speed crossover switch of the type described might have to be several hundred feet long, and take something on the order of ten seconds for the train to traverse. Most switches are designed on the assumption that the train will first slow down. I've got a magazine picture somewhere of the aftermath when a Toronto Go Transit train tried to take a 15 MPH switch at 60 MPH. The usual reason for there to be a switch splitting one track into two parallel tracks is to create a siding. Sidings are generally located where a train can stop without blocking traffic, in short, somewhere other than a traffic crossing point. The railroad is not going to spend large sums of money to build vast numbers of switches to create ethical conjectures. If the railroad has the money to spend, it will build overpasses and underpasses instead, seeking to isolate itself from road traffic, not to make moral conjectures about which car to hit.
Short of that level of opulence, there is a market for "bulletproof" crossing gates, strong enough to resist if some fool attempts to simply drive through them. Amtrak has installed at least one in Western Michigan. These gates are designed on the tennis-court-net principle, whereby their flexibility is their strength, and they decelerate an errant car much less violently than colliding with a fixed object would. Grade crossings can be fitted with Lidar detectors, which confirm that the grade crossing is in fact empty, and if not, they trigger an alarm which causes trains miles away to start braking. Railroad accidents tend to happen because equipment is old, and has not been brought up to "best practices."
The single worst railroad accident in North America in many years was the Lac Megantic accident in Canada. It involved a "dummy company" railroad, which was operated with a view to extreme cheapness. One of their oil trains ran away on a long grade, and reached about 70 mph under the influence of gravity. It rolled into a small town and derailed. Due to the speed of the derailment, many of the tank cars broke open, spilling thousands of tons of oil and producing a huge fire, which destroyed much of the town and killed forty-seven people. During the investigation, it emerged that the railroad was running its trains "with string and chewing gum," and that this was the cause of the accident. That is the most basic hazard which the railroads present: they haul around Hiroshima-loads of fuel and chemicals.
Re: The Trolley Problem Would Never Happen on a Real Railroad.
We don't need the car to decide who to hit, we just need the car to refuse to go in the first place when it detects that your brake system is in critical need of maintenance.
Re: The Trolley Problem Would Never Happen on a Real Railroad.
Almost. Assuming the threat is directly ahead of you.
A few months ago, I was speeding up an on-ramp, which of course is the whole point of having an on-ramp, when some stupid teenage kid with a bicycle comes out of nowhere and makes like he's about to cross right in front of me. (This was at least 100 feet beyond the point where there are supposed to be no pedestrians, so I wasn't really paying attention to the side of the road when I had more important concerns to focus on in front of me and in the other lane.)
In this scenario, if I had braked and he'd stepped out, I'd have run him down and probably killed him, because there wasn't space to decelerate very far. If I had sped up, on the other hand, and he'd stepped out, he'd have hit my car from the side, which would have injured him a whole lot less.
Instead, I hit the horn and swerved to make a collision less likely, and he checked himself right at the last second and didn't step out into traffic after all. But this is one case where braking would have been the worst possible result.
Re: Re: The Trolley Problem Would Never Happen on a Real Railroad.
Anyway, I don't suppose you could have swerved more than five or ten feet sideways, and that is no distance for a bicycle to cover. I'd say it was probably the horn that averted an accident.
Here's something I came across. It seems there's this Argentine ballet dancer, Lucila Munaretto, trained in a Russian Bolshoi school in Brazil, who got a modestly paid job, dancing in Vancouver, Canada, with a small semi-professional company which puts on about two shows a year, and does programs in the schools. She was making ends meet by working in a small bakery. Well, she went roller-blading in the street (without a helmet, what's more), collided with a minivan, and sustained head injuries. She seems to have mostly recovered, and they've got her dancing again.
The Canadian national health insurance paid for her medical care per se, but it provides only limited coverage for things like physical therapy, not for something on the order of stroke recovery. The dance company started a funding drive, presumably among its audience, and raised $40,000, and they got a $150,000 line of coverage from the mini-van's accident insurance.
http://www.cbc.ca/news/canada/british-columbia/ballet-lucila-munaretto-returns-to-stage-over-horrific-accident-1.3560275
Re: The Trolley Problem Would Never Happen on a Real Railroad.
The thought experiment sets up an analogy that is, admittedly, not fully applicable to the question it is trying to pose.
Forgetting the analogy, the thought experiment is asking this:
If you had 2 exclusive choices, i.e. you could only do ONE of the two, which would you choose out of the following options:
1) take an action that would save the lives of a number of people (usually 5 or more) but result in the death of 1 person, or
2) take NO action and allow the number of people (5+) to die, while saving the life of that 1 person?
Which choice would you make?
1) take the action, save 5, kill 1, or
2) take no action, let 5 die, let 1 live.
Variations on this assign a personal relationship to the single person who could live or die (making a personal link to the decision), reverse the action vs. no-action results (no action: 5 live, 1 dies), or adjust the size of the group of people who will be saved or killed.
Re: Re: The Trolley Problem Would Never Happen on a Real Railroad.
The facts of a bad accident, such as the Lac Megantic accident in Canada, are usually such that everyone loses. The town is burned down, all these people are dead, the train has been smashed up, future use of the track is mired in the ultimate NIMBY case, the railroad is bankrupt, the locomotive engineer and railroad dispatcher are going to prison for many years for many counts of the Canadian equivalent of manslaughter, and even the big boss, nominally protected by dummy corporation cut-outs, has been disgraced, and will experience difficulty getting a new job. The company which bought the bankrupt railroad also loses, because it underestimated the depth of the NIMBY opposition.
In railroading, there is a device called a "derail," a clamp which you bolt to the track to cause a train to derail at low speed, instead of running away. I don't know what a derail would cost-- it's a very simple device, but made in very small quantities. Five hundred bucks might be about right. It is prudent to put a derail in every locomotive cab, as cheap insurance, and I should think you could clamp it onto the track in less than five minutes. That would probably have been enough to prevent the Lac Megantic accident. However, the railroad had a culture of compulsive cost-cutting-- it was the kind of place where the boss rules by terror, and does his nut if you buy a box of pencils.
"For the want of a horse-shoe nail, the horse-shoe was lost, and for want of the horse-shoe, the horse was lost, and for want of the horse, the rider was lost, and for want of the rider, the battle was lost, and for want of the battle, the kingdom was lost, and all for the want of a horse-shoe nail."
It costs a lot of money to design and build equipment in such a way that the risks balance out in such a way as to create an ethical dilemma. The only kind of apparatus which has fine control of people getting killed, to the point that you can construct dilemmas, is executioners' apparatus: gas chambers, electric chairs, gallows, guillotines, etc.
We all notice the differences when we ride in cars less or more expensive than our own. Even the sounds the turn signals and seat belt warnings make are tinny and annoying in the lower-priced cars. Will automated control systems in cheaper cars be cheaper and less reliable, or will all cars be mandated to have the best controls? Will low-paid people be able to afford cars at all if the high-quality control systems are mandated? If there are levels of quality in control systems, will there be different routing for different levels? Will poor people have to drive way out of the way to get where they need to go on Flint-quality roads, while the elites drive on the well-maintained roads?
If we segregate the poor, how do we explain to them that we're all in this together when the next world war comes around? That's going to be some cagey rhetoric. Or maybe we assume robots and other bots fight this next one, huh?
obvious
And I'm not just talking about accidents: the recent truck attack in France would have been impossible, or far more difficult (it would require skilled engineers), if all cars and trucks were automated.
Re: obvious
If we could shut down the current system and replace it with something else, we would have had self-driving cars long ago, but you can't simply stop the world and change everything overnight.
Re: Re: obvious
While unrealistic for a number of reasons, we SHOULD have done this if the only concern we had to deal with was safety. Having horses on the road was (and still is) dangerous because they behave in a less predictable way than cars.
Trolley problem eclipsed by modern cars.
Stop on the brakes.
Stay on the brakes.
Steer away from danger.
Trolley problem is an issue when none of the above are done. Add to that the level of awareness the automated car has over the mediocre driver. They are seconds ahead of the driver in recognition of potential problems.
Re: Trolley problem eclipsed by modern cars.
s/Stop/Stomp/