People Support Ethical Automated Cars That Prioritize The Lives Of Others -- Unless They're Riding In One
from the I'm-sorry-I-can't-do-that,-Dave dept
As self-driving cars have quickly shifted from the realm of science fiction to the real world, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? For example, should your automated vehicle be programmed to take your life in instances where its onboard computers realize the alternative is the death of dozens of bus-riding school children? Of course the debate technically isn't new; researchers at places like the University of Alabama at Birmingham have been contemplating "the trolley problem" for some time:

"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"

It's not an easy question to answer, and it obviously becomes thornier once you begin pondering what regulations are needed to govern the interconnected smart cars and smart cities of tomorrow. Should regulations focus on a utilitarian model, where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self-protective" model)? And would companies like Google, Volvo and others be more or less likely to support the former or the latter for liability reasons?
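To make the distinction concrete, here is a minimal sketch -- purely illustrative, not anything Google, Volvo or any regulator has published -- of the two policies reduced to scoring functions over candidate maneuvers, with invented action names and casualty estimates:

# Two hypothetical decision policies, reduced to scoring functions.
# Each candidate action is (name, estimated occupant deaths, estimated other deaths).

def utilitarian_score(occupant_deaths, other_deaths):
    # Minimize total deaths; occupants get no special weight.
    return -(occupant_deaths + other_deaths)

def self_protective_score(occupant_deaths, other_deaths):
    # Occupant safety dominates; other casualties only break ties.
    return -(occupant_deaths * 1_000_000 + other_deaths)

def choose(actions, score):
    return max(actions, key=lambda a: score(a[1], a[2]))[0]

actions = [("stay the course", 0, 5), ("swerve into barrier", 1, 0)]
print(choose(actions, utilitarian_score))      # swerve into barrier
print(choose(actions, self_protective_score))  # stay the course

The same crash scenario produces opposite choices depending on which score is mandated; that is the entire policy fight in miniature.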
Not too surprisingly, people often support the utilitarian "greater good" model -- unless it's their life that's at stake. A new joint study by the Toulouse School of Economics, the University of Oregon and MIT has found that while people generally praise the utilitarian model when asked, they'd be less likely to buy such an automated vehicle or support regulations mandating that automated vehicles (AVs) be programmed in such a fashion:
"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."To further clarify, the surveys found that if both types of vehicles were on the market, most people surveyed would prefer you drive the utilitarian vehicle, while they continue driving self-protective models, suggesting the latter might sell better:
"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."This social dilemma sits at the root of designing and programming ethical autonomous machines. And while companies like Google are also weighing these considerations, if utilitarian regulations mean less profits and flat sales, it seems obvious which path the AV industry will prefer. That said, once you begin building smart cities where automation is embedded in every process from parking to routine delivery, would maximizing the safety of the greatest number of human lives take regulatory priority anyway? What would be the human cost in prioritizing one model over the other?
Granted, this is getting well ahead of ourselves. We'll also have to figure out how to adapt traffic law enforcement for the automated age, have broader conversations about whether consumers have the right to tinker with the cars they own, and resolve our apparent inability to adhere to even basic security standards when designing such "smart" vehicles. These are all questions we have significantly less time to answer than most people think.
Filed Under: ai, autonomous cars, ethical choices, trolley problem
Reader Comments
And then...
It doesn't matter in the grand scheme of things -- people will die no matter how "safe" we try to make things, and we just have to accept it. That's a reality of mortality that each generation seems less willing to accept.
Re: And then...
i was thinking of a somewhat parallel alternative future: the car software is hacked such that it overrides whatever the factory set, and has 'maximum' protection for the occupants...
i mean, i'm certain the software could never be hacked or overridden or anything...
no bugs or nuthin'...
cars don't ever have power glitches...
...and they'll be flying by 2017 ! ! !
Re: Re: Re: And then...
If the car likes the owner the car will tend to favor the owner over someone else. If the car doesn't like the owner it may favor the life of someone else over that of the owner. Car enthusiasts would be less likely to die because the car won't let them.
Re: And then...
Which raises a further question. Will self-driving vehicles always choose to kill those they are carrying, or will they be allowed to "merely" kill the fewest number of people? Of course, to do that they will need to know (a) how many humans they are carrying, and (b) how many people they are about to run down.
And who do aggrieved families sue for compensation? The insurance company for the vehicle's owner, presumably -- but of course that assumes such people will BE insured once self-driving cars and buses come along. Can the owner of a vehicle be held liable if they weren't actually driving it? (If memory serves, they are usually not liable at the moment; instead, the person who actually drove the vehicle is the liable one. Will it become possible in the future to sue the AI or other piece of software which actually drove the vehicle?)
What about the software company which wrote the software which made the decision to kill those people? Can they be held liable for those deaths? Or will the law grant them immunity, much as the law grants immunity to doctors who carry out abortions and soldiers who kill enemies during wartime?
Damn Easy!
Oh yes it is!
Save your child, that's the answer! The number of people who have to die is not relevant to the problem. It's a classic diversionary tactic to make people think about unimportant shit!
While no human is more valuable than another, there is a fundamental dissonance you have to wrap your head around: sacrificing your child so the greater good can be served.
We pontificate on these bullshit scenarios while simultaneously giving money to monopolies like the NFL and FIFA, and to gambling. Humans are intrinsically corrupt as hell! We all pretend we are somehow better than this and concoct these stupid scenarios as self-masturbation, while playing out some of the worst of them in our very lives without batting an eye.
As a parent, you damn sure better pick your child over the others, because if you are willing to sacrifice your child, who else will they have to protect them?
I have no children, and even I know the answer to this one. EASY!
Re: Re: Damn Easy!
Fuck I hope you never have children!
Re: Re: Re: Damn Easy!
I said no such thing, it must be those demons in your head whispering those awful things.
Clearly it is the NFL, FIFA, and gambling that are responsible for all the evil in the world today; just look at all those corrupt masturbating people -- you can tell who they are from their lack of blinking.
Is your lack of children due to your masturbation?
Re: Damn Easy!
Does it occur to you that a person may have more than one child and may have to choose between them?
There are real-life examples of this -- adrift at sea, boat overturned, you can only pick one child of two to try to hold up; the other must drown, and they are twins. Which one? And in the recent tsunami in Asia, a woman held two children but could not hold both until rescue, both children old enough to know their likely fate if let go (swept away and probably drowned). What did she do? What about the woman trapped by flood water inside a building, with only one small window accessible and water pouring in? Should she push her 5-year-old or her 1-year-old out of the window when she doesn't know whether there is anyone outside close enough to help grab them? Which child is more likely to survive, and which should she choose to keep with her, not knowing if she can find any way out?
Say one child is in an out-of-control car heading towards a single parent's other child. Does s/he try to throw her/himself at one child to push them out of the way, potentially leaving them orphans, or does s/he potentially let one die?
You think it's easy? Of course in the tsunami many people did risk their lives to save someone else's child.
What a darn-fool thing to say. Your mother probably had nurses, midwives and doctors to help get you safely into the world. You weren't their child -- why on earth did they do what they did, putting themselves to all that effort to help you while you were helpless?
Re: Re: Damn Easy!
There must be an app for that; perhaps the automated vehicle could bring it up for you just prior to the car wreck.
Re: Re: Damn Easy!
Get OVER yourself, fruitloop!
Will you save your child or the bus full of other children? That is the question; talking about shit not associated with the problem is for children and people with small minds!
Re: Re: Re: Damn Easy!
He's riffing on the idea that people will say:
"Think of the children!" (aka: your children)
as a justification for their decision to put their lives over others.
That being said, if someone put the blame for the deaths on the manufacturers, do you think they would switch right quick to a "greater good" model, or do you think they would say, "pffft, the increase in people buying cars for their own safety will allow us to avoid the drawbacks of greater damage"?
Re: Re: Re: Re: Damn Easy!
This is satire?
He also started changing the subject in a way that does not fit the problem. We are talking about self-driving cars, not boats or situations where the decision is between two of your own children.
I am directly saying that people expanding or changing the problem need to shut up and just directly answer the question. Playing the what-if game never ends, which is why any AI we create needs to do only one thing: protect the occupant above all else, with avoiding collateral damage as a secondary objective.
There is not enough processing power on the planet to do much more, and no... we cannot think of the children, because it is a pointless effort. Yes, I realize many stupid people will attempt this anyway, because they lack both knowledge and wisdom and instead want to play around with a bunch of what-ifs like it's a silly game. We are literally discussing life and death here, and a lot of these posts show just how immature humanity is.
Re: Re: Re: Re: Re: Damn Easy!
and the people's response is, "No - wat ya gonna do bout it?"
Morality setting
So the car does its best to save everybody but when it comes down to a choice the selection is already made by the driver.
Re: Morality setting
Government controls the setting, and if there is an unavoidable accident the cars prioritize the death of the people the government wants dead first!
It has to be self-protective, nothing else!
Existing federal safety standards already impose this kind of thing on drivers.
Then the government started forcing fuel-economy requirements, and with them came stupid ideas such as "crumple zones." Crumple zones don't really protect the people inside the car that crumples. The real explanation is that NHTSA had decided, in secret, that it was a bad idea to let any driver have a vehicle so solid that he can be confident a collision won't cost him anything. So ever since, they've been forcing us to accept cars made mostly of plastic and other crap instead of solid, heavy metal.
It's time that the public began to fight back by retrofitting cars to be safe again after we buy them, or by keeping old cars in commission or both. Especially if it will also allow us to avoid having black boxes logging our actions for government to snoop on.
It's absolutely rightful for a driver who has the right of way to be capable of bullying those who might violate it.
Re: Existing federal safety standards already impose this kind of thing on drivers.
1. Tank, no seat belt: Car impacts object and stops with little to no damage. Passenger continues to move forward until hitting a solid surface in car (steering wheel or windshield). Passenger decelerates over a 1/2" distance.
2. Tank, seat belt: Car impacts object and stops with little to no damage: Passenger is stopped by seat belt. Passenger decelerates over a 1" distance.
3. Crumple zone, no seat belt: Car impacts object and decelerates over a 3' distance. Unfortunately, passenger continues to move forward until impacting stationary car. Passenger then decelerates over 1/2" distance.
4. Crumple zone, seat belt: Car impacts object and decelerates over a 3' distance. Passenger effectively becomes part of the car body due to seat belt and in turn is decelerated over a 3' 1" distance.
Now assuming in all cases, the car was initially moving at 60 mph, what were the G-Forces experienced by each passenger? Using D=1/2AT^2 and V=AT, then solving for A, you get A = V^2/2D.
So the person who decelerates over 1/2" experiences about 2900 Gs of force. Not good at all.
The person who decelerates over 1" experiences about 1450 Gs. Still not good, but better.
The person who decelerates over 3' 1" experiences about 39 Gs. Violent, but quite survivable.
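For anyone who wants to check the arithmetic, here's the same back-of-envelope calculation in a few lines of Python, using the A = V^2/(2D) relation above (60 mph = 88 ft/s):

G = 32.2   # ft/s^2, one standard gravity
V = 88.0   # 60 mph expressed in ft/s

def g_force(stop_distance_ft):
    # Constant deceleration from V to rest over the stopping distance.
    return V ** 2 / (2 * stop_distance_ft) / G

for label, d in [("1/2 inch", 0.5 / 12), ("1 inch", 1.0 / 12), ("3 ft 1 in", 37.0 / 12)]:
    print(label, round(g_force(d)), "g")
# prints roughly: 1/2 inch 2886 g, 1 inch 1443 g, 3 ft 1 in 39 g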
If an automated car had a choice of killing a Republican or Democrat, which one should it kill?
Pro-Lifer or Pro-Choicer?
Christian or Muslim?
Where does it end?
Re: Re:
The interesting question is whether we allow the company building the machines, the government regulating the machines, or the owners of the machines to set the logic.
Currently, in a manually driven vehicle, the driver gets to choose whether to run down the "Republican or Democrat" -- but once you code the choices into the autonomous vehicle, the consequences of the choice are passed on to the body that determines the choice logic.
I believe the most responsible option is to pre-program the car with a default option that most people agree on, and allow it to be adjusted by the owner. Obviously, if the owner chooses their own life over certain others, then the owner is responsible for the consequences of that choice. If they program it to favor children over older people, females over males, dogs over cats, etc., then so be it.
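As a sketch of what that might look like -- the names and the weighting scheme here are invented for illustration, not any manufacturer's actual interface -- a factory default with an owner-adjustable dial, where 0 ignores bystanders entirely and 1 weighs everyone equally:

from dataclasses import dataclass

@dataclass
class EthicsSetting:
    # 0.0 = fully self-protective, 1.0 = fully utilitarian (hypothetical knob).
    utilitarian_bias: float = 1.0  # assumed factory default "most people agree on"

    def maneuver_cost(self, occupant_risk, bystander_risk):
        # Lower cost is better; the bias scales how much bystanders count.
        return occupant_risk + self.utilitarian_bias * bystander_risk

setting = EthicsSetting()
setting.utilitarian_bias = 0.1  # the owner dials the car toward self-protection

Whoever is allowed to turn that dial -- manufacturer, regulator, or owner -- inherits the responsibility the comment describes.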
Re: Re: Re:
It is a fool's errand to honestly believe we can code a machine in such a way that only the least valuable persons are killed in accidents. The variables involved exceed the total computational power on planet Earth.
Over time, a system that ONLY prioritizes occupant safety is likely to save the most lives AND cost less to boot!
Re: Re: Re: Re:
Everything is corrupt, deal with it.
Re: Re: Re:
Except the human would most likely not have access to that information prior to the incident whereas the computer might.
"I believe the most responsible option is to pre-program the car with a default option that most people agree on, "
The most reasonable option is to have a manual override.
Re: Re: Re: Re:
and if most people agree that the black guy should be run over every time....???
never underestimate how collectively biased any group of people can become.
Re: Re: Re: Re: Re: Re:
An AI would be able to monitor the physics occurring during an accident at a rate impossible for humans who have not been extensively trained.
And this is all of course assuming we have developed an AI capable of outperforming a human, and I would say NO on that point at this time.
Re: Re: Re: Re:
You are assuming that:
1) The 'driver' is paying attention to the outside world, rather than sleeping or watching a video, etc.
2) Even if they are, they see the developing situation in time to take control of the vehicle.
Neither is likely to be true, as is hinted at by that Tesla crash.
Re: Re: Re: Re: Re: Re:
Similarly in cars, highway cruising is rather monotonous, and even under manual control people -- especially with cruise control -- become fixated on staying in their lane and fail to remain aware of what is going on around them. Under fully automatic control with steady running, especially at night, very, very few people would remain attentive to what was going on around them. Also, in many emergency situations in a car you may have a second or more to take control, but by the time you get your hands and feet on the controls it will be too late.
Re: Re: Re: Re: Re: Re: Re: Re: Re:
People become bored and inattentive while driving their antique non-AI vehicles every day. They put on quite the display during the daily commute: many are texting, chatting on the phone, eating, reading, applying makeup, shaving, or doing other things besides attending to their primary function -- driving the vehicle. Their lack of attention, usually while tailgating, leads to wrecks, costing everyone more for insurance and making the commute very slow.
Ethical Automated Cars for all? You bet!
"Oh! Well that's why it was in beta, it's not working right. It saved the single drunk executive instead of Orphan Annie and the Seven Dwarfs like it was supposed to. We'll get RIGHT ON fixing and updating that for our CTO. Now aren't you glad our leaders are running buggy software and finding the problems instead of you?"
Of *course* that's sarcasm. I'm NOT buying a car unless it's programmed to save ME. And everyone else is the same way.
Just react like a driver would and try to choose the best outcome but give the driver priority. If they don't like the way I'm driving they should stay off the sidewalk!
So the utilitarian model will never happen.
The first time I saw that "trolley problem"…
Any rail switch can be left between its two positions and then abandoned, guaranteeing a derailment and giving me enough time to get clear and far enough away to avoid arrest for damaging replaceable steel, instead of making someone die to protect a corporation's profit margin.
I protect all "victims" over mere property, and protect myself from those few who prefer that somebody, anybody, die to keep the trains running on time and profitable. ;]
Re: Re: The first time I saw that "trolley problem"…
For a trolley that one assumes truly does have one switch that only switches tracks, there would be a sophisticated camera and computer system to determine when the trolley has to slow down or stop. Not to mention that a trolley set up this way would be going slowly enough that, even if it somehow isn't programmed to start emergency braking upon detecting a stalled vehicle (and the kid on the other track, and the fact that the conductor hasn't pulled the switch to change tracks), it would be programmed to travel at speeds slow enough to dent the side of the bus and then push it along, causing at most moderate damage (broken arms or legs; painful, but not life-threatening) to the kids inside. A painfully slow trolley, but ten miles an hour is still at least twice as fast as an average human walking.
Re: Re: Re: The first time I saw that "trolley problem"…
Damn it, sometimes even in proofreading I'll still miss that one little word that changes how an entire sentence is to be read.
Re: Re: Re: The first time I saw that "trolley problem"…
If you cannot directly answer a question, you make it clear you are either preparing to deceive or are already participating in a deception!
Sadly far too many people have not developed the maturity to identify these things. You just made yourself an example of that immaturity.
There's your problem:
If such a situation occurred in real life, my evasion is obvious enough that many people would reach it quickly enough to save everyone and render the "question" moot. Most people start with the instinct "stop that trolley," even if it is not presented as an option, and finding the obvious solution [the switch] literally at hand, set it and run. That's why hypothetical games don't apply well to reality. When lives are at stake, nobody thinks about the "purpose" of the choices in front of them. They do or do not. ;]
This is why "ethical programming" for a self-driving car can only fail. While manufacturers could code various scenarios, they cannot come up with the instincts necessary to counter impossible problems with impossible solutions that may work. The most ethical, practical approach for them is to focus on occupant safety and leave the Kobayashi Marus to the drivers; even though those drivers may often fail, sometimes they will succeed in ways that can't be foreseen or programmed.
Besides, keeping the programming simple and robust is insurable. I'm sure most insurers have already reached the same conclusion.
Well Duh
This is, effectively, one of the greatest reasons that humans have a concept of hypocrisy, and why we have the saying 'few people practice what they preach.'
Re:
http://auschwitz.dk/Kolbe.htm
Instead of asking if the glass is half full, or half empty, how about acknowledging that the damned glass is the wrong size?
With that in mind, my other choice would be to start programming the cars to recognize pre-accident conditions (rain, fog, snow, ice, high wind, residential vs. highway zones, traffic density, hills, curves, etc.) and apply a 10% reduction in speed for each condition recognized -- or maybe, as in the case of fog, slow the car down until a certain visibility-to-speed ratio is achieved, or, in the case of rain or ice, until a certain stopping ability per foot is achieved. Make the cars move slowly enough that appropriate action may be taken, safely and in time, when the paradox arises.
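A rough sketch of that heuristic (the condition names are invented, and the 10% cuts are read here as compounding rather than additive):

def adjusted_speed(base_limit_mph, conditions):
    # Knock 10% off the target speed for every recognized hazard condition.
    hazards = {"rain", "fog", "snow", "ice", "high_wind",
               "residential", "dense_traffic", "hill", "curve"}
    recognized = conditions & hazards
    return base_limit_mph * 0.9 ** len(recognized)

print(adjusted_speed(65, {"rain", "fog"}))  # 65 * 0.9 * 0.9 = 52.65 mph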
I wanna keep reading before I rock this boat.
So, please keep commenting. :)
Re: I wanna keep reading before I rock this boat...
Herbert Spencer coined the phrase "Survival of the Fittest." It is generally accepted as an argumentative fallacy used in philosophical circle jerks, while at the same time survivability has been attributed more and more to a general natural law -- wherein that natural law is biologically designed for an entire species, and not just the one or few mutations that enjoy a capitalist, elitist advantage, and where survival is directed, or based, more or less on the greater good of that species. Question: are Brazilian army ants more deserving or better equipped to survive than your red ants -- and your black ants for that matter, which we find farming Aphidoidea for their honeydew, protecting them on the plants they eat, storing some in nests for winter, and existing in ant mutualism? These are two very different circumstances, but if you were to put these two species' models opposite each other, one is certainly going to win that survivability question. The big factor is: what do we put where? Do we bring the army ants north? Or do we send our friendly ants south?
Consider, please, the use of an atomic weapon at the end of the war in the Pacific, against the Japanese in 1945. Was the response justified for the greater good of the species as a whole, or was it a quick end to a difficult situation? Does the survival of millions outweigh an initial 140,000 deaths in a few seconds of sun-like wrath on Earth? President Harry S. Truman, warned by some of his advisers that any attempt to invade Japan would result in horrific American casualties, ordered that the new weapon be used to bring the war to a speedy end. Millions survived at the cost of well over one hundred thousand lives, but it was an acceptable, human-made decision. There was the greater good to consider. Those bombings, and their ethical justification, are still debated.
You are saying to yourself, "Is this guy high? Hung over? Extreme?" No; I am trying to be as objective as the machine that is going to be using the equations in its decision to run you down in the intersection if you're waiting for a bus. There are survival models (PDF) -- thirty-eight pages of generally accepted survival models -- that could be used in determining your fate, my fate, a child's fate. This new electronic brain will not have the luxury of months of consultations with cabinet members, friends, or opinions from millions. For a nanosecond, it will be entirely on its own -- without your input, or your opinion of your net value to the species.
Now, let us consider what is, or constitutes, the greater good. Is the greater good the number of people who do not die because of a human interpretation of the greater good? Or do we shunt the human factor out of the decision making, and what has the greater value in the decision that we do not want, or wish, to make? Should we, or could we, allow ourselves to fuck with the settings of survivability? People -- hundreds of people -- die every day in their autos, and it is always sad, always permanent and always human error… take that human out of the list of variables, and now it's the "car's fault".
FACT: it is going to be a human decision no matter the method used to make a nanosecond response. In the end, every single loss-of-life accident will come down to one of dozens of laws that are, in truth, human equations used by the electronic brain in determining what survives -- and I say what because this decision will be made by a machine, so do not anthropomorphize an electronic brain. What survives will be a function of what does not survive. I do not believe we are capable of sacrifice for the survival of five, ten, or fifty people. No one wants to die -- it is programmed into us. It took three and a half billion years, but it is programmed into us all. There are stellar examples of sacrifice at a human level, but one could effectively argue that these individuals are broken, in the sense that they are forfeiting their life for the lives of others -- their instinct is faulty. It is a strong, solid, good argument, but what if… just what if this is actually the default setting for a species' survival? The needs of the many outweigh the needs of the few, or the one, in this case? Is your life and occupation more important than the five, ten, or twenty people standing at the bus shelter? The computer won't be thinking of your occupation, or whether you are a provider; it is a numbers game. This isn't Minority Report, and the computer doesn't moralize over its action, or pre-determine whether one or several of those people waiting for their trolley are suicidal, homicidal, sociopaths or psychopaths; it will only look at the numbers, and the options. Yes, now we get the options. It doesn't care if you're involved in the cure for cancer, or even if you are one of the mathematicians or statisticians who programmed its electronic brain. Does the electronic machine choose the bus stop, or does it drive you into the cement base of a light standard? You have a chance of survival, but the people at the bus stop are guaranteed theirs. The brain will be set automatically, and set without all the baggage that is human existence.
It is said that we can react to a situation in two-tenths of a second, but it is my honest opinion, and belief that we operate quite literally at the speed of light. We factor almost everything we have acquired in our existence into a life or death decision, or a decision that will affect any one of us for the rest of our life. Somebody once threw a ball at my head; he was six meters from me. I know for a fact that I moved my head out of the way of that ball, which I saw in my peripheral vision, in less than two-tenths of a second – absolutely everything, every variable was calculated and executed instantly – at the speed of light in a human body, nerve impulses and all. Maybe one-tenth of a second, because that ball, doing eighty kilometers an hour, missed me by three centimeters – an inch, at least. It was funny, but if it had connected, a chain of events would have been set in motion that would have quite literally, altered both our lives, for the rest of our lives… that part I cannot explain to you. It is just the way it is.
We generally make decisions that are the most utilitarian to ourselves; then we consider our significant others, and more often than not it affects others in profound ways. I have learned this from personal experience. Who do programmers even consult with? Do they analyze the Ted Bundys of the automobile industry, or the Asians, or do they take a Woodstock point of view when gathering data for their variables? I would much rather have an electronic brain make the choice -- objective, neutral, considering "at hand" and "to hand" logically; unbiased decision making. But this point is moot. We have not reached a level of technology where every decision made by an electronic brain will be one hundred percent the best possible outcome for all parties involved. We are only at the "lesser of two evils" stage in programming, and yet we can't get past the greater-good debate.
Honestly, I think we would need quantum level computing capacity the size of a matchbox before we could ever begin to end an argument like this.
The Tesla S did the best it could, given the technology, and every model of autonomous car will fail at some point, resulting in a death or deaths, simply because of technology.
manufacturers don't even try to think about it.
All you anti-scientific philosophers
The solution to the trolley problem is easily found by the scientific method.
Try the experiment repeatedly and observe what people do.
Solved. :-)
Re: All you anti-scientific philosophers
Every single time.
As I.T. Guy put it: "Sorry but wrong. I'm saving my kid every time. No doubt at all."
This isn't to say that self-driving cars are doomed to become murder machines. It's very possible that self-preservation programming works, or that programming a car never comes down to selection algorithms.
But when choosing between preservation of the whole community, or benefiting the self (including the family) at the expense of the community, the naked ape has a really bad habit of choosing the latter. Every time.
Re: Re: All you anti-scientific philosophers
I find it amusing that people are saying things like "No doubt at all" and "No question" when it comes to prioritizing their own child vs. a busload (or city full, or world full) of other people's kids.
The phrase really shows it's an emotional statement, not a reasoned one. "No question" means, literally, that the speaker hasn't thought about it -- they're just reacting emotionally.
Remember that every extra decision the car has to make is that much less processing power dedicated to other things, during an emergency situation. If I'm about to crash, I don't want my car to waste its time to determine if one of a thousand possible edge cases is happening. And every extra decision is another opportunity for a coding bug. Not to mention that you're taking the coders' and testers' time away from other things they could be looking at.
Re:
let weak, pathetic, slow hu-mans go the hu-man speed limit; robot speed limit is 50 MPH more ! ! !
Re: Re: robot speed limit is 50 MPH more ! ! !
Speed limits are to tell fallible humans how fast is "too fast".
Most drivers don't need speed limits at all -- one standard way of setting them is to use the 85th percentile speed drivers choose when the road has no marked limit.
It's the minority of human drivers that are nuts (drunk, teenagers with hormone poisoning, etc.) who need the speed limit signs.
Self-driving cars shouldn't need *any* speed limits. They should be able to figure out, for themselves, how fast they can safely go.
It's a strawman...
I hate this dumb question
The answer is "immediately brake as hard as they can and keep their lane."
Human drivers are never, ever required to make value judgements about the relative worth of lives while driving and certainly not ever required to make those judgements in the very moments of an accident. They are required to follow the traffic laws, and that's it.
So a robot driver will be programmed to follow the traffic laws, and if something untoward happens on the road, it will be programmed to follow the traffic laws. If there are six pedestrian nuns and a kitten in one direction, and a bus full of refugee orphans in the other, it will be programmed to follow the traffic laws. IT WILL UNDER NO CIRCUMSTANCES BE PROGRAMMED TO TRY AND AVOID AND/OR HIT ANYTHING. IT WILL BE PROGRAMMED TO FOLLOW THE TRAFFIC LAWS AND WILL NEVER EVER BE FAULTED FOR CIRCUMSTANCES NOT UNDER ITS CONTROL SUCH AS THE PARTICULAR ARRANGEMENTS OF NUNS AND ORPHANS AT ANY ARBITRARY MOMENT.
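In code, the policy being argued for here is almost trivially small. A sketch, with invented control names, purely illustrative:

def emergency_response(hazard_ahead):
    # The whole "ethics module": maximum braking, hold the current lane.
    # Note what's absent: no inputs describing who is standing where.
    if hazard_ahead:
        return {"brake": 1.0, "steering": "hold_lane"}
    return {"brake": 0.0, "steering": "follow_route"}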
Re: I hate this dumb question
If someone jaywalked or illegally ran in front of an automated vehicle, should the vehicle be programmed to dodge the offender even at the expense of the driver? If so, could that dissuade pedestrians from following the law?
I imagine if the only parties involved in a potential incident were automated vehicles, then perhaps all of them would be following the laws (unless one had a glitch?). But if it's an automated vehicle vs. a non-automated vehicle (or one that was following the laws vs. one that's not, due to a glitch -- though how is the car going to know that it's not the one with the glitch?), how much weight should be given to the fact that the other party involved is not following the law when trying to decide whom to save?
Also is the other party not following the law arbitrarily or are they not following the law to avoid an accident (which, if so, is probably not against the law ...?)? If automated vehicles didn't give weight to those that arbitrarily don't follow the law it could avoid dissuading other parties involved to not follow the law when encountering an automated vehicle.
How people interact with automated vehicles is partly going to be based on their knowledge of how those vehicles behave and what kinds of decisions they will tend to make under various situations and if the vehicle acts naively those interacting with it might take advantage.
Re: Re: I hate this dumb question
"If automated vehicles didn't give weight to those that arbitrarily don't follow the law ..."
should read
"if automated vehicles don't give weight to the fact that another party involved in a traffic exchange is arbitrarily breaking the law ... "
(arbitrarily as in they're not doing it to avoid an accident or injury).
Re: Re: I hate this dumb question
If an accident only involved automated vehicles, then something happened outside their control (mechanical failure, undetected hazard, etc.) and neither would be at fault (no-fault accidents do happen). If an accident involved an automated car hitting a human-driven car that was not following the law, then the automated car would not be at fault. How much the human is at fault is an entirely different question. No human can be expected to account for the actions of the drivers/pedestrians around them, only their own actions. If you're following the law and someone around you does something stupid and you hit them, you're not at fault.
There is no decision about whom to save. It's a red herring. It's pointless to the point of being idiotic. Human drivers are never required to make those decisions, so a computer and/or the programmers of said computer need not worry about them either.
Your other questions, about how people will interact with automated cars, are unrelated to how the car itself should act. A pedestrian who assumes that automated cars won't run them down will soon find out that, yeah, sometimes they will -- and the cars won't be at fault, because they were following the law, just as a human who was driving and following the law would not be at fault.
This really is a stupid, stupid question, because we already have a hundred years of experience managing drivers, building traffic law, and determining fault in an accident. We have had so many accidents, and court cases over said accidents, and new law related to said accidents, that the biggest questions related to fault are in determining exactly what happened during the accident. Once the events have been discovered, the law makes it very clear where the fault lies.
So: you program your automated car to follow the law, and stop asking questions human drivers wouldn't ever be required to answer.
Re: Re: Re: I hate this dumb question
The whole question is idiotic. Automated cars will do the best they can to avoid accidents, just as people do. Period.
The traffic laws have been tweaked for over 100 years - they're pretty good. If everyone follows the rules cars will virtually never smash into each other or pedestrians. In the rare cases where outside factors (mechanical failures, weather, etc.) intervene, the car will simply do the best it can.
There really aren't cases where such choices need to be made, and there's no payback for even worrying about it.
Human drivers don't think about this in accidents - things happen too fast for that.
(Which is why manual override is not a solution.)
Even in the crazy hypotheticals, it just doesn't matter. Automated cars will avoid 99/100 or 999/1000 of the accidents that happen today.
Who the 1 in 100 or 1000 are that don't get saved doesn't matter. What matters is that 99/100 or 999/1000 are saved.
Re: Re: Re: Re: I hate this dumb question
Okay, I agree with everything else you said except this. Our traffic laws are bunk; many of them are used as corruption bait for officials and as revenue producers for cops.
Re: Re: Re: Re: Re: I hate this dumb question
I also agree that automated vehicles shouldn't be held to a higher standard than a normal, ordinary human. Requiring automated vehicles to consider these dilemmas is holding them to a higher standard, which is not acceptable, since most people who drive don't consider such things.
However, that's not to say these possible dilemmas shouldn't be considered. There should be no legal requirement for automated vehicles to consider them, just as humans aren't required to consider them on a driver's test, but optionally considering them in our discussion of how we think they should make decisions is OK.
Re: Re: Re: Re: Re: I hate this dumb question
I'm saying if everyone follows the existing rules, crashes are astronomically unlikely.
Re: I hate this dumb question
If a driver were faced with a choice of either veering into empty oncoming lanes or striking another vehicle crossing the road at 50 mph that had failed to yield, the law would in fact penalize the driver for choosing to cross into the opposite-facing lanes, while it would not penalize the driver for striking the vehicle that failed to yield -- in fact, the driver could potentially come out ahead for doing so (depending on potential insurance/lawsuits).
This is actually one of those subjects that gets asked about a lot in defensive driving courses, and typically the answer given is simply: given that you could be injured in the crash, odds are you're better off swerving and just accepting that you got a ticket for doing the right thing.
But suddenly, when the 'choice' is taken away from us and made by someone else, everyone gets all up in arms and emotional about it. These kinds of decisions ARE made by real people -- I'm intimately aware of it, as my own mother got a serious fine for just such a situation as I've described. I can't fathom why people don't recognize how important it is to have this kind of greater-good decision making built into an autonomous vehicle -- and how they don't recognize this is just 'moral panics as usual' at work, to boot.
Re: Re: I hate this dumb question
The dumb question at hand is about somehow making a computer choose which accident to avoid and which accident to deliberately drive itself and its passengers into, and then it tacks the calculus of death onto that for good measure. This is a dumb question because, even if we do decide that autonomous cars are capable enough to allow law-breaking as an emergency escape option, as soon as the escape option ceases to be entirely without consequence it will be discarded for the default option of "keep the lane, brake as hard as possible". Computers will never, ever be required to make a decision about whom to hit and whom to avoid, or whose life is more valuable. If there is no clearly better option (assuming we even allow them the option) they will be programmed to follow the law.
Aside: assuming that we do allow an autonomous car to break the law, I would expect that a requirement for doing that would be for the car to rat itself out immediately and upload all sensor data from around the time of the incident. This would be a powerful tool in redesigning streets to make accidents less likely to begin with. If the same near-accident happens over and over in the same place it might indicate a problem with the road design.
Re: Re: Re: I hate this dumb question
The thing is, most rules or laws have exceptions or 'superseding rules' under the right circumstances.
For instance, a very high-ranking superseding rule generally is: don't get into an accident. So if you had to break other laws to avoid an accident then, technically speaking, you aren't breaking the law, because the exception to those other laws is that you can break them without breaking the law if it's necessary to avoid an accident.
It's like a law that says no crossing a double yellow line except into a driveway. Another exception to that, generally, is if you need to do it to avoid an accident; but most of the time that goes without being said, because it's implied. After all, the purpose of the law is so that we can drive around safely and avoid accidents, so if you are in a situation where you need to break less important laws to follow the more important law of not getting into an accident, you aren't breaking any laws.
The issue here is that driving school and the law don't necessarily deal with all of these moral dilemmas, and it's not really for the law to regulate morality, which is why the law doesn't deal with them that extensively.
Re: Re: Re: Re: I hate this dumb question
https://en.wikipedia.org/wiki/Trolley_problem
Your job is to operate the lever to ensure that each train heads in the right direction when you suddenly and unexpectedly find yourself in this dilemma. What should you do?
In this situation the automated car is analogous to the lever operator who finds himself in a moral dilemma. To answer the question of whether the automated car should be legally required to consider morality ahead of time, let's consider the standards that would be placed on the lever operator. When he applied for the job, did the law require him to first pass a test on such a potential moral dilemma before becoming a lever operator? If not, then why should the automated vehicle be held to a higher standard?
Re: Re: I hate this dumb question
If you break a traffic law in order to avoid an accident, you're extremely unlikely to be penalized. 99% of cops will not issue a ticket in that circumstance (although they could), and 90% of courts will waive the penalty if you explain.
My own wife got out of a speeding ticket by explaining to the court why it was unsafe to stay near a weaving driver.
No more "taxed" revenue. City's would no longer get money for speed traps. No more fake driving reasons for police to pull you over to "search" your car. Oh, I saw you swerve. Oh, you were driving to fast/slow. Terry stops for police would be a thing of the past, because if you are drunk your car would drive you. How could they stop you to steal your money now? They police and city would lose out.
No more insurance revenue. If the accident rate falls dramatically. How can they insurance places claim to save you money over the other guys. The margins might be thin. Instead of hand over fist big money.
The police can't allow automated cars to move forward. The big business insurance agencies. The city's that rely on the extra money.
One thing about the trolley problem...
Also, a little perspective goes a long way. I'd happily set my car to ethical (e.g. let me die rather than kill more than one other person) if I knew I was significantly safer than with a responsible but human driver. That's a risk I'm willing to take.
What is curious is whether an ecosystem of occupant-prioritizing cars or one of bystander-prioritizing cars is significantly safer. I can see the difference in victims in that case falling to the tragedy of the commons. But I'd like to think the annual figures would be countable on one hand, like people killed by armed four-year-olds.
Re: One thing about the trolley problem...
If you are going to expand on the topic at least keep it in the same universe?
Re: Re: One thing about the trolley problem...
Car fatalities in the human world are measured per capita because there are an awful lot of them. When we create self-driving vehicles, I assume incidents will still be numerous -- fewer than with human drivers, but not few enough that we could list them all on Wikipedia.
But when it comes to incidents in which a car could have saved more or different people by behaving differently, I suspect those scenarios will be few enough to qualify for a Wikipedia list.
When it comes to programming self-driving cars, the question is a matter of diminishing returns. At what point does the additional code to accommodate specific situations cease to prevent accidents or save lives? That is what is going to determine what automated cars will do.
That's the first problem: the utilitarian model is not the most moral.
In an ideal world, it (arguably) would be, but in an ideal world it wouldn't be needed anyway, so that's kind of pointless to think about. But we live in the real world, and the real world has hackers, malicious people, and computer security flaws.
In any such world, it's a horrifically immoral act to create functionality whose explicit purpose is to kill the people inside the car and put it into a possibly insecure computer where someone could hack it (or spoof the sensor inputs to make it think it needs to be activated) to murder the people inside!
"the utilitarian model is not the most moral [among models of morality]"
I don't know the answer, myself, but wouldn't even dare to suggest that utilitarianism is not it. To the best of my comprehension it's still a candidate.
But since you do dare, I'm interested in your argument.
Re: "the utilitarian model is not the most moral [among models of morality]"
The most moral thing for an autonomous vehicle manufacturer to do in this situation is to design the car to always make protecting its inhabitants the highest priority. Creating a way for the car to do otherwise is creating a way for a malicious actor to activate that code and kill people with it -- and as numerous IoT security issues have shown us, hacking and computer security are very real concerns.
The "trolley problem", by contrast... well, there's a reason it's known as a thought experiment, rather than a case study.
Two simple solutions
- Throw the switch far enough to derail the trolley, but not far enough to complete the transfer to the alternate track. This problem is oversimplified and presents an incomplete set of parameters and options. And if your child came to work with you, what is s/he doing out on the tracks? What kind of a parent are you, anyway?
To the autonomous car problem:
- This presumes either incompetent programming or human reaction times, or both. Following distance should take into account the stopping distance needed. What's in the adjacent lanes is immaterial.
Jamming the switch.
The question becomes even more vague as the victim of action is further removed from the situation. For instance:
~ You're watching the trolley rumble down toward the five victims from the vantage of a skyway directly above the tracks. Next to you is a very large man. Computing the physics involved, you realize you could push the fat man off the bridge; the trolley will ram into him and stop just in time to spare the five secured victims. Do you?
~ You can pull the lever to rig it to derail the trolley, but again, in your sharp awareness of physics you realize doing so will send the trolley car careening into a backyard, where a gardener will get pulverized and killed. Is that better or worse than sparing the lone victim on trackway #2?
~ As a surgeon (completely different scenario) you meet a stranger in town whose organs, if harvested, would save the lives of five otherwise-able adults waiting for organs to transplant. Without waylaying this stranger, all five transplant patients will die. Do you murder the stranger?
The whole point of the trolley question is not to find a third option, but to consider at what point it becomes too immoral to actually change the circumstances, even when the outcome is a net positive (in this case, more lives saved). And yes, we're indulging perfect information, in which we know in advance the outcome of taking action, or not taking action.
The problem with applying the trolley problem (or the ticking time-bomb problem) to the real world at all, is that we seldom have perfect information. We can torture the wrong guy. We can find out that the people we spared were going to die anyway. We can find out the person we killed would have lived to save other lives. It's not applicable to the real world.
People are Selfish
If any algorithm for the greater good is in my future car, it had better not be linked to the spying mechanism, or else it will be disabled. I'm just saying that auto manufacturers shouldn't put all of their technology in one basket.
Re: People are Selfish
Not really; this is the same the world over. It's a part of the human condition, the same as selfishness.
The government as it currently is would never let this chance go by without trying to compromise a system like this and use it for spying on, or murdering, unwanted citizens.
However, there is one issue that may actually make the 'self-preserving' car the way to go regardless of what we think: we've seen people programming software with racial/social bias (i.e., the code has prejudice embedded by its makers). Considering this, interconnection be damned: when things go wrong, each car should try to figure out a way to preserve itself.
Sorry but wrong. I'm saving my kid every time. No doubt at all.
Really though, with everyone having self-driving cars, the roads should be safer, as long as you keep humans out of the mix throwing a wrench into things.
Re:
And who exactly is doing the programming for these things? Those very same "humans" that you want out of the mix. And how do we account for inherent bias in the code...?
Are Autonomous Cars Safe?
Autonomous cars are currently being trialed all over the world and their eventual widespread implementation could revolutionize not only the transport industry, but the way we travel in general. However, recent high-profile accidents involving autonomous vehicles have sparked debates as to how safe driver-less cars really are. Uber has recently put their autonomous vehicle trials on hold after a fatal accident in the US, while Google has put safety drivers in their driver-less cars to ensure that someone is able to take control should things become unsafe. Are Autonomous Cars Safe? https://www.lanner-america.com/blog/autonomous-cars-safe/