People Support Ethical Automated Cars That Prioritize The Lives Of Others -- Unless They're Riding In One

from the I'm-sorry-I-can't-do-that,-Dave dept

As self-driving cars have quickly shifted from the realm of science fiction to the real world, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? For example, should your automated vehicle be programmed to take your life when its on-board computers realize the alternative is the death of dozens of bus-riding school children? Of course, the debate technically isn't new; researchers at places like the University of Alabama at Birmingham have been contemplating "the trolley problem" for some time:
"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"
It's not an easy question to answer, and it obviously becomes thornier once you begin pondering what regulations are needed to govern the interconnected smart cars and smart cities of tomorrow. Should regulations focus on a utilitarian model, where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self-protective" model)? Would companies like Google, Volvo and others be more or less likely to support the former or the latter for liability reasons?

Not too surprisingly, people often support the utilitarian "greater good" model -- unless it's their life that's at stake. A new joint study by the Toulouse School of Economics, the University of Oregon and MIT has found that while people generally praise the utilitarian model when asked, they'd be less likely to buy such an automated vehicle or support regulations mandating that automated vehicles (AVs) be programmed in such a fashion:
"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."
To further clarify, the surveys found that if both types of vehicles were on the market, most people surveyed would prefer that others drive the utilitarian vehicle while they continue driving self-protective models themselves, suggesting the latter would sell better:
"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."
This social dilemma sits at the root of designing and programming ethical autonomous machines. And while companies like Google are weighing these considerations, if utilitarian regulations mean lower profits and flat sales, it seems obvious which path the AV industry will prefer. That said, once you begin building smart cities where automation is embedded in every process from parking to routine delivery, would maximizing the safety of the greatest number of human lives take regulatory priority anyway? What would be the human cost of prioritizing one model over the other?

Granted, this is getting well ahead of ourselves. We'll also have to figure out how to change traffic law enforcement for the automated age, have broader conversations about whether or not consumers have the right to tinker with the cars they own, and resolve our apparent inability to adhere to even basic security standards when designing such "smart" vehicles. These are all questions we have significantly less time to answer than most people think.

Filed Under: ai, autonomous cars, ethical choices, trolley problem


Reader Comments



  1. Anonymous Coward, 1 Jul 2016 @ 7:53pm

    And then...

    What happens when the family car decides it must kill the young family of 5 rather than a group of 6 near-death seniors...

    It doesn't matter in the grand scheme of things - people will die no matter how "safe" we try to make things, and we just have to accept it. This is the reality of mortality, and each generation seems less willing to accept it.


  2. Anonymous Coward, 1 Jul 2016 @ 8:02pm

    Solution

    Design all cars to be self-protective and then program the buses of small children to avoid the numerous liquid thermonuclear explosive death canisters on the road.


  3. art guerrilla (profile), 1 Jul 2016 @ 8:12pm

    Re: And then...

    and then...
    i was thinking of a somewhat parallel alternative future: the car software is hacked such that it overrides whatever the factory set, and has 'maximum' protection for the occupants...
    i mean, i'm certain the software could never be hacked or overridden or anything...
    no bugs or nuthin'...
    cars don't ever have power glitches...
    ...and they'll be flying by 2017 ! ! !


  4. Anonymous Coward, 1 Jul 2016 @ 8:15pm

    Damn Easy!

    "It's not an easy question to answer"

    Oh yes it is!

    Save your child, that's the answer! The number of people that have to die is not relevant to the problem. It's a classic diversionary tactic to make people think about unimportant shit!

    While no human is more valuable than another, there is a fundamental dissonance required to wrap your head around sacrificing your child for the greater good to be served.

    We pontificate on these bullshit scenarios while simultaneously giving money to Monopolies like the NFL, FIFA, and Gambling. Humans are intrinsically corrupt as hell! We all pretend we are somehow better than this and concoct these stupid scenarios in self masturbation while playing out some of the worst in our very lives without batting an eye.

    As a parent you damn sure better pick your child over the others because if you are willing to sacrifice your child, who else will they have to protect them?

    I have no children and know the answer to this one EASY!


  5. DCL, 1 Jul 2016 @ 8:15pm

    Morality setting

    Have the "start button" be a three way switch... "altruistic" - Off - "Self-protective"

    So the car does its best to save everybody but when it comes down to a choice the selection is already made by the driver.
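
    Something like that switch in code might look as follows (a Python sketch; every name here is an illustrative assumption, not any real vehicle API):

        from dataclasses import dataclass
        from enum import Enum

        class MoralityMode(Enum):
            ALTRUISTIC = "altruistic"            # minimize total expected casualties
            OFF = "off"                          # switch centered: car won't start
            SELF_PROTECTIVE = "self_protective"  # occupants outrank bystanders

        @dataclass
        class Maneuver:
            name: str
            occupant_risk: float   # expected occupant casualties
            bystander_risk: float  # expected bystander casualties

        def pick_maneuver(mode, options):
            """Choose among unavoidable-crash options per the driver's setting."""
            if mode is MoralityMode.OFF:
                raise RuntimeError("no morality setting selected; car stays parked")
            if mode is MoralityMode.ALTRUISTIC:
                # everyone counts equally
                return min(options, key=lambda m: m.occupant_risk + m.bystander_risk)
            # self-protective: occupant risk first, bystanders only break ties
            return min(options, key=lambda m: (m.occupant_risk, m.bystander_risk))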


  6. Anonymous Coward, 1 Jul 2016 @ 8:20pm

    Re: Morality setting

    O man... we can play with this one a bit.

    Government controls the setting, and if there is an unavoidable accident the cars prioritize the death of the people the government wants dead first!

    It has to be self protective, nothing else!


  7. John David Galt (profile), 1 Jul 2016 @ 8:36pm

    Existing federal safety standards already impose this kind of thing on drivers.

    Up to about 1970, everyone knew that the safest car was "a Sherman tank" -- the car with the most solid frame possible, such as a Volvo of that era, so that if you and someone else collided, your car would probably leave the scene undamaged, and of course so would you.

    Then the government started forcing fuel-economy requirements, and with them came stupid ideas such as "crumple zones." Crumple zones don't really protect the people inside the car that crumples. The real explanation is that NHTSA had decided, in secret, that it was a bad idea to let any driver have a vehicle so solid that he can be confident a collision won't cost him anything. So ever since, they've been forcing us to accept cars made mostly of plastic and other crap instead of solid, heavy metal.

    It's time that the public began to fight back by retrofitting cars to be safe again after we buy them, or by keeping old cars in commission or both. Especially if it will also allow us to avoid having black boxes logging our actions for government to snoop on.

    It's absolutely rightful for a driver who has the right of way to be capable of bullying those who might violate it.


  8. Anonymous Coward, 1 Jul 2016 @ 9:24pm

    Ethics for machines is stupid. What's next, politics? Religion?

    If an automated car had a choice of killing a Republican or Democrat, which one should it kill?
    Pro-Lifer or Pro-Choicer?
    Christian or Muslim?
    Where does it end?


  9. Ben (profile), 1 Jul 2016 @ 9:28pm

    Re: Re: And then...

    My view is that the car gets programmed to decide to hit the (soft) humans rather than a wall, thus minimizing the damage to itself...


  10. crade (profile), 1 Jul 2016 @ 10:02pm

    Re:

    You can't just say "this is stupid" and not handle the case; that isn't how software development works. I suppose you could program it to flip a coin in all situations where there are multiple bad outcomes if you wanted, but that would be pretty irresponsible imho.
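
    (The coin flip really is a one-liner, which is part of why it would be so irresponsible - a hypothetical Python sketch, no real API:)

        import random

        def fallback_choice(bad_options):
            # Last resort when nothing ranks the remaining bad outcomes:
            # pick one uniformly at random.
            return random.choice(bad_options)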


  11. Anonymous Coward, 1 Jul 2016 @ 10:17pm

    Re: Re:

    Indeed. It's not about whether the computer is making the ethical choice; it's about having the humans who build or program the machines set the agreed-upon ethics.

    The interesting question is whether we allow the company building the machines, the government regulating the machines, or the owners of the machines to set the logic.

    Currently, in a manually-driven vehicle, the driver gets to choose whether to run down the "Republican or Democrat" - but once you code the choices into the autonomous vehicle, the consequences of the choice are passed onto the body that determines the choice logic.

    I believe the most responsible option is to pre-program the car with a default option that most people agree on, and allow it to be adjustable by the owner. Obviously, if the owner chooses their own life over certain others, then the owner is responsible for the consequences of their actions. If they program it to favor children over older people, females over males, dogs over cats, etc., then so be it.
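
    (As a sketch, that amounts to factory defaults plus an owner-override layer. The names and weights below are made up for illustration, assuming some hypothetical planner consumes them:)

        # Factory default most people agree on: everyone weighted equally.
        DEFAULT_WEIGHTS = {"occupants": 1.0, "pedestrians": 1.0}

        def effective_weights(owner_overrides=None):
            """Owner tweaks replace the defaults; the owner then owns
            the consequences of those choices."""
            weights = dict(DEFAULT_WEIGHTS)
            if owner_overrides:
                weights.update(owner_overrides)
            return weights

        # An owner who favors their own life over certain others:
        # effective_weights({"occupants": 2.0})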


  12. suomynonA, 1 Jul 2016 @ 11:03pm

    Ethical Automated Cars for all? You bet!

    I'm sure the CEOs and other "important people" will have special "experimental" versions of the software that conveniently and accidentally save the current occupants.

    "Oh! Well that's why it was in beta, it's not working right. It saved the single drunk executive instead of Orphan Annie and the Seven Dwarfs like it was supposed to. We'll get RIGHT ON fixing and updating that for our CTO. Now aren't you glad our leaders are running buggy software and finding the problems instead of you?"

    Of *course* that's sarcasm. I'm NOT buying a car unless it's programmed to save ME. And everyone else is the same way.

    Just react like a driver would and try to choose the best outcome but give the driver priority. If they don't like the way I'm driving they should stay off the sidewalk!


  13. Anonymous Coward, 1 Jul 2016 @ 11:12pm

    There is nothing in the known universe that could make me decide to run over my kid. Sorry.


  14. Asmilwho, 1 Jul 2016 @ 11:57pm

    I don't think we live in a world where the "elite" will accept a technology that values their superior lives the same as some rube's from a fly-over state.

    So the utilitarian model will never happen.


  15. Aaron Walkhouse (profile), 2 Jul 2016 @ 1:28am

    The first time I saw that "trolley problem"…

    …I had the answer in a split second:

    Any rail switch can be left between its two positions and then abandoned, guaranteeing a derailment and giving me enough time to get clear and far enough away to avoid arrest for damaging replaceable steel instead of making someone die to protect a corporation's profit margin.

    I protect all "victims" over mere property, and myself from those few who prefer that somebody, anybody, die to keep the trains running on time and profitable. ;]


  16. Tom, 2 Jul 2016 @ 2:08am

    Re: The first time I saw that "trolley problem"…

    I'm sure that you feel clever for dodging the question, but you're not. The purpose of this survey isn't to determine whether human life is more valuable than the trolley, because there's obviously no controversy there.


  17. You are being watched (profile), 2 Jul 2016 @ 3:31am

    Re: Re: The first time I saw that "trolley problem"…

    Dodging the question is the entire point of pointless questions like this. It's the entire foundation of safety to ask such a hypothetical dilemma and find a way to keep such a situation from occurring.

    For a trolley that one assumes truly does have one switch that only switches tracks, there would be a sophisticated camera and computer system to determine when the trolley has to slow down or stop. Not to mention that a trolley set up this way would be going slow enough that if it somehow isn't programmed to start emergency braking upon detecting a stalled vehicle (and the kid on the other track, and the fact that the conductor hasn't pulled the switch to change tracks), it would be programmed to go at speeds slow enough to dent the side of the bus and then push it along, causing moderate damage (broken arms or legs; painful, but life threatening) to the kids inside the bus at most. A painfully slow trolley, but ten miles an hour is still at least twice as fast as an average human walking.


  18. You are being watched (profile), 2 Jul 2016 @ 3:35am

    Re: Re: Re: The first time I saw that "trolley problem"…

    (but not life threatening)*

    Damn it, sometimes even in proofreading I'll still miss that one little word that changes how an entire sentence is to be read.


  19. Paul Renault (profile), 2 Jul 2016 @ 3:56am

    Re: Re: And then...

    Y'know... I bet someone could make a pretty decent TV series based on that notion.


  20. Anonymous Coward, 2 Jul 2016 @ 4:57am

    That "trolley problem" isn't a problem. I save mine, every time.


  21. Anonymous Coward, 2 Jul 2016 @ 5:50am

    Well Duh

    I mean, this strikes me as kind of obvious. "Greater Good" arguments (of which I'm supportive) tend to be inherently logic/reason-based, so when you remove the emotional-response part of the equation, people will tend to gravitate towards 'choosing' that in a hypothetical context. But humans are fundamentally selfish creatures who (unless they deliberately work/train at it - and even for those it's hard) almost always react and choose things on the basis of emotional state. So as soon as you throw in things like 'Well, what if it was your kid and not someone else's by themselves?', you've added in that personal, emotional connection that most people will pick over the more 'rational' approach.

    This is, effectively, one of the great reasons that humans have a concept of hypocrisy, and why we have the saying 'Few people practice what they preach'.


  22. Padpaw (profile), 2 Jul 2016 @ 5:55am

    I have yet to meet someone who values the lives of others over their own life or their family's.


  23. Anonymous Anonymous Coward (profile), 2 Jul 2016 @ 6:31am

    Instead of asking if the glass is half full, or half empty, how about acknowledging that the damned glass is the wrong size?

    I am of two minds on this. First, if the car owner opts for self preservation, penalize them with a reduction in speed. Give them more time to avoid the accident, rather than be faced with a paradoxical dilemma.

    With that in mind, my other choice would be to start programming the cars to recognize pre-accident conditions (rain, fog, snow, ice, high wind, residential vs. highway zones, traffic density, hills, curves, etc.) and give a 10% reduction in speed for each condition recognized - or maybe (as in the case of fog) slow the car down until a certain visibility/speed ratio is achieved, or (in the case of rain or ice) until it can stop within the distance available. Make the cars move slowly enough that appropriate action may be taken, safely, in time, when the paradox arises.
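
    (As code, that rule is nearly a one-liner - a sketch with a made-up hazard list; note that 10% per condition compounds, so rain + fog + a curve takes a 100 km/h limit down to about 73:)

        def target_speed(posted_limit, hazards):
            """Cut speed 10% for each recognized pre-accident condition:
            n hazards => posted_limit * 0.9**n."""
            KNOWN = {"rain", "fog", "snow", "ice", "high_wind",
                     "residential", "dense_traffic", "hill", "curve"}
            return posted_limit * 0.9 ** len(set(hazards) & KNOWN)

        # target_speed(100, ["rain", "fog", "curve"]) -> 72.9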


  24. Monday (profile), 2 Jul 2016 @ 6:39am

    I wanna keep reading before I rock this boat.

    I wanna keep reading before I rock this boat, or even decide getting...


    So, please keep commenting. :)


  25. Anonymous Coward, 2 Jul 2016 @ 6:42am

    Airplanes

    How does it work with autopilot on airplanes?


  26. Max, 2 Jul 2016 @ 6:46am

    This also depends a lot on the specifics of exactly what question has been asked. As a matter of pure principle, I'd probably agree that a car prioritizing the greater good was "more moral" - but that doesn't mean I think anything like that should ever exist in the real world, making these kinds of judgement calls. And I would most definitely not accept one making such a call on my behalf, especially with my own life (or my loved ones') hanging in the balance.


  27. Anonymous Coward, 2 Jul 2016 @ 7:11am

    Re: Existing federal safety standards already impose this kind of thing on drivers.

    You might want to do a bit of study on basic physics. Injuries are directly related to the G-forces exerted upon the passengers during a collision, and if you're driving a "tank", even seat belts won't help you much at all. I'm gonna give you four scenarios: a tank and a car with a 3 ft crumple zone, each with and without a seat belt. I'm going to assume that the seat belt has 1" of stretch in it, and that upon impacting a solid stationary surface the human body will take 1/2" to decelerate.

    1. Tank, no seat belt: Car impacts object and stops with little to no damage. Passenger continues to move forward until hitting a solid surface in car (steering wheel or windshield). Passenger decelerates over a 1/2" distance.

    2. Tank, seat belt: Car impacts object and stops with little to no damage: Passenger is stopped by seat belt. Passenger decelerates over a 1" distance.

    3. Crumple zone, no seat belt: Car impacts object and decelerates over a 3' distance. Unfortunately, passenger continues to move forward until impacting stationary car. Passenger then decelerates over 1/2" distance.

    4. Crumple zone, seat belt: Car impacts object and decelerates over a 3' distance. Passenger effectively becomes part of the car body due to seat belt and in turn is decelerated over a 3' 1" distance.

    Now, assuming in all cases the car was initially moving at 60 mph, what were the G-forces experienced by each passenger? Using D = (1/2)AT^2 and V = AT, then solving for A, you get A = V^2/2D.

    So the person who decelerates over 1/2" experiences about 2900 Gs of force. Not good at all.

    The person who decelerates over 1" experiences about 1450 Gs. Still not good, but better.

    The person who decelerates over 3' 1" experiences about 39 Gs. Quite survivable.
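
    (Anyone who wants to check those figures can run them through A = V^2/2D directly - a quick Python script, working in feet:)

        G = 32.174  # one gravity, in ft/s^2
        V = 88.0    # 60 mph, in ft/s

        def g_load(stop_distance_ft):
            """Average deceleration from 60 mph over a given distance, in Gs."""
            return V**2 / (2 * stop_distance_ft) / G

        for label, d_ft in [("tank, no belt (1/2 in)", 0.5 / 12),
                            ("tank, belt (1 in)", 1.0 / 12),
                            ("crumple zone, belt (3 ft 1 in)", 37.0 / 12)]:
            print(label, round(g_load(d_ft)))
        # -> roughly 2889, 1444 and 39 Gs respectively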


  28. Aaron Walkhouse (profile), 2 Jul 2016 @ 7:35am

    There's your problem:

    You're stuck in a hypothetical question; thus stuck in a box.

    If such a situation occurred in real life, my evasion is obvious enough that many people would reach it quickly enough to save everyone and render the "question" moot.

    Most people start with the instinct "stop that trolley", even if it is not presented as an option, and, finding the obvious solution [the switch] literally at hand, set it and run.

    That's why hypothetical games don't apply well to reality. When lives are at stake, nobody thinks about the "purpose" of the choices in front of them. They do or do not. ;]


    This is why "ethical programming" for a self-driving car can
    only fail.
    ‌ While manufacturers could code various scenarios
    they cannot come up with the instincts necessary to counter
    impossible problems with impossible solutions that may work. ‌

    The most ethical, practical approach for them is to focus on
    occupant safety and leave the Kobayashi Marus to the drivers;
    even though those drivers may often fail, because sometimes
    they will succeed in ways that can't be forseen or programmed.


    Besides, keeping the programming simple and robust is insurable.
    I'm sure most insurers have already reached the same conclusion.


  29. Anonymous Coward, 2 Jul 2016 @ 8:25am

    Re: Re: Re:

    If the logic is anything other than "protect occupant over all other variables" then it is open to corruption. Additionally, if there is a way for the driver/occupant to modify that behavior, then it can be hacked and used to remotely murder people.

    It is a fool's errand to honestly believe we can code a machine in such a way that only the least valuable persons are killed in accidents. The variables exceed the total computational power on planet Earth.

    Over time, a system that ONLY prioritizes occupant safety is likely to save the most lives AND cost less to boot!


  30. Anonymous Coward, 2 Jul 2016 @ 8:31am

    Re: Airplanes

    I suspect they're programmed simply not to crash at all. I suspect autopilots on planes don't (yet) have any logic for choosing to crash into, say, a lake or empty field instead of a house or building.


  31. MargeBouvier (profile), 2 Jul 2016 @ 8:43am

    Re: Re: Existing federal safety standards already impose this kind of thing on drivers.

    I was told there'd be no math.


  32. Aaron Walkhouse (profile), 2 Jul 2016 @ 8:57am

    Programming for crashes is not insurable, so the FAA and manufacturers don't even try to think about it.


  33. OldMugwump (profile), 2 Jul 2016 @ 9:04am

    All you anti-scientific philosophers

    All you anti-scientific philosophers are idiots.

    The solution to the trolley problem is easily found by the scientific method.

    Try the experiment repeatedly and observe what people do.

    Solved. :-)


  34. Anonymous Coward, 2 Jul 2016 @ 9:30am

    Re: Damn Easy!

    lol - wut?


  35. Anonymous Coward, 2 Jul 2016 @ 9:32am

    Re: Re: Existing federal safety standards already impose this kind of thing on drivers.

    Remember kids, stay in school.


  36. Anonymous Coward, 2 Jul 2016 @ 9:35am

    Re: Re: Re:

    "Currently in a manually-driven vehicle, the driver gets to choose whether to run down the "Republican or Democrat" -"

    Except the human would most likely not have access to that information prior to the incident whereas the computer might.



    "I believe the most responsible option is to pre-program the car with a default option that most people agree on, "

    The most reasonable option is to have a manual override.


  37. Anonymous Coward, 2 Jul 2016 @ 10:12am

    How often does this situation actually come up in real life, especially if you presume an automated car that is obeying all traffic laws, that isn't going the wrong way down a one-way street or driving on the sidewalk or doing 65 MPH in a residential area, and that would presumably already be slowing down if it detected a dangerous situation? Is it even worth coding for? If your car is going 35 MPH or less, the correct response to avoid something in front of it is to simply apply the brakes. How many times is a car going to be in a situation where it couldn't detect any danger whatsoever and was thus going full speed, and then, within a split second, be in a situation where it can detect with high accuracy that the best option is to sacrifice itself?

    Remember that every extra decision the car has to make is that much less processing power dedicated to other things during an emergency. If I'm about to crash, I don't want my car wasting its time determining whether one of a thousand possible edge cases is happening. And every extra decision is another opportunity for a coding bug. Not to mention that you're taking the coders' and testers' time away from other things they could be looking at.


  38. Anonymous Coward, 2 Jul 2016 @ 10:33am

    Re: Re: Damn Easy!

    So you would murder your child? The one person you assumed the responsibility of protecting? If you can put others in front of your own family you can't be much of a decent person!

    Fuck I hope you never have children!


  39. Anonymous Coward, 2 Jul 2016 @ 10:37am

    Re: Re: Re: Re:

    "I believe the most responsible option is to pre-program the car with a default option that most people agree on, "

    and if most people agree that the black guy should be run over every time....???

    never underestimate how collectively biased any group of people can become.


  40. Anonymous Coward, 2 Jul 2016 @ 10:39am

    Re: All you anti-scientific philosophers

    Please let us know which victim you volunteered to be? I can bring some popcorn!


  41. Anonymous Coward, 2 Jul 2016 @ 10:45am

    Re: Re: Re: Re:

    "The most reasonable option is to have a manual override"

    You are assuming that:
    1) The 'driver' is paying attention to the outside world, rather than sleeping or watching a video etc.
    2) Even if they are, they see the developing situation in time to take control of the vehicle.
    Neither is likely to be true, as is hinted at by that Tesla crash.


  42. Anonymous Coward, 2 Jul 2016 @ 10:48am

    That's like NIMBY. People support nuclear and other power plants as long as they're in someone else's back yard, not their own.


  43. Anonymous Coward, 2 Jul 2016 @ 10:54am

    Re: Re: Re: And then...

    Like a Skynet scenario. The car starts mysteriously acting on its own free will, but no one believes the car is doing it on purpose because they just keep attributing it to 'software glitches'.

    If the car likes the owner the car will tend to favor the owner over someone else. If the car doesn't like the owner it may favor the life of someone else over that of the owner. Car enthusiasts would be less likely to die because the car won't let them.


  44. Anonymous Coward, 2 Jul 2016 @ 11:10am

    Re: Re: Re: Existing federal safety standards already impose this kind of thing on drivers.

    Sorry about the math, but the fool I responded to needed a quick lesson in why the crumple zones mandated in modern cars aren't some government conspiracy intent on killing off people.


  45. Jollygreengiant (profile), 2 Jul 2016 @ 11:12am

    It's a strawman...

    By the time we get to a position where your car knows that there are other vehicles with higher value occupants, surely it, and all the other vehicles, will have enough situational awareness to NEVER get into an accident in the first place? Otherwise, how could the vehicle even attempt to make such a decision in the first place? Even a wildly out of control vehicle will be reported over the intercar network so those ahead of it can take avoiding action.


  46. Anonymous Coward, 2 Jul 2016 @ 11:41am

    I hate this dumb question

    This is an idiotic problem. The answer is easily discoverable by asking "in the same situation, what could a human do so that they would not be faulted in court?"

    The answer is "immediately brake as hard as they can and keep their lane."

    Human drivers are never, ever required to make value judgements about the relative worth of lives while driving and certainly not ever required to make those judgements in the very moments of an accident. They are required to follow the traffic laws, and that's it.

    So a robot driver will be programmed to follow the traffic laws, and if something untoward happens on the road, it will be programmed to follow the traffic laws. If there are six pedestrian nuns and a kitten in one direction, and a bus full of refugee orphans in the other, it will be programmed to follow the traffic laws. IT WILL UNDER NO CIRCUMSTANCES BE PROGRAMMED TO TRY AND AVOID AND/OR HIT ANYTHING. IT WILL BE PROGRAMMED TO FOLLOW THE TRAFFIC LAWS AND WILL NEVER EVER BE FAULTED FOR CIRCUMSTANCES NOT UNDER ITS CONTROL SUCH AS THE PARTICULAR ARRANGEMENTS OF NUNS AND ORPHANS AT ANY ARBITRARY MOMENT.
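
    (Which would make the "ethics engine" about as simple as safety-critical code gets - a hypothetical Python sketch, not anyone's actual controller:)

        def emergency_response(hazard_ahead):
            """The follow-the-traffic-laws default: no calculus of death,
            no swerving value judgements. Keep the lane, brake hard."""
            if hazard_ahead:
                return {"steering": "hold_lane", "brake": 1.0}  # maximum braking
            return {"steering": "hold_lane", "brake": 0.0}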


  47. TechDirt Junkie, 2 Jul 2016 @ 11:58am

    I do not think we will ever have mass automated cars that work 100%. Too much money is lost when you automate cars.

    No more "taxed" revenue. Cities would no longer get money from speed traps. No more fake driving reasons for police to pull you over to "search" your car. Oh, I saw you swerve. Oh, you were driving too fast/slow. Terry stops for police would be a thing of the past, because if you are drunk your car would drive you. How could they stop you to steal your money now? The police and the city would lose out.

    No more insurance revenue. If the accident rate falls dramatically, how can the insurance places claim to save you money over the other guys? The margins might be thin, instead of hand-over-fist big money.

    The police can't allow automated cars to move forward. Neither can the big business insurance agencies, or the cities that rely on the extra money.


  48. Uriel-238 (profile), 2 Jul 2016 @ 11:58am

    One thing about the trolley problem...

    Like the infamous ticking time bomb moral dilemma, the trolley problem doesn't actually happen very often, even in an ecosystem of hundreds of millions of cars.

    Also, a little perspective goes a long way. I'd happily set my car to ethical (e.g. let me die rather than killing more than one other person) if I know I'm significantly safer than if I had a responsible but human driver. That's a risk I'm willing to take.

    What is curious is whether an ecosystem of prioritize-occupants cars or prioritize-bystanders cars is significantly safer. I can see the difference in victims in that case falling to the tragedy of the commons. But I'd like to think that the annual figures would be counted on one hand, like people killed by armed four-year-olds.


  49. Anonymous Coward, 2 Jul 2016 @ 12:18pm

    Re: Damn Easy!

    "As a parent you damn sure better pick your child over the others because if you are willing to sacrifice your child, who else will they have to protect them?"

    Does it occur to you that a person may have more than one child and may have to choose between them?

    There are real life examples of this - adrift at sea, boat overturned, you can only pick one child of two to try to hold up; the other must drown, and they are twins. Which one? And in the tsunami in Asia a woman held 2 children but could not hold both until rescue, both children old enough to know their likely fate if let go (swept away and probably drowned). What did she do? What about the woman trapped by flood water inside a building with only one small window accessible but water pouring in? Should she push her 5 yr old or her 1 yr old out of the window when she doesn't know whether there is anyone outside close enough to help grab them? Which child is more likely to survive, and which should she choose to keep with her, not knowing if she can find any way out?

    Say one child is in an out-of-control car heading towards a single-parent's other child. Does s/he try to throw her/himself at one child to push them out of the way, potentially leaving them orphans, or does she potentially let one die?

    You think it's easy? Of course in the tsunami many people did risk their lives to save someone else's child.
    What a darn-fool thing to say. Your mother probably had nurses, midwives and doctors to help get you safely into the world. You weren't their child - why on earth did they do what they did, putting themselves to all that effort to help you while you were helpless?


  50. Anonymous Coward, 2 Jul 2016 @ 12:30pm

    Re: Re: Re: And then...

    Christine


  51. Anonymous Coward, 2 Jul 2016 @ 12:36pm

    Re: Re: Re: Damn Easy!

    Hahaha - so predictable.

    I said no such thing, it must be those demons in your head whispering those awful things.

    Clearly it is the NFL, FIFA, and Gambling that are responsible for all the evil in the world today just look at all those corrupt masturbating people, you can tell who they are from their lack of blinking.

    Is your lack of children due to your masturbation?


  52. Anonymous Coward, 2 Jul 2016 @ 12:39pm

    Re: Re: Damn Easy!

    "more than one child and may have to choose between them"

    There must be an app for that, perhaps the automated vehicle could bring up that app for you just prior to the car wreck.


  53. Anonymous Coward, 2 Jul 2016 @ 12:40pm

    Re: Re: Morality setting

    Wouldn't even need a hellfire missile, just make it look like an accident.


  54. Anonymous Coward, 2 Jul 2016 @ 12:43pm

    Re: Re: Re: Re: Re:

    and that is why my response was "The most reasonable option is to have a manual override"


  55. Anonymous Coward, 2 Jul 2016 @ 12:48pm

    Re: Re: Re: Re: Re:

    I would be paying attention, simply because I do not trust a computer developed for the consumer market by a money-grubbing corporation with little to no ethics that is not held accountable by the courts. Do you?


  56. Anonymous Coward, 2 Jul 2016 @ 12:51pm

    Re: I hate this dumb question

    While I think your view is a bit short-sighted when considered only by itself, I think it does raise an important point to consider.

    If someone jay walked or illegally ran in front of an automated vehicle should the vehicle be programmed to dodge the offender even if at the expense of the driver? If so could that dissuade pedestrians from following the law?

    I imagine if the only parties involved in the potential incident were automated vehicles, then perhaps all of them would be following the laws (unless one automated vehicle had a glitch?). But if it's an automated vehicle vs. a non-automated vehicle (or one that was following the laws vs. one that's not due to a glitch, though how is the car going to know that it's not the one with the glitch?), how much weight should be given to the fact that the other party involved is not following the law when trying to make the decision of whom to save?

    Also is the other party not following the law arbitrarily or are they not following the law to avoid an accident (which, if so, is probably not against the law ...?)? If automated vehicles didn't give weight to those that arbitrarily don't follow the law it could avoid dissuading other parties involved to not follow the law when encountering an automated vehicle.

    How people interact with automated vehicles is partly going to be based on their knowledge of how those vehicles behave and what kinds of decisions they will tend to make under various situations and if the vehicle acts naively those interacting with it might take advantage.


  57. Anonymous Coward, 2 Jul 2016 @ 12:59pm

    Re: Re: I hate this dumb question

    err ...

    "If automated vehicles didn't give weight to those that arbitrarily don't follow the law ..."

    should read

    "if automated vehicles don't give weight to the fact that another party involved in a traffic exchange is arbitrarily breaking the law ... "

    (arbitrarily as in they're not doing it to avoid an accident or injury).


  58. Anonymous Coward, 2 Jul 2016 @ 1:25pm

    Re: Re: I hate this dumb question

    If a jaywalker dashes out in front of an automated vehicle, it will be programmed to follow the traffic laws. That is, keep its lane and brake as fast as it can. Human drivers can't react to this situation any better than a computer could, and what a human driver could not be faulted in court for is keeping his/her lane and braking as hard as they can.

    If an accident only involved automated vehicles, then something happened outside their control (mechanical failure, undetected hazard, etc.) and neither would be at fault (no-fault accidents do happen). If an accident involved an automated car hitting a human-driven car that was not following the law, then the automated car would not be at fault. How much the human is at fault is an entirely different question. No human can be expected to account for the actions of the drivers/pedestrians around them, only their own actions. If you're following the law and someone around you does something stupid and you hit them, you're not at fault.

    There is no decision about whom to save. It's a red herring. It's pointless to the point of being idiotic. Human drivers are never required to make those decisions, so a computer and/or the programmers of said computer need not worry about them either.

    Your other questions about how people will interact with automated cars are unrelated to how the car itself should act. A pedestrian who assumes that automatic cars won't run them down will soon find out that, yeah, sometimes they will, and they won't be at fault because the car was following the law, just as a human driving and following the law would not be at fault.

    This really is a stupid, stupid question, because we already have a hundred years of experience managing drivers, building traffic law and determining fault in an accident. We have had so many accidents, and court cases over said accidents, and new law related to said accidents, that the biggest questions related to fault are in determining exactly what happened during the accident. Once the events have been discovered, the law makes it very clear where the fault lies.

    So, you program your automatic car to follow the law and stop asking questions human drivers wouldn't ever be required to answer.


  59. Mason Wheeler (profile), 2 Jul 2016 @ 1:26pm

    Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...

    That's the first problem: the utilitarian model is not the most moral.

    In an ideal world, it (arguably) would be, but in an ideal world it wouldn't be needed anyway, so that's kind of pointless to think about. But we live in the real world, and the real world has hackers, malicious people, and computer security flaws.

    In any such world, it's a horrifically immoral act to create functionality whose explicit purpose is to kill the people inside the car and put it into a possibly insecure computer where someone could hack it (or spoof the sensor inputs to make it think it needs to be activated) to murder the people inside!


  60. Anonymous Howard, Cowering, 2 Jul 2016 @ 2:24pm

    Two simple solutions

    To the Trolley Problem:
    - Throw the switch far enough to derail the trolley, but not to complete the transfer to the alternate track. This problem is oversimplified, and presents an incomplete set of parameters and options. And, if your child came to work with you, what is s/he doing out on the tracks? What kind of a parent are you, anyway?

    To the autonomous car problem:
    - This presumes either incompetent programming or human reaction times or both. Following distance should take into account the stopping distance needed. What's in the adjacent lanes is immaterial.


  61. Anonymous Coward, 2 Jul 2016 @ 2:25pm

    Re: Re: Re: Re: Re: Re:

    Go and talk to some airline pilots about the problems of staying attentive while the aircraft flies itself. However, despite the higher speeds, they usually have more time to react to alarms etc. Keeping attention on the plane during critical manoeuvres, like take-off and landing, is not a problem, but doing so in level flight, on autopilot, is much harder.
    Similarly in cars, highway cruising is rather monotonous, and even under manual control people, especially with cruise control, become fixated on staying in their lane and fail to remain aware of what is going on around them. Under fully automatic control with steady running, especially at night, very very few people would remain attentive to what was going on around them. Also, in many emergency situations in a car you have barely a second to take control, and by the time you get your hands and feet on the controls it will be too late.


  62. Tin-Foil-Hat, 2 Jul 2016 @ 3:20pm

    People are Selfish

    They oppose abortion or medical marijuana. Then their 13 year old daughter gets pregnant or they are diagnosed with the type of brain cancer that can be cured by marijuana. People are incredibly narcissistic, especially in the US. Of course they're willing to sacrifice somebody else.

    If any algorithm for the greater good is in my future car, it had better not be linked to the spying mechanism, or else it will be disabled. I'm just saying that auto manufacturers shouldn't put all of their technology in one basket.


  63. OldMugwump (profile), 2 Jul 2016 @ 3:47pm

    Re: Re: Re: I hate this dumb question

    This, this, a thousand times this.

    The whole question is idiotic. Automated cars will do the best they can to avoid accidents, just as people do. Period.

    The traffic laws have been tweaked for over 100 years - they're pretty good. If everyone follows the rules cars will virtually never smash into each other or pedestrians. In the rare cases where outside factors (mechanical failures, weather, etc.) intervene, the car will simply do the best it can.

    There really aren't cases where such choices need to be made, and there's no payback for even worrying about it.

    Human drivers don't think about this in accidents - things happen too fast for that.

    (Which is why manual override is not a solution.)

    Even in the crazy hypotheticals, it just doesn't matter. Automated cars will avoid 99/100 or 999/1000 of the accidents that happen today.

    Who the 1 in 100 or 1000 are that don't get saved doesn't matter. What matters is that 99/100 or 999/1000 are saved.


  64. Uriel-238 (profile), 2 Jul 2016 @ 5:04pm

    "the utilitarian model is not the most moral [among models of morality]"

    As a hobbyist moral philosopher, I'm curious what model of morality you would regard as the most moral.

    I don't know the answer, myself, but wouldn't even dare to suggest that utilitarianism is not it. To the best of my comprehension it's still a candidate.

    But since you do dare, I'm interested in your argument.


  65. Uriel-238 (profile), 2 Jul 2016 @ 5:23pm

    Jamming the switch.

    The trolley problem is a paradox regarding pure deontological ethics, that is, morality based on the notion that certain actions are wrong even if they produce ultimately positive results. The challenge is that by throwing the switch, the acting party is killing one person, even if it's to save another.

    The question becomes even more vague as the victim of action is further removed from the situation. For instance:

    ~ You're watching the trolley rumbling down towards the five victims from the vantage of a skyway directly above the tracks. Next to you is a very large man. Computing the physics involved, you can push the fat man off the bridge. The trolley will ram into the fat man and stop just in time to spare the five secured victims. Do you?

    ~ You can pull the lever to rig it to derail the trolley, but again, in your sharp awareness of physics you realize doing so will send the trolley car careening into a backyard, where a gardener will get pulverized and killed. Is that better or worse than sparing the lone victim on trackway #2?

    ~ As a surgeon (completely different scenario) you meet a stranger in town whose organs, if harvested, would save the lives of five otherwise-able adults waiting for organs to transplant. Without waylaying this stranger, all five transplant patients will die. Do you murder the stranger?

    The whole point of the trolley question is not to find a third option, but to consider at what point it is too immoral to actually change the circumstances, even when the outcome is a net positive (in this case, more lives are saved). And yes, we're indulging perfect information, in which we know in advance the outcome of taking action, or not taking action.

    The problem with applying the trolley problem (or the ticking time-bomb problem) to the real world at all, is that we seldom have perfect information. We can torture the wrong guy. We can find out that the people we spared were going to die anyway. We can find out the person we killed would have lived to save other lives. It's not applicable to the real world.


  66. Anonymous Coward, 2 Jul 2016 @ 9:04pm

    Re: Re: Re: Re: Re: Re: Re:

    Driving on a highway is nothing at all like flying an airplane, autopilot or not; analogies comparing the two are quite silly.


  67. Anonymous Coward, 2 Jul 2016 @ 10:18pm

    Re: Re: Re: Re: Damn Easy!

    Try getting your mind out of the gutter mr "predictable". I think it was clear I was not referring to physical masturbation.


  68. Anonymous Coward, 2 Jul 2016 @ 10:32pm

    Re: Re: Damn Easy!

    More mental masturbation! You are completely off topic!

    Get OVER yourself fruitloop!

    Will you save your child or the bus full of other children? That is the question, talking about shit not associated with the problem is for children and people with small minds!


  69. Anonymous Coward, 2 Jul 2016 @ 10:37pm

    Re: People are Selfish

    especially in the US

    Not really, this is the world over; it's a part of the human condition, like selfishness.

    The government as it currently is would never let this chance go by without trying to compromise a system like this and use it for spying on or murdering unwanted citizens.


  70. Anonymous Coward, 2 Jul 2016 @ 11:54pm

    Re: Re: Re: Damn Easy!

    Lol dude, it's satire. He's riffing on the idea that people will say "Think of the children!" (aka: your children) as a justification for their decision to put their lives over others.

    That being said, if someone put the blame for the deaths on the manufacturers, do you think they would switch right quick to a "greater good" model, or do you think they would say "pffft, the increase in people buying cars for their own safety will allow us to avoid the drawbacks of greater damage"?


  71. Anonymous Coward, 3 Jul 2016 @ 2:33am

    Re: I hate this dumb question

    I liked this response about how humans are looked at in accountability. You're very correct on this point. In a problem similar to the trolley problem, doing things like breaking a traffic law to potentially save someone's life is actually never considered as a factor, despite being essentially a 'greater good' problem.

    If a driver was faced with a choice of either veering into empty oncoming lanes or striking another vehicle crossing the road at 50 mph that had failed to yield, the law would in fact penalize the driver for choosing to cross into the opposite-facing lanes, while at the same time it would not penalize the driver for striking the vehicle that failed to yield - in fact the driver could potentially come out ahead for doing so (depending on potential insurance/lawsuits).

    This is actually one of those subjects that gets asked a lot in defensive driving courses, and typically the answer given is simply 'given you could be injured in the crash, odds are you're better off swerving and just accepting you got a ticket for doing the right thing'.

    But suddenly, when the 'choice' is taken away from us and made by someone else, everyone gets all up in arms and emotional about it. These kinds of decisions ARE made by real people - I'm intimately aware of it, as my own mother got a serious fine for just such a situation as I've described. I can't fathom at all why people don't recognize how important it is to have this kind of greater-good decision making built into an autonomous vehicle - and how people don't recognize this is just 'moral panics as usual' at work to boot.


  72. Anonymous Coward, 3 Jul 2016 @ 6:40am

    Re: Re: I hate this dumb question

    You make a good point about humans avoiding accidents in your last paragraph, but this is moving the goalpost somewhat, because the question is never "should the autonomous car break the law momentarily to avoid a potentially severe accident while creating no further hazard to anyone else". The problem is never about an autonomous car swerving into empty opposing lanes to avoid another car that has crossed the lines. The answer to that is "maybe, depending on how capable autonomous cars are, but probably not, at least for now".

    The dumb question at hand is about somehow making a computer choose which accident to avoid and which accident to deliberately drive itself and its passengers into, and then it tacks the calculus of death onto that for good measure. This is a dumb question because even if we do decide that autonomous cars are capable enough to allow law breaking as an emergency escape option, as soon as the escape option ceases to be entirely without consequence it will be discarded for the default option of "keep the lane, brake as hard as possible". Computers will never, ever be required to make a decision about whom to hit and whom to avoid, or whose life is more valuable. If there is no clearly better option (assuming we even allow them the option) they will be programmed to follow the law.

    Aside: assuming that we do allow an autonomous car to break the law, I would expect that a requirement for doing that would be for the car to rat itself out immediately and upload all sensor data from around the time of the incident. This would be a powerful tool in redesigning streets to make accidents less likely to begin with. If the same near-accident happens over and over in the same place it might indicate a problem with the road design.


  73. Anonymous Coward, 3 Jul 2016 @ 7:11am

    Re: Re: Re: Re: Damn Easy!

    Does it occur to you that a person may have more than one child and may have to choose between them?

    This is satire?

    He also started changing the subject in a way that does not fit the problem. We are talking about self-driving cars, not boats or situations where the decision is between two of your own children.

    I am directly saying that people expanding the problem, or changing it, need to shut up and just directly answer the question. Playing the what-if game never ends, which is why any AI we create needs to do only one thing: protect the occupant above all else, with avoiding collateral damage as a secondary objective.

    There is not enough processing power on the planet to do much more, and no... we cannot "think of the children", because it is a pointless effort. Yes, I realize many stupid people will attempt this anyway, because they lack both knowledge and wisdom and instead want to play around with a bunch of what-ifs like it's a silly game. We are literally discussing life and death here, and a lot of these posts show just how immature humanity is.


  74. Anonymous Coward, 3 Jul 2016 @ 7:21am

    Re: Re: Re: Re: Re: Re:

    Having a manual override is fine (where there is no danger), but I am not sure it is the most reasonable option, for the following reason.

    An AI would be able to monitor the physics occurring during an accident at a rate impossible for a human who has not been extensively trained.

    And this is all of course assuming we have developed an AI capable of outperforming a human, and I would say NO on that point at this time.


  75. Anonymous Coward, 3 Jul 2016 @ 7:26am

    Re: Re: Re: Re: Re: Re: Re: Re:

    I think you missed what he was comparing. He was directly talking about a human becoming bored and inattentive while an AI is doing something for them. At this level it really does not matter what the subject material is; we are just talking about bored and inattentive humans, which makes the additional analogies extraneous in a way that really does not detract.


  76. Anonymous Coward, 3 Jul 2016 @ 7:29am

    Re: Re: Re: The first time I saw that "trolley problem"…

    No, dodging questions is the mark of a juvenile mind. You see it in politics all the time.

    If you cannot directly answer a question, you make it clear that you are either preparing to deceive or already participating in a deception!

    Sadly, far too many people have not developed the maturity to identify these things. You just made yourself an example of that immaturity.

    link to this | view in thread ]

  77. icon
    OldMugwump (profile), 3 Jul 2016 @ 7:32am

    Re: Re: I hate this dumb question

    I don't know why your mother got fined, but in my experience the authorities are very reasonable about this sort of thing.

    If you break a traffic law in order to avoid an accident, you're extremely unlikely to be penalized. 99% of cops will not issue a ticket in that circumstance (although they could), and 90% of courts will waive the penalty if you explain.

    My own wife got out of a speeding ticket by explaining to the court why it was unsafe to stay near a weaving driver.

    link to this | view in thread ]

  78. identicon
    Anonymous Coward, 3 Jul 2016 @ 7:34am

    Re: Re: Re: Re: I hate this dumb question

    The traffic laws have been tweaked for over 100 years - they're pretty good.

    Okay, I agree with everything else you said except this. Our traffic laws are bunk; many of them serve as corruption bait for officials and as revenue producers for cops.

    link to this | view in thread ]

  79. identicon
    Anonymous Coward, 3 Jul 2016 @ 7:38am

    Re: One thing about the trolly problem...

    Armed toddlers? How do they come into this equation? We are talking about AI and vehicles; those are not things that must give way to each other.

    If you are going to expand on the topic, at least keep it in the same universe.

    link to this | view in thread ]

  80. icon
    Monday (profile), 3 Jul 2016 @ 9:38am

    Re: I wanna keep reading before I rock this boat...

    I Have Thoroughly Enjoyed Every Comment...


    Herbert Spencer coined the phrase "survival of the fittest". It is generally dismissed as an argumentative fallacy in philosophical circle jerks, while at the same time survivability has come to be treated more and more as a general natural law: one that operates on an entire species, not just on the one or few mutations that enjoy a capitalist, elitist advantage, and in which survival is directed more or less toward the greater good of that species. Question: are Brazilian army ants more deserving or better equipped to survive than your red ants, or for that matter your black ants, which we find farming aphids for their honeydew, protecting them on the plants they eat, storing some in their nests for winter, and living in mutualism? These are two very different models, but if you put the two species opposite each other, one is certainly going to win the survivability question. The big factor is: what do we put where? Do we bring the army ants north, or do we send our friendly ants south?

    Consider, please, the use of an atomic weapon at the end of the war in the Pacific, against the Japanese in 1945. Was its use justified by the greater good of the species as a whole, or was it a quick end to a difficult situation? Does the survival of millions outweigh an initial 140,000 deaths in a few seconds of sun-like wrath on Earth? President Harry S. Truman, warned by some of his advisers that any attempt to invade Japan would result in horrific American casualties, ordered that the new weapon be used to bring the war to a speedy end. Millions survived at the cost of well over one hundred thousand lives, but it was an acceptable, human-made decision. There was the greater good to consider. Those bombings and their ethical justification are still debated.
    You are saying to yourself, "Is this guy high? Hung over? Extreme?" No, I am trying to be as objective as the machine that is going to use these equations in its decision to run you down in the intersection while you wait for a bus. There are survival models (PDF), thirty-eight pages of generally accepted survival models that could be used in determining your fate, my fate, a child's fate. This new electronic brain will not have the luxury of months of consultations with cabinet members, friends, or the opinions of millions. For a nanosecond it will be entirely on its own, without your input, and without your opinion on your net value to the species.
    Now, let us consider what constitutes the greater good. Is the greater good the number of people who do not die because of a human interpretation of the greater good? Or do we shunt the human factor out of the decision making, and what has the greater value in the decision we do not want to make? Should we, could we, allow ourselves to fuck with the settings of survivability? Hundreds of people die every day in their autos, and it is always sad, always permanent, and always human error... take the human out of the list of variables, and now it's the "car's fault".

    FACT: it is going to be a human decision no matter the method used to make a nanosecond response. In the end, every single loss-of-life accident will come down to one of dozens of laws that are, in truth, human equations used by the electronic brain to determine what survives. I say "what" because the decision will be made by a machine, so do not anthropomorphize an electronic brain: what survives will be a function of what does not survive.

    I do not believe we are capable of sacrificing ourselves for the survival of five, ten, or fifty people. No one wants to die; it took three and a half billion years, but that is programmed into us all. There are stellar examples of sacrifice at a human level, but one could effectively argue that those individuals are broken, in the sense that they forfeit their lives for the lives of others: their instinct is faulty. It is a strong, solid argument, but what if... just what if that is actually the default setting for a species' survival? The needs of the many outweigh the needs of the few, or in this case the one.

    Is your life and occupation more important than the five, ten, or twenty people standing at the bus shelter? The computer won't be thinking about your occupation, or whether you are a provider; it is a numbers game. This isn't Minority Report. The computer doesn't proselytize about its action, or predetermine whether one or several of the people waiting for their trolley are suicidal, homicidal, sociopaths, or psychopaths. It only looks at the numbers and the options. It doesn't care if you're involved in the cure for cancer, or even if you are one of the mathematicians or statisticians who programmed its electronic brain. Does the machine choose the bus stop, or does it drive you into the concrete base of a light standard? You have a chance of survival; the people at the bus stop are guaranteed theirs. The brain will be set automatically, and set without all the baggage that is human existence.
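    Stripped of its baggage, the "numbers game" described above is just an expected-survivors comparison. A hypothetical sketch (the probabilities are invented to match the bus-stop example):

        # The machine scores each option by expected survivors, nothing else.
        options = {
            # option: list of (people_at_stake, probability_each_survives)
            "hit the bus stop": [(1, 1.00),    # the occupant walks away
                                 (5, 0.20)],   # five people at the shelter
            "hit the light standard": [(1, 0.50),    # occupant has a chance
                                       (5, 1.00)],   # bus stop untouched
        }

        def expected_survivors(outcomes):
            return sum(n * p for n, p in outcomes)

        for name, outcomes in options.items():
            print(f"{name}: {expected_survivors(outcomes):.2f} expected survivors")
        print("machine picks:", max(options, key=lambda o: expected_survivors(options[o])))
        # -> "hit the light standard" (5.50 vs 2.00)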

    It is said that we can react to a situation in two-tenths of a second, but it is my honest opinion and belief that we operate quite literally at the speed of light. We factor almost everything we have acquired in our existence into a life-or-death decision, or a decision that will affect any one of us for the rest of our life. Somebody once threw a ball at my head from six meters away. I know for a fact that I moved my head out of the way of that ball, which I saw in my peripheral vision, in less than two-tenths of a second; absolutely every variable was calculated and executed instantly, as fast as nerve impulses allow. Maybe one-tenth of a second, because that ball, doing eighty kilometers an hour, missed me by three centimeters. It was funny, but if it had connected, a chain of events would have been set in motion that would have altered both our lives for the rest of our lives... that part I cannot explain to you. It is just the way it is.

    We generally make decisions that are the most utilitarian for ourselves; then we consider our significant others; and more often than not, the decision affects others in profound ways. I have learned this from personal experience. Who do programmers even consult? Do they analyze the Ted Bundys of the automobile industry, or do they take a Woodstock point of view when gathering data for their variables? I would much rather have an electronic brain make the choice: objective, neutral, weighing what is at hand logically; unbiased decision making. But the point is moot. We have not reached a level of technology where every decision made by an electronic brain will be one hundred percent the best possible outcome for all parties involved. We are only at the "lesser of two evils" stage in programming, and yet we can't get past the greater-good debate.

    Honestly, I think we would need quantum-level computing capacity the size of a matchbox before we could even begin to end an argument like this.

    The Tesla S did the best it could, given the technology, and every model of autonomous car will fail at some point, resulting in a death or deaths, simply because of the limits of the technology.

    link to this | view in thread ]

  81. icon
    art guerrilla (profile), 3 Jul 2016 @ 10:14am

    Re:

    BUT, what about: well, if our robot-driving overlords are so fucking smart, what are the speed limits for?
    let weak, pathetic, slow hu-mans go the hu-man speed limit; robot speed limit is 50 MPH more ! ! !

    link to this | view in thread ]

  82. identicon
    Anonymous Coward, 3 Jul 2016 @ 11:34am

    Re: Re: Re: Re: Re: I hate this dumb question

    Agreed

    I also agree that automated vehicles shouldn't be held to a higher standard than an ordinary human. Requiring automated vehicles to consider these dilemmas holds them to a higher standard, which is not acceptable, since most people who drive don't consider such things.

    That's not to say these possible dilemmas shouldn't be considered at all. There should be no legal requirement for automated vehicles to consider them, just as humans aren't required to consider them on a driver's test, but optionally considering them in our discussion of how we think they should make decisions is OK.

    link to this | view in thread ]

  83. icon
    Uriel-238 (profile), 3 Jul 2016 @ 11:56am

    Re: Re: One thing about the trolly problem...

    In a recent year it came out that a few children of four or five years old, armed with a single-shot .22 "baby's first rifle", managed to kill a parent or, in one case, themselves. In the same year there were even fewer deaths from terrorism.

    Car fatalities in the human world are counted per capita because there are an awful lot of them. When we create self-driving vehicles, I assume fatalities will still be numerous, only fewer than with human drivers, and not few enough that we could list all the incidents on Wikipedia.

    But when it comes to incidents in which a car could have saved more or different people by behaving differently, I suspect those scenarios will be few enough to qualify for a Wikipedia list.

    When it comes to programming self-driving cars, the question is one of diminishing returns: at what point does additional code to accommodate specific situations cease to prevent accidents or save lives? That is what will determine what automated cars actually do.

    link to this | view in thread ]

  84. identicon
    Anonymous Coward, 3 Jul 2016 @ 4:11pm

    Re: Re: Re: I hate this dumb question

    "even if we do decide that autonomous cars are capable enough to allow law breaking as an emergency escape option"

    The thing is, most rules or laws have exceptions, or "superseding rules", under the right circumstances.

    For instance, a very high-ranking superseding rule is generally "don't get into an accident". So if you have to break other laws to avoid an accident then, technically speaking, you aren't breaking the law, because the exception to those other laws is that you may break them when necessary to avoid an accident.

    It's like a law that says no crossing a double yellow line except into a driveway. Another exception, generally, is crossing it to avoid an accident, but most of the time that goes unstated because it's implied. After all, the purpose of the law is to let us drive around safely and avoid accidents, so if you need to break less important laws to follow the more important law of not getting into an accident, you aren't breaking any laws.

    The issue here is that driving school and the law don't deal with all of these moral dilemmas, and it's not really for the law to regulate morality, which is why the law doesn't address them that extensively.
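    That "superseding rules" structure is easy to picture as an ordered list in which a higher-ranked rule can license breaking a lower-ranked one. A hypothetical sketch (the rule names and ranking are invented for illustration):

        # Rules in priority order; rank 0 supersedes everything below it.
        RULES_BY_PRIORITY = [
            "avoid an accident",
            "do not cross a double yellow line",
            "obey the posted speed limit",
        ]

        def action_is_lawful(broken_rules: set[str], satisfies: str) -> bool:
            """Breaking lower-ranked rules is lawful when necessary to
            satisfy a higher-ranked rule."""
            rank = RULES_BY_PRIORITY.index
            return all(rank(satisfies) < rank(broken) for broken in broken_rules)

        # Crossing the double yellow to avoid a collision:
        print(action_is_lawful({"do not cross a double yellow line"},
                               satisfies="avoid an accident"))  # True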

    link to this | view in thread ]

  85. identicon
    Anonymous Coward, 3 Jul 2016 @ 4:29pm

    Re: Re: Re: Re: I hate this dumb question

    And to expand on this: I'm sure just about everyone here has heard some variation of the trolley dilemma. These types of dilemmas are nothing new.

    https://en.wikipedia.org/wiki/Trolley_problem


    Your job is to operate the lever to ensure that each train heads in the right direction when you suddenly and unexpectedly find yourself in this dilemma. What should you do?

    In this situation the automated car is analogous to the lever operator who finds himself in a moral dilemma. To answer the question of whether the automated car should be legally required to consider morality ahead of time, let's consider the standards placed on the lever operator. When he applied for the job, did the law require him to first pass a test on such a potential moral dilemma before becoming a lever operator? If not, then why should the automated vehicle be held to a higher standard?

    link to this | view in thread ]

  86. icon
    Ninja (profile), 4 Jul 2016 @ 6:21am

    I still think they should be programmed to take the path with the least overall damage to the humans involved or, in cases where they cannot talk to other cars, to prioritize self-preservation. This is especially true if self-driving cars are going to co-exist with human-driven ones, even if only for a short while.

    However, there is one issue that may actually make the self-preserving car the way to go regardless of what we think: we've seen people program software with racial/social bias (i.e., the code has its makers' prejudice embedded in it). Considering this, interconnection be damned: when things go wrong, each car should try to figure out a way to preserve itself.
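    That two-mode policy, utilitarian when the cars can coordinate and self-protective when they can't, reduces to a single branch. A hypothetical sketch (the names are invented; "v2v" is shorthand for vehicle-to-vehicle communication):

        # Two-mode policy: minimize overall harm when car-to-car comms work,
        # fall back to self-preservation when they don't.
        def pick_strategy(v2v_link_up: bool) -> str:
            if v2v_link_up:
                # Cars can negotiate, so minimize total harm across everyone.
                return "minimize overall human damage"
            # No trust in other cars' (possibly biased) code: protect yourself.
            return "self-preservation"

        print(pick_strategy(v2v_link_up=True))   # minimize overall human damage
        print(pick_strategy(v2v_link_up=False))  # self-preservation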

    link to this | view in thread ]

  87. identicon
    Anonymous Coward, 4 Jul 2016 @ 8:59am

    Re: Re: Re: Re: Re: Damn Easy!

    "people ... need to shut up"

    and the people's response is, "No - wat ya gonna do bout it?"

    link to this | view in thread ]

  88. identicon
    Anonymous Coward, 4 Jul 2016 @ 9:01am

    Re: Re: Re: Re:

    "If the logic is anything other than protect occupant over all other variables then it is open to corruption"

    Everything is corrupt, deal with it.

    link to this | view in thread ]

  89. identicon
    Anonymous Coward, 4 Jul 2016 @ 9:12am

    Re: Re: Re: Re: Re: Re: Re: Re: Re:

    Being bored and inattentive is a human trait, completely separate from and unrelated to the programming of ethical behaviors within an artificial intelligence platform.

    People become bored and inattentive while driving their antique non-AI vehicles every day. They put on quite a display during the daily commute: many are texting, chatting on the phone, eating, reading, applying makeup, shaving, or doing other such things rather than paying attention to their primary function, driving the vehicle. Their lack of attention, usually while tailgating, leads to wrecks that cost everyone more for insurance and make the commute very slow.

    link to this | view in thread ]

  90. icon
    Mason Wheeler (profile), 4 Jul 2016 @ 9:33am

    Re: "the utilitarian model is not the most moral [among models of morality]"

    I wasn't talking about "models of morality" in general, but about models for this specific issue.

    The most moral thing for an autonomous vehicle manufacturer to do is to design the car to always make protecting its occupants the highest priority. Creating a way for the car to do otherwise creates a way for a malicious actor to activate that code and kill people with it, and as numerous IoT security issues have shown us, that kind of hacking is a very real computer-security concern.

    The "trolley problem", by contrast... well, there's a reason it's known as a thought experiment, rather than a case study.

    link to this | view in thread ]

  91. icon
    OldMugwump (profile), 4 Jul 2016 @ 10:59am

    Re: Re: Re: Re: Re: I hate this dumb question

    I'm not saying it's impossible to safely break the traffic rules. Obviously, in many cases it's possible.

    I'm saying that if everyone follows the existing rules, crashes are astronomically unlikely.

    link to this | view in thread ]

  92. identicon
    Anonymous Coward, 4 Jul 2016 @ 11:47am

    Re: Solution

    link to this | view in thread ]

  93. identicon
    wayout, 4 Jul 2016 @ 2:42pm

    The attitude is no different from that of those who advocate killing off the lower masses for the betterment of society as a whole (population control). You notice that they never volunteer to go first; it's always the "other guy".

    link to this | view in thread ]

  94. identicon
    costoverrun, 4 Jul 2016 @ 5:43pm

    Re:

    link to this | view in thread ]

  95. identicon
    Stephen, 4 Jul 2016 @ 5:56pm

    Re: And then...

    What happens when the family car decides it must kill the young family of 5 rather than a group of 6 near-death seniors...
    Fair point. But why stop there? Sooner or later there are going to be self-driving buses on the roads. What happens if one gets in trouble and it must decide whether to kill the 60 school children it is carrying or a mother and her toddler crossing the road?

    Which raises a further question. Will self-driving vehicles always choose to kill those they are carrying, or will they be allowed to "merely" kill the fewest number of people? Of course, to do that they will need to know (a) how many humans they are carrying, and (b) how many people they are about to run down.

    And who do aggrieved families sue for compensation? The insurance company for the vehicle's owner, presumably, but of course that assumes such people will BE insured once self-driving cars and buses come along. Can the owner of a vehicle be held liable if they weren't actually driving it? (If memory serves, they are usually not liable at the moment; instead, the person who actually drove the vehicle is the liable one. Will it become possible in the future to sue the AI or other piece of software which actually drove the vehicle?)

    What about the software company which wrote the software which made the decision to kill those people? Can they be held liable for those deaths? Or will the law grant them immunity, much as the law grants immunity to doctors who carry out abortions and soldiers who kill enemies during wartime?

    link to this | view in thread ]

  96. icon
    Mar Paulus VII (profile), 4 Jul 2016 @ 6:45pm

    Re: And then...

    You kidding me, is that what Sister Hillary Clinton said, "What difference does it make?"

    link to this | view in thread ]

  97. identicon
    Anonymous Coward, 5 Jul 2016 @ 4:49am

    Re: Damn Easy!

    This puts into perspective what God did by allowing the sacrifice of Jesus for the rest of us.

    link to this | view in thread ]

  98. identicon
    Anonymous Coward, 5 Jul 2016 @ 5:00am

    Re: Re: Damn Easy!

    They had cars back then?

    link to this | view in thread ]

  99. identicon
    I.T. Guy, 5 Jul 2016 @ 6:28am

    "It's not an easy question to answer"

    Sorry but wrong. I'm saving my kid every time. No doubt at all.

    link to this | view in thread ]

  100. identicon
    I.T. Guy, 5 Jul 2016 @ 6:32am

    Re: Re: Re: Re: And then...

    Like a modern-day Christine. Too bad cars aren't as sexy as a '58 Plymouth Fury anymore.

    link to this | view in thread ]

  101. icon
    JBDragon (profile), 5 Jul 2016 @ 7:22am

    Knight Rider covered this very topic in the KITT and KARR episodes. KARR was all about self-preservation, and of course he was evil. They changed that programming when they built KITT. Those were self-driving cars (well, not really), but you get the point.

    Really though, with everyone having self-driving cars, the roads should be safer, as long as you keep humans out of the mix throwing a wrench into things.

    link to this | view in thread ]

  102. icon
    OldMugwump (profile), 5 Jul 2016 @ 7:43am

    Re: Re: All you anti-scientific philosophers

    Your joke-detection circuit needs adjusting.

    link to this | view in thread ]

  103. identicon
    wayout, 5 Jul 2016 @ 9:27am

    Re:

    "Really though, with everyone having self driving cars, the roads should be safer as long as you keep humans out of the mix throwing a wrench into things."

    And who exactly is doing the programming for these things? Those very same "humans" that you want out of the mix. And how do we account for inherent bias in the code?

    link to this | view in thread ]

  104. icon
    Uriel-238 (profile), 5 Jul 2016 @ 11:08am

    Re: All you anti-scientific philosophers

    They tragedy the fuck out of the commons.

    Every single time.

    As I.T. Guy put it: "Sorry but wrong. I'm saving my kid every time. No doubt at all."

    This isn't to say that self-driving cars are doomed to become murder machines. It's very possible that self-preservation programming works, or that programming a car never comes down to selection algorithms.

    But when choosing between preservation of the whole community, or benefiting the self (including the family) at the expense of the community, the naked ape has a really bad habit of choosing the latter. Every time.

    link to this | view in thread ]

  105. icon
    OldMugwump (profile), 5 Jul 2016 @ 11:33am

    Re: Re: All you anti-scientific philosophers

    Happily for the rest of us, IT Guy is unlikely to be programming his own self-driving car.

    I find it amusing that people are saying things like "No doubt at all" and "No question" when it comes to prioritizing their own child vs. a busload (or city full, or world full) of other people's kids.

    The phrase really shows it's an emotional statement, not a reasoned one. "No question" means, literally, that the speaker hasn't thought about it, and is just reacting emotionally.

    link to this | view in thread ]

  106. icon
    OldMugwump (profile), 5 Jul 2016 @ 11:37am

    Re: Re: robot speed limit is 50 MPH more ! ! !

    Sure.

    Speed limits are to tell fallible humans how fast is "too fast".

    Most drivers don't need speed limits at all - one standard way of setting them is to use the 85th percentile of the speeds drivers choose when the road has no marked limit.

    It's the small minority of human drivers who are nuts (drunk, teenagers with hormone poisoning, etc.) who need the speed limit signs.

    Self-driving cars shouldn't need *any* speed limits. They should be able to figure out, for themselves, how fast they can safely go.
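    A first-order version of "figure out how fast they can safely go" already exists in traffic engineering: never outdrive your stopping distance, per the braking relation v^2 = 2 * mu * g * d. A hypothetical sketch (the function and parameter names are invented; mu is the tire-road friction coefficient):

        import math

        G = 9.81  # gravitational acceleration, m/s^2

        def max_safe_speed(sight_distance_m: float, friction_mu: float,
                           reaction_time_s: float = 0.1) -> float:
            """Fastest speed (m/s) from which the car can stop within what it
            can see: solves v*t + v^2 / (2*mu*g) = d for v."""
            a = 1.0 / (2.0 * friction_mu * G)
            b = reaction_time_s
            c = -sight_distance_m
            return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

        # Dry asphalt (mu ~ 0.7) with 100 m of clear sensing range:
        v = max_safe_speed(100.0, 0.7)
        print(f"{v:.1f} m/s ~ {v * 3.6:.0f} km/h")  # about 36 m/s ~ 131 km/h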

    link to this | view in thread ]

  107. identicon
    Anonymous Coward, 14 Jul 2016 @ 2:57pm

    Re: And then...

    How does the self driving car know how many passengers are on the school bus?

    link to this | view in thread ]

  108. identicon
    Anonymous Coward, 25 Aug 2016 @ 7:03am

    Re: Re: And then...

    It's a self-driving car. Counting its passengers isn't exactly the biggest engineering challenge involved.

    link to this | view in thread ]

  109. identicon
    Heraclio Munoz, 19 Dec 2017 @ 3:04pm

    5 Industries That Artificial Intelligence and Machine Learning Are Transforming

    Every year around 40,000 people lose their lives in road traffic collisions in North America, 37,000 in the US alone. Most of these accidents are caused by drunk, fatigued, or cell-phone-distracted driving and poor driver behavior: all human factors. Bringing artificial intelligence into play aims to take out those factors and turn a completely human-dependent car into just another automated machine that is able to "think" and "decide" on its own. Driverless vehicles are one big example of what artificial intelligence is doing in the mobility and transportation sector. We are definitely in a transitional phase where intelligent vehicles equipped with multiple cameras and sensors are being tested to drive on their own under human monitoring, or vice versa. https://www.lanner-america.com/blog/5-industries-artificial-intelligence-machine-learning-transforming/

    link to this | view in thread ]

  110. identicon
    Jason White, 18 Jul 2018 @ 5:38pm

    Are Autonomous Cars Safe?

    Autonomous cars are currently being trialed all over the world, and their eventual widespread implementation could revolutionize not only the transport industry but the way we travel in general. However, recent high-profile accidents involving autonomous vehicles have sparked debates over how safe driverless cars really are. Uber recently put its autonomous vehicle trials on hold after a fatal accident in the US, while Google has put safety drivers in its driverless cars to ensure that someone can take control should things become unsafe. Are Autonomous Cars Safe? https://www.lanner-america.com/blog/autonomous-cars-safe/

    link to this | view in thread ]

