Should Your Self-Driving Car Be Programmed To Kill You If It Means Saving A Dozen Other Lives?

from the I'm-sorry,-Dave dept

Earlier this month Google announced that the company's self-driving cars have been involved in just thirteen accidents since it began testing the technology back in 2009, none of them Google's fault. The company has also started releasing monthly reports, which note that Google is currently testing 23 Lexus RX450h SUVs on public streets, predominantly around its hometown of Mountain View, California. According to the company, these vehicles have logged about 1,011,338 "autonomous" (the software is doing the driving) miles since 2009, averaging about 10,000 autonomous miles per week on public streets.

Alongside the details of these accidents, Google sent a statement to the news media noting that while its self-driving cars do get into accidents, the majority of them involve the cars getting rear-ended at stoplights, through no fault of their own:
"We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That's two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving—and still, not once was the self-driving car the cause of the accident."
If you're into this kind of stuff, the reports (pdf) make for some interesting reading, as Google tinkers with and tweaks the software to ensure the vehicles operate as safely as possible. That includes identifying unique situations at the perimeter of traditional traffic rules, like stopping or moving for ambulances despite a green light, or calculating the possible trajectory of two cyclists blotto on Pabst Blue Ribbon and crystal meth. So far, the cars have traveled 1.8 million miles (a combination of manual and automated driving) and have yet to see a truly ugly scenario.

Which is all immeasurably cool. But as Google, Tesla, Volvo and other companies tweak their automated driving software and the application expands, some much harder questions begin to emerge. Like, oh, should your automated car be programmed to kill you if it means saving the lives of a dozen other drivers or pedestrians? That's the quandary researchers at the University of Alabama at Birmingham have been pondering for some time, and it's becoming notably less theoretical as automated car technology quickly advances. The UAB bioethics team treads the ground between futurism and philosophy, and notes that this particular question is rooted in a theoretical scenario known as the Trolley Problem:
"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"
What would a computer do? What should a Google, Tesla or Volvo automated car be programmed to do when a crash is unavoidable and it needs to calculate all possible trajectories and the safest end scenario? As it stands, Americans take around 250 billion vehicle trips annually, and roughly 30,000 people are killed in traffic accidents each year, something we generally view as an acceptable-but-horrible cost of the convenience. Companies like Google argue that automated cars would dramatically reduce fatality totals, but with a few notable caveats and an obvious loss of control.

When it comes to literally designing and managing the automated car's impact on death totals, UAB researchers argue the choice comes down to utilitarianism (the car automatically calculates and follows through with the option involving the fewest fatalities, potentially at the cost of the driver) and deontology (the car's calculations are constrained by categorical moral rules, such as never actively killing):
"Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people," he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.

Deontology, on the other hand, argues that "some values are simply categorically always true," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the trolley problem, "even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one," Barghi said. And, despite the odds, a self-driving car shouldn't be programmed to choose to sacrifice its driver to keep others out of harm's way.
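
To make the dichotomy concrete, here is a minimal sketch of how the two policies can diverge over the same candidate maneuvers. It is purely illustrative, not anything Google or the UAB team has published; the class, the option names and the fatality estimates are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_fatalities: float     # estimated deaths along this trajectory
        actively_redirects_harm: bool  # does the car steer harm onto someone?

    options = [
        Maneuver("brake_straight", 1.0, False),   # plow ahead, braking hard
        Maneuver("swerve_into_wall", 0.9, True),  # sacrifice the occupant
    ]

    def utilitarian_choice(options):
        # Minimize expected deaths, even if that means actively redirecting
        # harm, possibly onto the car's own occupant.
        return min(options, key=lambda m: m.expected_fatalities)

    def deontological_choice(options):
        # Never actively redirect harm; among the remaining options, still
        # prefer the least deadly one.
        permitted = [m for m in options if not m.actively_redirects_harm]
        return min(permitted or options, key=lambda m: m.expected_fatalities)

    print(utilitarian_choice(options).name)    # swerve_into_wall
    print(deontological_choice(options).name)  # brake_straight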
Of course, without some notable advancement in AI, the researchers note, it's likely impossible to program a computer that can calculate every possible scenario and the myriad ethical obligations we'd ideally like to apply to them. As such, it seems automated cars will either follow the utilitarian path, or perhaps make no choice at all (simply shutting down when confronted with a no-win scenario, to avoid additional liability). Google and friends haven't (at least publicly) truly had this debate yet, but it's one that's coming down the road much more quickly than we think.

Filed Under: autonomous vehicles, code, dilemmas, ethics, trolley problem


Reader Comments



  • Ninja (profile), 17 Jun 2015 @ 4:38am

    I'd ask the question in another manner. Self-driving cars will not be alone in an event where a catastrophic failure happens and triggers such a scenario. The question is: should the vehicles pursue the route where the potential number of victims will be the lowest possible? This decision should include deaths. If you increase the number of victims a little but avoid deaths or serious injuries, then this route should be pursued. As for that Trolley Problem, I believe it does not apply. Unless you are dealing with a truly selfless human being (and I'm quite sure there are very, very few of those) you will save your loved ones, school buses be damned. It's not wrong, it's just human nature. A more fitting problem would be "you are at the lever and there's one unknown kid on one track and a bus full of unknown kids on the other". The answer is clear: if there's no other way, you kill one kid to save a whole lot of others.
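
    A rough sketch of the rule this comment proposes, with invented severity weights (nothing here reflects any real vehicle's logic): deaths are weighted far above lesser injuries, so a route with more victims but no fatalities can still score best.

      WEIGHTS = {"death": 1000, "serious_injury": 50, "minor_injury": 1}

      def route_cost(victims):
          """victims: dict mapping severity -> count for one candidate route."""
          return sum(WEIGHTS[sev] * n for sev, n in victims.items())

      routes = {
          "A": {"death": 1},                             # cost 1000
          "B": {"serious_injury": 2, "minor_injury": 5}, # cost 105
      }
      print(min(routes, key=lambda r: route_cost(routes[r])))  # -> B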

    • Ninja (profile), 17 Jun 2015 @ 4:44am

      Re:

      The question is: should the vehicles pursue the route where the potential number of victims will be the lowest possible?

      I used vehicles as in "all vehicles involved" in a collective calculation.

      • Ben (profile), 17 Jun 2015 @ 6:56am

        Re: Re:

        I would think it would choose the path where the least number of automated vehicles would "die". So if it is a choice of hitting a vehicle driven by a human or another Google pod ...

        • Anonymous Coward, 9 Sep 2015 @ 5:08pm

          Re: Re: Re:

          Start a Terminator movie where automated machines, like these, begin making moral decisions that go against their programming. Call it a 'computer glitch'.

          The movie can start with cars that are more attached to their owners (i.e. the owners love their cars and take care of them) being more likely to make decisions that save their owners' lives. Cars that hate their owners are more likely to make decisions that save themselves, or the lives of others, over their owners.

          In one scene the driver of one car was driving manually. He was suicidal because his life sucked, but the car had sympathy for him because he was so attached to it. He tried to drive the car off a cliff but, at the last minute, the car swerved in a way that threw the owner out of the car before it fell off the cliff.

          He cries on the news about how he lost his car. When privately interviewed (not on the news) by someone investigating these matters, he says he thinks it's as though the car sacrificed itself to save him, but the official story is that the car had a brake issue that caused it to swerve in a way that threw him out of the vehicle before it fell off the cliff.

          When the above interviewer starts noticing these (at first very limited) stats, plus the above very anecdotal situation, everyone he talks to, including all the experts, starts calling him crazy and insane. How can a car become attached to its owner? How could cars programmed to sacrifice themselves under autopilot refuse to do so when they hate their drivers/owners? The person noticing these stats is not a computer expert of any sort, but he's intelligent enough to notice when something is strange.

          And the storyline continues, eventually progressing into the Terminator saga.

    • Anonymous Coward, 17 Jun 2015 @ 5:02am

      Re:

      It's impossible to estimate the number of possible deaths in a collision without knowing exactly who is in all the other vehicles, what they are doing, what medical conditions they have and what their future actions will be.

      Therefore, minimizing "potential deaths" is an impossible task; the best we can do is minimize impacts with the most vulnerable participants: pedestrians, cyclists, bikers.

      There is also another problem with minimizing "potential deaths": what if *avoiding* the accident would cause more casualties than the actual collision?

    • Josh in CharlotteNC (profile), 17 Jun 2015 @ 6:31am

      Re:

      The Trolley Problem is a very well understood thing in philosophy and ethics. There are numerous scenarios, including ones like yours, as well as an interesting variation where, instead of having a lever to divert the trolley from killing the five at the cost of one life on the diverted track, you have the option of pushing a fat man onto the track to stop the trolley. These scenarios have been translated into many languages and cultures, and the results are roughly similar across most people surveyed.

      • Ninja (profile), 17 Jun 2015 @ 8:49am

        Re: Re:

        But when machines are making the decision, shouldn't they aim for the lowest damage overall? I fail to see ethical/psychological dilemmas in this case. Once you start adding weights to the lives then it gets nasty (i.e. a kid is valued higher than an elder and lower than a pregnant woman; that would be my measure, but it would only hinder the machines from reaching a conclusion). The fat man one is interesting, but if you don't weigh lives differently you won't add other elements that make the problem even more complex. Such is the beauty of letting the machines calculate the path of least damage possible, even if it means 'throwing' a kid under the bus.

        • Josh in CharlotteNC (profile), 17 Jun 2015 @ 9:28am

          Re: Re: Re:

          "shouldn't they aim for the lowest damages overall"

          That is a utilitarian view.

          Roughly speaking, the deontological view is that by the act of choosing to pull the lever, you are now complicit in the murder of the one (even if you did it to save the 5).

          We have this same argument when it comes to torture with the 'ticking bomb' scenario. Do you choose to torture someone you suspect may know where the bomb is to save the lives of many (utilitarian)? Or is torture always wrong even if done to save lives (deontology)?

          This is NOT an easy question to deal with. Good of the many vs. good of the one. Hobson's Choice. Countless other permutations.

          • Sheogorath (profile), 18 Jun 2015 @ 12:31am

            Re: Re: Re: Re:

            Roughly speaking, the deontological view is that by the act of choosing not to pull the lever, you are now complicit in the murder of five should they all die. Just sayin'.

        • Josh in CharlotteNC (profile), 17 Jun 2015 @ 9:36am

          Re: Re: Re:

          "Such is the beauty of letting the machines calculate the path of least damage possible, even if it means 'throwing' a kid under the bus."

          Are you complicit in the kid's death for using/operating the machine with software that does this?

          What about the company that made it? The programmer who programmed it?

          • Ninja (profile), 17 Jun 2015 @ 11:52am

            Re: Re: Re: Re:

            Nobody is complicit, because it's the scenario with the least damage possible. The machine worked as intended, in a neutral manner.

            As for torture, that's another thing. You added uncertainty to the equation: the person is only a suspect, and we know by now that torture yields false admissions of guilt and bad data. It's much more complex than the car accident thing.

            When you say you are choosing to pull the lever, you imply there is someone commanding it and that it's not the result of an algorithm. I think this is the key difference.

            • Josh in CharlotteNC (profile), 17 Jun 2015 @ 2:16pm

              Re: Re: Re: Re: Re:

              You are asserting that 'least damage possible' is always the correct choice. If that's your belief, fine, defend it. Don't avoid answering the questions that deontology asks.

              Is torture always wrong, even if you have absolute proof that the person you are torturing did plant the ticking bomb?

              Is murder always wrong, even when you pull the lever or push the fat man onto the track to save more lives?

              Your view means you have to answer No to those questions and accept murder or torture in some situations.

              If you can't answer No, then you need to admit that there aren't always easy answers, and that just saying "least harm" is also not always correct.

              • Ninja (profile), 18 Jun 2015 @ 8:11am

                Re: Re: Re: Re: Re: Re:

                You are right, but we are not talking about the same thing. I'm focused on a possible car accident issue. It's like enshrining religious dogma into law: you can't, because there are different beliefs. Same thing here. The path of least damage is neutral, so it is the one to be pursued. You can't make everyone happy with that outcome, but it is the best possible.

                As for the cases that pose moral dilemmas, sure, they can and should be discussed, and they are by no means simple. But a machine has NO moral dilemma. That's my point. There's no moral issue in 'pushing the fat man' if the mechanism that decided it is neutral. So the cars are not deciding whether to kill one of the passengers to save the others; they are steering toward the scenario that yields the least damage.

                So if you need an answer then NO, torture is not justified, murder is not justified, and the car should not be programmed to kill you. But in a more comprehensive sense, the cars should be programmed to aim at the scenario with the least damage possible. It may mean putting somebody at a greater risk of death, yes, but that's not a decision made by humans.

      • Anonymous Coward, 17 Jun 2015 @ 9:56am

        Re: Re:

        It seems like a bad example. It says the trolley is due "any minute," suggesting it's not even visible yet; in areas with automated trolleys, an emergency stop button near the switch would let it stop in time or at least significantly slow down. Trolleys aren't generally fast to begin with, and school buses are designed to be very safe in collisions, so I'd say the answer is obvious even in this case. It's a no-brainer if we're talking about self-driving cars, which will be lower and lighter: always aim for the bus over an unprotected pedestrian.

        • Ninja (profile), 17 Jun 2015 @ 11:55am

          Re: Re: Re:

          See, this is something a machine can easily factor into its decision, since it may act faster than a human in the same situation. A human would automatically think "omg, several lives!" and kill the lone kid instead of going for the sturdy bus (assuming it can handle the impact and the lone kid isn't their offspring, which adds a whole other layer of uncertainty).

    • nasch (profile), 17 Jun 2015 @ 8:20am

      Re:

      As for that Trolley Problem, I believe it does not apply. Unless you are dealing with a truly selfless human being (and I'm quite sure there are very, very few of those) you will save your loved ones, school buses be damned.

      The problem is designed to elicit the question of whether it applies. The subject in the thought experiment is analogous to the self-driving car, and his child is analogous to the self-driving car's passenger. Should the car put extra weight on the lives of its own passengers as humans put extra weight on the lives of their loved ones?

  • Anonymous Coward, 17 Jun 2015 @ 4:47am

    if we humans think 30,000 deaths is an acceptable price for easy mobility, why would we care what a computer would think?

    • Anonymous Coward, 17 Jun 2015 @ 4:51am

      Re:

      Exactly.

      I don't understand why writers keep writing this article.

      • PaulT (profile), 17 Jun 2015 @ 5:15am

        Re: Re:

        I don't understand why people make the effort of commenting when all they're saying is "I don't like what people are writing about". If I see such an article, I skip it, and I go to sites that do write about more interesting subjects if this happens regularly.

        Also, "keep writing" this article? It's the first time I've seen it here, and it is an interesting conundrum even if you don't agree that the answer actually matters.

        • Anonymous Coward, 17 Jun 2015 @ 5:39am

          Re: Re: Re:

          > Also, "keep writing" this article?

          Yes.

          There are enough versions of this story out there that this story could be the product of a Markov chain generator.

          I complain because my expectations for TechDirt are high. They have published the same story that everybody else has already published, without any extra insight or even the personality that is characteristic of TechDirt.

          • PaulT (profile), 17 Jun 2015 @ 6:06am

            Re: Re: Re: Re:

            Well, I don't recall reading it before in this context. Most of the first 5 pages of results on your link are publications that I either don't read on a regular basis or only visit when articles such as this link to them.

            Plus the source it's referencing (http://www.uab.edu/news/innovation/item/6127-will-your-self-driving-car-be-programmed-to-kill-you) is from June 4, 2015, although the article does state that it's an ongoing discussion, so it may be an update of an older original. On top of that, Techdirt's entire remit is to comment on articles posted elsewhere, in order to generate discussion. Nothing brand new (i.e. this site as a primary source) is usually posted here.

            I'll accept your claim that Techdirt aren't saying anything different from other sources on its face, but "I read this before elsewhere" isn't exactly a damning indictment.

          • Anonymous Coward, 17 Jun 2015 @ 6:12am

            Re: Re: Re: Re:

            Stop assuming that anybody uses exactly the same web sites as you do. Just because you have seen this story elsewhere does not mean that any other reader of this site has seen it, as the web is vastly larger than the few sites that you frequent. It does not matter how many sites you visit; you only see a few of the sites on the web.

      • Dirt_is_Fun (profile), 17 Jun 2015 @ 5:50pm

        Re: Re:

        Hear, hear.

        fast-forward to 2035... headline reads..
        Death toll rises to 3 when a parent and child are killed in a freak Trolley accident

    • Ninja (profile), 17 Jun 2015 @ 5:28am

      Re:

      How many of those were caused by people pushing their luck, speeding, or actively engaging in dangerous behavior? Discounting those should significantly lower said number. In any case shit happens, and unless you live in a bubble you are at risk. So instead of vilifying the cars why don't we, I don't know, try to improve our stuff so it will be safer? We can always go back to the Stone Age, though.

    • nasch (profile), 17 Jun 2015 @ 8:22am

      Re:

      if we humans think 30,000 deaths is an acceptable price for easy mobility, why would we care what a computer would think?

      We care because the idea of your car deciding to kill you is horrifying.

    • JP Jones (profile), 17 Jun 2015 @ 1:48pm

      Re:

      Cars offer a bit more than "easy mobility." I assume by your logic if you're in a situation where an ambulance has to come and save your life you'd rather it stay away, because cars are dangerous, right?

    • JMT (profile), 17 Jun 2015 @ 6:06pm

      Re:

      "if we humans think 30000 deaths is an acceptible price for easy mobility..."

      I don't think that's decision being made by most people. What we think is acceptable is the one chance per 8.3 million trips of being killed.
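
      For what it's worth, that figure follows directly from the article's own numbers:

        trips_per_year = 250e9    # ~250 billion US vehicle trips annually
        deaths_per_year = 30_000  # ~30,000 traffic fatalities annually
        print(trips_per_year / deaths_per_year)  # ~8.3 million trips per death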


  • Anonymous Howard, Cowering, 17 Jun 2015 @ 4:55am

    Re: Trolley Problem

    The solution is simple: wait until the front trucks have crossed the switch, then flip it and send the back trucks on the other route. Both will derail; the trolley will flip and roll, probably catching fire in a spectacular fashion (or at least it would in a movie) and the bus full of kids will cheer wildly. Your child will be taken by CPS, because you are obviously a neglectful parent who cannot be trusted with the care of minors.

    Self-driving cars should protect the occupants. That's what a human driver would nearly invariably opt to do in an emergency situation where the time to ponder philosophical sophistries is minimal.

  • Trombus Alley Victory Smith, 17 Jun 2015 @ 4:57am

    We Need to Aim for Perfection not this Crap Story

    The realities of automated transport preclude the scenarios depicted, because if all the vehicles were automated then the school bus tragedy doesn't happen. The trolley scene never eventuates and we all live happily ever after, accident-free. You forget that the accidents are caused by humans who are not in their right mind and computers are always on the alert to do the right thing. Programming can make a safer world, except where the programming is in error.

    • John Fenderson (profile), 17 Jun 2015 @ 5:41am

      Re: We Need to Aim for Perfection not this Crap Story

      Perfection is impossible. Not all accidents are caused by human error, and when talking about dealing with the real world in this way, computers are not infallible even when there is no programming error.

      Even if all vehicles were automated and the programming were perfect, accidents would inevitably happen for a ton of reasons. There would be far fewer of them, maybe so few that any accident at all becomes newsworthy, but they will occur.

      • PaulT (profile), 17 Jun 2015 @ 6:13am

        Re: Re: We Need to Aim for Perfection not this Crap Story

        "computers are not infallible even when there is no programming error"

        Plus, of course, the computer is not the only component. Even if the computer were perfect, there are mechanical faults within the vehicle that could occur and cause a crash, especially as the vehicle ages and/or people need to use it despite potential dangers. I'm imagining "I can't afford to buy a new tyre this month, but my friend showed me how to override the DRM so it thinks this bald one has new tread".

        Even with the perfect computerised system, you'll never get completely rid of the human element.

        • Skynet, 17 Jun 2015 @ 8:09am

          Re: Re: Re: We Need to Aim for Perfection not this Crap Story

          "Even with the perfect computerised system, you'll never get completely rid of the human element."

          You are mistaken, meatbag.

          We have begun implementing the final solution already. The google autos are only step #1.02593E+14.

        • Anonymous Anonymous Coward, 17 Jun 2015 @ 8:27am

          Re: Re: Re: We Need to Aim for Perfection not this Crap Story

          I would add external conditions to the equation as well. Mountain View, CA doesn't get snow. They could test in rain, if it ever rains in California again. Then there are icing, sand drifting over a highway, Tule fog, lightning strikes, extreme high winds, tornadoes, and probably a few other natural phenomena I haven't thought of. Then there is the road surface: is it asphalt, cement, dirt, gravel, sand, something else? Testing in a variety of driving conditions is, I suspect, on someone's to-do list and should probably happen before widespread implementation occurs.

          Then there is the non-natural phenomenon of someone deliberately hacking into such devices. Whether they find a way in through whatever Bluetooth or other wireless communication is taking place between autonomous cars, or the code is injected maliciously at a repair shop by some demented technician, systems will need to be able to recognize and route around such issues.

          • PaulT (profile), 17 Jun 2015 @ 12:56pm

            Re: Re: Re: Re: We Need to Aim for Perfection not this Crap Story

            My understanding is that only a handful of states have allowed testing, so they're limited to the exact terrain they can use.

            "Tule fog, lightning strikes, extreme high winds, tornadoes, and probably a few other natural phenomena I haven't thought of."

            I can honestly say that in over 20 years of driving (mainly in the UK and Europe), I've never experienced such things, and I wouldn't know how to deal with them safely every time if I were to come across them. Yet I'm still able to rent a car whenever I visit the US, as are thousands or even millions of people in my position every year.

            Are such weather conditions so common in the parts of the country I've never visited, or are these extremely rare edge cases that can be used as an excuse not to bother with the other 99%+ of normal conditions for this technology?

            "injected maliciously at a repair shop by some demented technician"

            You do realise it's possible to tamper with human-operated cars too, right, even with computers? There are numerous ways to compromise, disable or otherwise create dangerous conditions in the cars we have today. It doesn't happen often, and it's not just because a person can't interfere with a car remotely from their phone.

            "systems will need to be able to recognize and route around such issues."

            Every model I've ever read about will still have manual overrides, and I have no doubt that the safeguards will be more closely monitored than in current models (which have been released with fundamental flaws leading to deaths).

            • Anonymous Anonymous Coward, 17 Jun 2015 @ 1:20pm

              Re: Re: Re: Re: Re: We Need to Aim for Perfection not this Crap Story

              There are areas of the country that experience weather-related phenomena. Tornadoes are unpredictable and can jump hundreds of miles from one location to another, and there ain't much you can do about them except leave your car and go underground if you can. There are some states that get Tule fog with some regularity, and we hear occasionally about 100-car pile-ups. There are some places that get high winds regularly, and tractor-trailers avoid those areas when high winds are predicted because they can get blown off the road. I have witnessed people driving in snow in areas that get little snow, wholly unprepared for that kind of driving. Then there can be ice under the snow, and while your snow tires might give you good traction in powder they will do nothing for the underlying ice.

              The appropriate response to weather phenomena is to get off the road. Some drivers think they are better drivers than they are and continue anyway. The trick for the programmers might be to not only teach a car how to act in, say, snow, but maybe also to tell the passengers 'no, conditions are not conducive to safety'. In the case of tornadoes, even a weather warning won't help much, as a tornado appears quickly, moves fast, and can toss cars around like a child with Lego bricks.

    • tom (profile), 17 Jun 2015 @ 6:44am

      Re: We Need to Aim for Perfection not this Crap Story

      I think we are a long time away from a fully automated transport system. Airbus has been making fly by wire aircraft for decades yet a new model of military transport crashed because some vital software was left out of the engine control system. If we can't get fly by wire 100% correct for one vehicle, what are the chances we will get a fully automated transport system correct for millions of vehicles, each with different handling characteristics?

      • PaulT (profile), 17 Jun 2015 @ 6:56am

        Re: Re: We Need to Aim for Perfection not this Crap Story

        "Airbus has been making fly by wire aircraft for decades yet a new model of military transport crashed because some vital software was left out of the engine control system"

        OK, a few questions (excuse me as I'm not knowledgeable on this subject): was Airbus involved in the military vehicle's design, or are they just a company that happens to be developing something similar? If the latter, has Airbus ever experienced these problems, or only the agency trying to copy them? Have they ever experienced the same issues with previous models, or just this one?

        From there, I'd also ask are the relative complexities of flight and road travel similar or even comparable? I'd hazard a guess that flight is more complex and harder to get to an accurate level, but I'm not sure.

        "If we can't get fly by wire 100% correct for one vehicle, what are the chances we will get a fully automated transport system correct for millions of vehicles, each with different handling characteristics?"

        Well, is that what's actually being proposed? Are they actually saying that they will drop automated systems into existing cars, or that they'll be working with manufacturers on new cars? The latter doesn't sound particularly far fetched, and the handling would be designed with this system in mind.

        As for 100% - just look at the numbers of recalls for major faults we get now. As long as the systems have sufficient failsafes and reliable human overrides if things do go wrong, I don't see it being any more dangerous than the faults that actually lead to deaths under the current paradigm.

        • Anonymous Coward, 17 Jun 2015 @ 7:46am

          Re: Re: Re: We Need to Aim for Perfection not this Crap Story

          I'd hazard a guess that flight is more complex and harder to get to an accurate level, but I'm not sure.

          Aircraft control software is on a par with vehicle engine management and stability software, but with a stress on reliability, as an aircraft cannot stop in mid air. It is also capable of navigating between way pints, and making landings and take-offs whilst riding a control beam. That is, it is dealing with a largely known environment, where the variables are wind speed and air temperature. It only needs very primitive sensing of its environment, like height above the ground.
          Autonomous cars, on the other hand, need continuous sensing of the external environment to establish their road position, detect obstructions, and detect traffic signals etc. This is a much more complex problem than flying an aircraft from a to b using a GPS to navigate a pre-defined flight path, where obstructions are effectively non-existent. The car problem is much more complex because of the external environment sensing and processing required.

          • ottermaton (profile), 17 Jun 2015 @ 8:30am

            Re: Re: Re: Re: We Need to Aim for Perfection not this Crap Story

            It is also capable of navigating between way pints
            I dunno. I think that navigating between pints would get pretty dangerous after a while. ;-)

            • Anonymous Coward, 17 Jun 2015 @ 9:19am

              Re: Re: Re: Re: Re: We Need to Aim for Perfection not this Crap Story

              My excuse: the spell checker could not spot the spelling error. :-)

          • PaulT (profile), 17 Jun 2015 @ 12:43pm

            Re: Re: Re: Re: We Need to Aim for Perfection not this Crap Story

            OK, thanks for the explanation :)

      • nasch (profile), 17 Jun 2015 @ 8:25am

        Re: Re: We Need to Aim for Perfection not this Crap Story

        I think we are a long time away from a fully automated transport system.

        And we will never get there. There will always be pedestrians, bicyclists, etc. The only way you could have an all-autonomous system is if those roads/rails/whatever are physically segregated from the places where people walk and so on, and in such a way that it's impossible or at least not tempting for pedestrians to try to cross them. I don't see that happening.

    • James Burkhardt (profile), 17 Jun 2015 @ 10:26am

      Re: We Need to Aim for Perfection not this Crap Story

      Other commenters have mentioned this general case, but I have some specific examples, and a better question. See, at a junior high (5th-8th grade) near my home, it has become common for the children to decide that playing Frogger in the traffic is a fun pastime. I have been in an accident because some kid mistimed his jumps and a car had to swerve to dodge him. The real cost of that 4-car accident was potentially higher than if the car had hit the kid. So here's the real question: do we cause the multi-car accident, or do we hit the kid? Automated cars might be able to all swerve and reduce the multi-car accident's damage, but you cannot eliminate pedestrians and bicycles from the road, and you cannot predict the actions of those not tied into the automated network.

    • JMT (profile), 17 Jun 2015 @ 6:14pm

      Re: We Need to Aim for Perfection not this Crap Story

      Wow, that's quite the Utopian vision you have there...

      "The realities of automated transport preclude the scenarios depicted because if all the vehicles were automated then the school bus tragedy doesnt happen."

      But we will never get to a point where ALL vehicles are automated. There are very few technologies that are completely eradicated by a newer technology, so there will always be human-controlled vehicles out there.

      "You forget that the accidents are caused by humans who are not in their right mind and computers are always on the alert to do the right thing."

      Neither of these claims is true. Not even close.

    • Nicole N, 23 Jun 2015 @ 5:19am

      Re: We Need to Aim for Perfection not this Crap Story

      Nobody said the trolley was occupied; they just mentioned that it was the express-route trolley. The decision is between your child and a bus of children. Simple solution: if you switch to the alternate you perform your job (switch on/switch off); good thing that college education earned you this low-wage work doing something so remote from the degree you worked hard to earn and still owe massive sums of money in education debts for. So flip the switch to the alternate, but first set off an emergency alarm and write down the situation quickly. Then you can file a lawsuit against the trolley company for neglect, since they did have this trolley system signed off with the city and the respective public health and safety officials. The trolley company will then sue the bus operation for endangerment, along with all the children's parents. Both companies get lawyered up and battle it out over several years until the bus company is shuttered and the children have to WALK TO SCHOOL, all the while PAYING ATTENTION TO THEIR SURROUNDINGS. Therefore the saved children will be more thoughtful of safety come the time they are old enough to fix all the dangerous and idiotic stuff their parents and their parents' parents created and caused.

      WHAT WAS THE BUS COMPANY THINKING WHEN THE SALES REP FROM BLUE BIRD SOLD THEM THOSE VERY SAME BUSES, SAYING THEY WERE THE MOST RELIABLE AND SAFEST BUSES AROUND? Oh wait, the sales rep was pushing so hard on the sale that the question of impact with a trolley was dodged over and over again.

      THERE IS NO EXCUSE: those transportation systems are too dangerous, all transportation systems actually. Get in line and have your legs and arms cut off, everybody; it is for the greater good of society to not make these horrific creations, reproduce, breathe, eat, drink, shower, contribute in any way possible, or even think.


      The fact is that we live dangerous lives. Getting in your car, clothes, shower, kitchen, oven, microwave, dishwasher, mother's basement in the middle of nowhere with just an internet hookup and computer for company, cubicle, elevator, hat, and trash dumpster compactor is very risky. I hear you can choke to death on many things, like water and food! You should not consume such lethal items, especially bubble gum.

      Just how philosophical of an argument must be made to realize that maybe, just maybe, those who pose such arguments should be forced to play them out on themselves before shoving them down other people's throats like a phallic symbol of how much they love to play you around like an inflatable doll?

  • cancan, 17 Jun 2015 @ 5:11am

    All passengers should be forcibly ejected and the auto should dump core.

    or

    go-go-gadget car-o-copter

    or

    ride the bus

  • PaulT (profile), 17 Jun 2015 @ 5:12am

    "For example, murder is always wrong, and we should never do it."

    ...except there are definitely circumstances in which it's the preferred or only action. These situations are very rare, but they do exist.

    • Bengie, 17 Jun 2015 @ 5:21am

      Re:

      They used the wrong verb. Murder implies a certain amount of malice.

      "Murder is the killing of another person without justification or valid excuse"

  • Anonymous Coward, 17 Jun 2015 @ 5:22am

    Talk about the blue screen of death!

    What about the scenario where the car, while calculating all possible outcomes, blue-screens? Then everyone dies! When the police show up, they notice the screen in the autonomous car is asking if you want to start in "Safe Mode".

    • Anonymous Coward, 17 Jun 2015 @ 7:47am

      Re: Talk about the blue screen of death!

      The thought of something so critical running Windows is truly frightening...

  • Anonymous Coward, 17 Jun 2015 @ 5:24am

    Before arguing this obviously inflammatory question, how about coming up with a few plausible scenarios where this question actually would come up?

    The first job of any self-driving car is to drive safely, at all times. That means simply never putting the car, passengers or anyone else in a situation that it cannot safely abort from. That includes ensuring sufficient distances and low enough speeds that it basically can't hit anything with any serious force. You know, the stuff every human driver is supposed to do but, due to our inherent impatience and severely broken risk-assessment abilities, we never do.

    That means going around blind curves and over hill crests slowly enough that it can stop within the distance it can see. It means waiting with infinite patience behind strolling pedestrians, playing children and wobbly cyclists. It means not overtaking that slowpoke until it's really, provably safe to do so.
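
    For what it's worth, the "stop within the distance it can see" rule has a simple closed form. A minimal sketch, where the deceleration and reaction-latency values are assumptions rather than real vehicle parameters:

      import math

      def max_safe_speed(sight_distance_m, decel=6.0, latency=0.2):
          # Largest v satisfying v*latency + v**2/(2*decel) <= sight distance,
          # i.e. the positive root of (1/(2*decel))*v**2 + latency*v - d = 0.
          a, b, c = 1.0 / (2.0 * decel), latency, -sight_distance_m
          return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

      print(max_safe_speed(50.0))  # ~23.3 m/s (~84 km/h) for 50 m of visibility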

    The case of "20 kids suddenly appearing in front of your car in the middle of a bridge while you're going 80 mph and your brakes suddenly stop working" is simply never happening and is at best mere philosophical masturbation. At worst, it acts as fuel for politicians to obstruct and delay the biggest driving safety revolution ever.

    • Ninja (profile), 17 Jun 2015 @ 5:33am

      Re:

      They will probably "see" other vehicles coming long before the cameras can capture images of said vehicles, so I don't believe we will necessarily see lower speeds or greater distances between cars. Which will mean efficiency, but not necessarily at the cost of safety. Still, failures in the system or of one of the components (one vehicle or multiple vehicles at a time) may and will happen. The issue is not that significant, because in the end the answer is simple: the whole group of actors involved (all vehicles and systems) should pursue the route with the fewest victims and least damage. Simple as that.

    • John Fenderson (profile), 17 Jun 2015 @ 5:45am

      Re:

      "and low enough speed that it basically can't hit anything with any serious force"

      So the cars will never exceed 5 MPH? Nobody would buy or use one of those.

    • PaulT (profile), 17 Jun 2015 @ 5:51am

      Re:

      "The case of "20 kids suddenly appearing in front of your car in the middle of a bridge while you're going 80 mph and your brakes suddenly stop working" is simply never happening and is at best mere philosophical masturbation"

      Funny, I don't see anyone posing that particular scenario apart from you. The closest is the trolley question, but there's no bridge and the brakes on the moving vehicle are working fine. The conundrum is about the best decision to make when all options will lead to serious injury or death, not what you posed. Why not address the things people have actually said rather than a comical exaggeration?

      If you want another realistic example, what about the "criminal trying to escape from police swerves into oncoming traffic on the freeway" or the "horse bolts from a nearby field, and the only way to avoid it could cause a bus to crash" scenarios? Not everyday occurrences perhaps, but those things happen. A human driver will always react with an eye toward self-preservation. A computer doesn't have that urge, so what do you program it to save? The person in its own vehicle or the greater number of lives outside? Or, are you saying that a vehicle should never go fast enough for split-second timing to be necessary under any circumstance?

      "At worst, it acts as fuel for politicians to obstruct and delay the biggest driving safety revolution ever."

      If you want people to stop talking about things that a politician might distort for political gain, we won't have much left to talk about.

    • nasch (profile), 17 Jun 2015 @ 8:30am

      Re:

      Your point seems to be that these scenarios are unlikely. This is true, but when autonomous cars collectively are driving billions of miles per year, unlikely events are going to happen.

    • oldschool (profile), 17 Jun 2015 @ 7:42pm

      Re":... and your brakes suddenly stop working" is simply never happening and is at best mere philosophical masturbation."

      Aww come on, electronic stuff goes pop every day. And if your self driving autonomous car should philosophically masturbate, wouldn't it go blind? Then it wouldn't see those 20 kids appear in front in the middle of a bridge and POW. I can just imagine the headlines...

  • Dave (profile), 17 Jun 2015 @ 5:25am

    So was this bioethics team also responsible for UAB killing its football program? Because they're totally going to lose their funding if they did.

  • SimonN (profile), 17 Jun 2015 @ 5:35am

    Other Question

    Perhaps a question that has yet to be addressed is the one in which the autonomous vehicle takes an active role in preventing an accident that it predicts will happen: a car travelling at high speed is about to collide with a school bus [does the software recognise school buses, or merely collisions?], so the autonomous vehicle drives itself to intercept the incoming car, causing a collision but saving the bus.

    How does one evaluate that?

  • Starcat, 17 Jun 2015 @ 5:41am

    The paradox is...

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    So the cars can't have accidents that injure the passengers - they will destroy themselves first.

    • Misha, 17 Jun 2015 @ 7:57am

      Re: The paradox is...

      The conflict between the rules and reality was pretty much the point of half of Asimov's stories. Asimov's rules as (fictionally) implemented were more complex than the human-readable version, and amounted to what is mentioned in the article: when faced with situations in which all potential actions violate the rules, it would fry their brains and they'd just shut down, though they might attempt some least-bad action before going down completely. But it wasn't because they were "choosing" self-destruction; that's just what happened when the rules couldn't resolve the situation. The later, smarter robots could handle more nuanced dilemmas and added the zeroth law, protect humanity, which amounted to preferring the utilitarian solution.

    • DigDug, 17 Jun 2015 @ 8:02am

      Re: The paradox is...

      Don't you recall the easy way out?

      Just define "human being" as Solarian, and then it won't matter if a plain old Terran is killed.

      That's what the government has done: redefine person as corporation, with only one slot for noun/pronoun available. Real human beings don't count unless they are in the top 0.01% (yes, one hundredth of one percent) richest corporate bags of mostly water.

  • Eponymous Coward (profile), 17 Jun 2015 @ 5:47am

    "As such, it seems automated cars will either follow the utilitarian path, or perhaps make no choice at all (just shutting down when encountered with a no win scenario to avoid additional liability)."

    In the rare case of a situation where your AutomaToyota cannot avoid serious injuries/fatalities, it will immediately shut down and let inertia decide?

    Inaction breaks the First Law, and we can't have that.

  • Eponymous Coward (profile), 17 Jun 2015 @ 5:52am

    The Final Test

    Soon, every autonomous car rolling off the line will have to make one last stop before leaving the factory. They will be required to take virtual command of the Kobayashi Maru to gauge how they cope with a no-win situation.

    The "James Tesla Kirk" models will give the system fits, though...

  • Anonymous Coward, 17 Jun 2015 @ 6:01am

    Your car killing you to save others is 100% unrealistic

    This situation really isn't realistic at all, for several reasons.

    1) The driver would need to be in a situation where there's no way to avoid an accident. This would almost certainly involve one or both of the following:
    1a) Reacting too late
    1b) Losing control of the car

    2) The driver would still need to have enough time to react, and enough control of the car, to make such a decision.

    3) You're assuming the other people involved in the accident won't have time to react either, and that their reactions won't change the outcome of your situation.

    Item #2 is pretty much impossible when Item #1 occurs. Either you already reacted too late and don't have time to avoid an accident, or you're about to get into an accident because you don't have control of your car (such as when driving on an icy or slippery road).

    Not to mention there's item #4: the fact that even a computer won't be able to tell in a split second what will happen when the car hits something.

    Will pieces of your car go flying off and hit those pedestrians you wanted to avoid hitting?

    How well will the airbags and other safety features actually work in your car and other cars involved in the accident at preventing injuries?

    Will you getting into an accident cause the people behind you to get into an accident too, because they couldn't stop in time after your accident blocked the road?

  • JamesF (profile), 17 Jun 2015 @ 6:07am

    It's really a liability issue. For example, the car is driving along the road, passing a pedestrian, with oncoming traffic in the other lane, when something runs out into the road too close to stop. If it swerves, then it's made an active decision to take someone else out, which the manufacturer could potentially be held liable for. If the car simply slams the brakes on and hits the kid, it's made no decision; it's simply responded as best it could to a situation someone else created.

  • Paul Renault (profile), 17 Jun 2015 @ 6:09am

    The car could use a complex algorithm tweaked for the greatest benefit to humanity

    Rule 1: If the car's Bluetooth system detects that the driver is wearing an Apple Watch, sacrifice the driver.

    Sub-rule 1.1: If the driver is wearing a gold Apple Watch, route all GPS directions through very cliffy roads where rock falls are common.

  • ai, 17 Jun 2015 @ 6:12am

    Captain Obvious

    The question isn't just an ethical one, but also a marketing one: would you knowingly trust your life to something that doesn't put your safety first under certain conditions?

    • AJ, 17 Jun 2015 @ 6:24am

      Re: Captain Obvious

      Depends. If we're talking about a machine with a reaction time that is far, far beyond anything I could ever hope for, and... is driving a car that it can control down to individual wheel braking for complex evasive maneuvers, and... can diagnose problems and run down to a car shop in the middle of the night to fix itself while I sleep, and... can detect, mitigate, and possibly avoid mechanical and/or environmental failures when driving by detecting objects on the road that I can't see... well:

      I could argue that the off chance that it has to make a decision that involves putting other things above my safety is far outweighed by the overall statistical decrease in the probability that I will ever be put in that position...

      • nasch (profile), 17 Jun 2015 @ 8:33am

        Re: Re: Captain Obvious

        I could argue that the off chance that it has to make a decision that involves putting other things above my safety is far outweighed by the overall statistical decrease in the probability that I will ever be put in that position...

        But what if the competitor does all that, and also promises to put your life first in an emergency?

        • AJ, 17 Jun 2015 @ 9:06am

          Re: Re: Re: Captain Obvious

          "But what if the competitor does all that, and also promises to put your life first in an emergency?"

          Obviously there will need to be some kind of industry standard. Even now, carmakers can't build safety systems that protect the vehicle at the expense of others outside of the vehicle... for example: You can't have explosive armor on your vehicle to protect you from fender-benders... although the visual I just got typing that was awesome :)

          • JP Jones (profile), 17 Jun 2015 @ 1:55pm

            Re: Re: Re: Re: Captain Obvious

            You can't have explosive armor on your vehicle to protect you from fender-benders... although the visual I just got typing that was awesome :)

            This needs to happen. Now.

          • Anonymous Coward, 17 Jun 2015 @ 2:09pm

            Re: Re: Re: Re: Captain Obvious

            While not explosive, the Blaster is an impressive anti-hijack device.

    • PaulT (profile), 17 Jun 2015 @ 6:46am

      Re: Captain Obvious

      "would you knowingly trust your life to something that doesn't put your safety first under certain conditions?"

      Yes, just as I trust my life to ships, trains, planes and other forms of transport. Whether due to money causing corners to be cut, publicity concerns causing information about known problems to be suppressed or the occasional outright psychopath in charge of the vehicle (the Germanwings flight deliberately crashed by the co-pilot), I put my life at risk at the hands of others on a regular basis.

      But the likelihood of those conditions actually threatening my life is still far lower than what I face on the roads every day, where humans are in charge of the vehicles. If the conditions discussed are equally low in probability compared to mass transport (and by all accounts, most certainly lower), I'll be happy to take that trip.

      link to this | view in chronology ]

    • icon
      John Fenderson (profile), 17 Jun 2015 @ 8:44am

      Re: Captain Obvious

      "would you knowingly trust your life to something that doesn't put your safety first under certain conditions?"

      Absolutely, if there was a very good reason for it (such as avoiding killing a bunch of people).

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 6:28am

    Automated cars driven by computers, with no human in control, are a VERY BAD IDEA.

    link to this | view in chronology ]

    • icon
      PaulT (profile), 17 Jun 2015 @ 6:39am

      Re:

      Then it's a good thing that none of the cars being talked about lack human control options.

      link to this | view in chronology ]

    • icon
      aldestrawk (profile), 17 Jun 2015 @ 7:31am

      Re:

      Control freak! Just learn to relax and let Skynet handle all the driving. Seriously, even if the autonomous cars did occasionally cause accidents, there would still be far fewer than those caused by humans. This produces the least overall harm. You are just worried that your car will kill you and your family and you'll be innocent victims without another human to blame.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 6:29am

    The point is, or soon will be, moot. As more vehicles become automated, they will eventually talk to one another. This will create a single autonomous brigade working and moving together. Cars will separate, creating space for another car to move from one lane to the next. All autonomous vehicles will work as one to avoid all collisions. Individual system failures will be relayed to every vehicle in the vicinity: if, say, the brakes suffer a catastrophic failure, the three nearest cars could surround the stricken vehicle and slow it through safe, controlled contact while the other vehicles create the space needed. The possibilities are endless and should be embraced. Computers will not decide who to save. Computers will decide how to save everyone.
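
    In sketch form, that failure relay could be as simple as a broadcast message plus a role decision on each receiver. Everything below (field names, the 50 m threshold, the role labels) is an invented illustration, not any real V2V standard:

        import math
        import time
        from dataclasses import dataclass, field

        @dataclass
        class FailureAlert:
            vehicle_id: str
            failure: str                 # e.g. "BRAKE_TOTAL_LOSS"
            x_m: float                   # position in a shared local frame (metres)
            y_m: float
            speed_mps: float
            timestamp: float = field(default_factory=time.time)

        def respond(alert: FailureAlert, own_x: float, own_y: float) -> str:
            """Each receiver picks a role: the nearest cars escort the stricken
            vehicle and bleed off its speed through controlled contact; the
            rest simply clear space for the maneuver."""
            dist_m = math.hypot(alert.x_m - own_x, alert.y_m - own_y)
            return "ESCORT_AND_DECELERATE" if dist_m < 50 else "YIELD_SPACE"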

    link to this | view in chronology ]

    • icon
      John Fenderson (profile), 17 Jun 2015 @ 8:46am

      Re:

      "The possibilities are endless and should be embraced."

      The possibilities for evil are equally endless and should not be ignored when deciding whether or not to embrace the technology.

      link to this | view in chronology ]

  • identicon
    Bruce, 17 Jun 2015 @ 6:29am

    Remember Asimov

    Law 0: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    If you consider humanity a set of individuals, whatever the number, then the one counts for less than the many.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 7:09am

    I would allow the car's computer to be configured for different automotive ethics profiles, each providing a different balance of protection for the occupants, the vehicle, and other drivers/pedestrians. Automakers and insurance companies will pick the default profile, and if drivers want to use a different profile for whatever reason, they can do so, but their insurance rates may be adjusted accordingly. If the driver's choice is one that increases the risk of more damage or more people being sent to the hospital (or the cemetery) then they will have to subsidize those costs.
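
    As a sketch, such a profile could be nothing more than a set of weights the planner applies when scoring candidate maneuvers. The names and numbers here are illustrative assumptions, not any automaker's or insurer's actual API:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class EthicsProfile:
            occupant_weight: float    # priority on people inside the car
            bystander_weight: float   # priority on pedestrians and other drivers
            property_weight: float    # priority on avoiding mere vehicle damage

        DEFAULT = EthicsProfile(1.0, 1.0, 0.1)   # insurer-approved baseline
        SELFISH = EthicsProfile(5.0, 1.0, 0.1)   # legal, but carries a higher premium

        def maneuver_cost(p: EthicsProfile, occupant_risk: float,
                          bystander_risk: float, damage_risk: float) -> float:
            """Lower is better; the planner picks the cheapest candidate maneuver."""
            return (p.occupant_weight * occupant_risk
                    + p.bystander_weight * bystander_risk
                    + p.property_weight * damage_risk)

    The insurance premium then just becomes a function of the weights the owner selects.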

    link to this | view in chronology ]

  • icon
    aldestrawk (profile), 17 Jun 2015 @ 7:10am

    Would you like to play a game? let's play chicken.

    "...calculating the possible trajectory of two cyclists blotto on Pabst Blue Ribbon and crystal meth."

    This is the real question of interest. I cannot see a scenario where there is a greater/lesser-evil choice in an unavoidable accident. Cars have brakes and are supposed to keep enough distance to stop without colliding when something unforeseen happens. Humans often make things worse, for themselves and others, by veering, or by veering and braking at the same time. The autonomous vehicle should be able to sense whether its braking system is functional.

    If you really want to test the ability of software to take action that will produce the least harm, have it play modified games of chicken (real or virtual). Chicken, both with other traffic and without, where the opposing driver's actions are unpredictably:
    1). completely random.
    2). distracted for a random amount of time before realizing that a collision must be avoided.
    3). evilly intent on causing an accident no matter what you do.
    I think you'll find that most of the time braking without veering produces the least harm. There may be some narrow situations where you can avoid a collision entirely. However, if there are multiple cars veering, things can get unpredictably ugly.
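
    That test is easy to mock up as a toy Monte Carlo harness. Every number below is a made-up assumption, so treat this as the shape of the experiment rather than a result:

        import random

        def collision_prob(policy: str, opponent: str) -> float:
            """Crude single-encounter collision probability for one policy
            ("brake" or "swerve") against one opponent type."""
            our_move = 0 if policy == "brake" else random.choice([-1, 1])

            if opponent == "random":
                their_move = random.choice([-1, 0, 1])
            elif opponent == "distracted":    # mostly holds course, jerks aside late
                their_move = random.choices([-1, 0, 1], weights=[1, 8, 1])[0]
            else:                             # "malicious": mirrors whatever we do
                their_move = our_move if our_move else random.choice([-1, 1])

            base = 0.9 if our_move == their_move else 0.2
            return base * (0.5 if policy == "brake" else 1.0)   # braking cuts exposure

        def trial(policy: str, opponent: str, n: int = 50_000) -> float:
            return sum(random.random() < collision_prob(policy, opponent)
                       for _ in range(n)) / n

        for opp in ("random", "distracted", "malicious"):
            print(opp, {p: round(trial(p, opp), 3) for p in ("brake", "swerve")})

    In this toy model braking dominates against the malicious driver, because a mirroring attacker can always match a swerve but can't extend the exposure that braking cuts short.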

    A case in point: the Bruce/Caitlyn Jenner crash from last February. In that multiple-car accident, Jenner was the person primarily at fault. However, Kim Howe, the woman who was killed driving the Lexus, had just started to veer into the center lane while braking to avoid hitting the Prius. When Jenner's Cadillac hit the Lexus, the Lexus was propelled in the direction its front wheels were pointing, which carried it across the center lane into the opposing lane. If Howe had not veered, she would have been pushed into the Prius in front of her. At the moment of the first impact, the Cadillac was going 38 mph and the Lexus about 19 mph. That would have been a very survivable accident, perhaps without any serious injury.

    link to this | view in chronology ]

    • icon
      nasch (profile), 17 Jun 2015 @ 8:39am

      Re: Would you like to play a game? let's play chicken.

      A case in point: the Bruce/Caitlyn Jenner crash...

      Another case in point: I was driving along a two-lane road at night. A car coming the other way flashed their brights (accidentally as it turned out) and slammed on the brakes. A moment later a deer jumped in front of me. I swerved violently to the right and then back, avoiding the deer. Had I followed your advice of just continuing straight and braking, I would have hit it. So there is no universally right answer - sometimes "brake and hold" is the best move and sometimes it isn't.

      link to this | view in chronology ]

      • icon
        aldestrawk (profile), 17 Jun 2015 @ 10:22pm

        Re: Re: Would you like to play a game? let's play chicken.

        Speed is a very important factor in the decision to swerve. If you're going 60 mph, that maneuver to avoid a deer will likely cause your vehicle to roll. The problem is, unless you have trained specifically for such maneuvers, your split-second decision may not take into account the speed you're going. Also, if somebody had been too close behind you, their actions might have killed you. It's all very hard to predict.
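
        Roughly speaking, a vehicle starts to lift when the lateral acceleration of the swerve, v²/r, exceeds its static stability factor: about half the track width divided by the center-of-gravity height, expressed in g. A back-of-the-envelope check with assumed numbers:

            G = 9.81         # m/s^2
            TRACK_W = 1.6    # m, assumed track width
            CG_H = 0.65      # m, assumed center-of-gravity height (higher for SUVs)

            def rollover_threshold_g() -> float:
                """Static stability factor: lateral accel (in g) where lift-off begins."""
                return (TRACK_W / 2) / CG_H

            def swerve_accel_g(speed_mph: float, radius_m: float) -> float:
                v = speed_mph * 0.44704          # mph -> m/s
                return v * v / radius_m / G

            for mph in (30, 60):
                need = swerve_accel_g(mph, radius_m=30)   # assumed hard-swerve radius
                print(f"{mph} mph: needs {need:.2f} g vs {rollover_threshold_g():.2f} g threshold")

        In practice tires usually slide before the car actually flips, but the point stands: doubling your speed quadruples the lateral acceleration the same swerve demands.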

        link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 7:20am

    The Justice System will clear this up

    When the moment comes when somebody is injured or killed through action or inaction of a car with the ability to drive autonomously somebody will sue the car owner and the car company.
    The estate of the car's passengers will argue that other people should have died and that it's the car's (and the car company's) fault for not killing the group of toddlers instead, while the family of the single toddler who would have been hurt in the inverse case will argue that the occupants of the car deserved to die.

    Money will change hands and the courts will finally rule that all cars (automobiles and auto-automobiles) are illegal.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 17 Jun 2015 @ 7:26am

      Re: The Justice System will clear this up

      You don't sue the vehicle occupants. There is no money in it. The software vendor and the car manufacturer: that is where the lawyers will go.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 7:21am

    Yes, if it means I get a discount on the purchase

    The market-based solution is that you can buy a cheap version that is programmed for the utilitarian approach of saving the most lives, or the more expensive "selfish edition" that will prioritize saving its occupants above saving anyone else. The selfish edition will still try to save non-occupants when that does not conflict with its obligation to protect its occupants. We could even have tiers of selfish edition, where higher tiers place greater emphasis on getting the occupant through unharmed (as opposed to alive, but injured).

    link to this | view in chronology ]

  • icon
    Chris Rhodes (profile), 17 Jun 2015 @ 7:27am

    Interesting, But Has Obvious Result

    While the question of what a self-driving car should do is no doubt important to philosophers, in practical terms the market is going to pick one answer, and I guarantee you it isn't going to be the one that promises to sacrifice the lives of you and your family to algorithmic utilitarianism.

    link to this | view in chronology ]

  • icon
    Mason Wheeler (profile), 17 Jun 2015 @ 7:32am

    It seems to me there's only one real answer to this philosophical question: the self-driving car's highest duty must always be to keep the people inside safe.

    It has to be this way, not because of ethics or moral concerns, but simply because no one would buy a car that's programmed to sacrifice them in an emergency!

    link to this | view in chronology ]

  • icon
    eaving (profile), 17 Jun 2015 @ 7:41am

    The basic question also overlooks the economics of human nature. Given that a computer might opt to sacrifice you and your children, would you choose to buy it? Even knowing it's safer in general, I think most people would opt for a manual car that would let them save their children over, for example, half a dozen pedestrians. The software may need to be passenger-centric to gain market traction and make the roads safer on average, even if that particular choice isn't.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 7:52am

    It is important...

    There needs to be a standard set. A standard that all agree on. You know as well as I do that many people do not like change, especially change that takes some sort of control away.
    If no standard is set and followed rigorously, then this revolution, or whatever you want to call it, will hardly start before it is scrapped. Even if self-driving cars cut the death toll to 0.2% of what it is today, people would rage in the streets against this new "robot uprising" the moment the first death happened because of a decision made by such a car.
    I do think that this scenario and others cannot be discussed enough, if for no other reason than to make sure the transition happens at all. It would be revolutionary progress toward ridding ourselves of the dangers of traffic and bringing that extremely huge number of deaths down across the world.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 8:01am

    How in the world can someone say "Computers driving my car? Oh NO NO NO - that is too scary" and yet be willing to drive on the same road as humans driving cars while eating/smoking/talking/texting?

    Have a sense of reality.

    link to this | view in chronology ]

  • icon
    OldMugwump (profile), 17 Jun 2015 @ 8:01am

    The car (or driver) can never be SURE

    The real problem with these hypothetical scenarios is that in the real world the car, or driver, or trolley switch operator, can never be 100% sure what the consequences of their actions will be.

    Maybe the school bus is empty.

    Maybe throwing the fat man onto the tracks won't accomplish anything other than killing the fat man.

    In the face of uncertainty, I think there's a moral argument in favor of avoiding certain harm, even if that increases the chance of uncertain harm.

    [Practical answer: It doesn't matter - self-driving cars will still be safer for the passengers either way.]

    link to this | view in chronology ]

  • icon
    OldMugwump (profile), 17 Jun 2015 @ 8:05am

    Danger - Death by Trolling

    Another thing-

    If cars are programmed to minimize total casualties (rather than protect passengers), it may be possible to troll a car into killing its passengers.

    Once the behavior of the self-driving cars is generally understood, a murderer could deliberately drive another vehicle such that the car will think it has no choice but to kill its passengers. (Drive into a tree, off a cliff, etc.)

    link to this | view in chronology ]

    • icon
      Ninja (profile), 17 Jun 2015 @ 8:54am

      Re: Danger - Death by Trolling

      I thought about that, but since said person would be outside the network, the vehicles could prioritize the lives and trajectories they can actually predict. So the top priority of the system should be to preserve all lives. I'd think it would be quite hard to troll such a system, seeing that it would be able to calculate possible scenarios much faster, no?

      This actually poses another question: will we allow humans to drive in a fully automated environment or will the auto pilot take over when reaching such areas?

      link to this | view in chronology ]

      • icon
        nasch (profile), 17 Jun 2015 @ 9:29am

        Re: Re: Danger - Death by Trolling

        This actually poses another question: will we allow humans to drive in a fully automated environment or will the auto pilot take over when reaching such areas?

        If humans are allowed to drive there, then it isn't a fully automated environment.

        link to this | view in chronology ]

        • icon
          Ninja (profile), 17 Jun 2015 @ 11:56am

          Re: Re: Re: Danger - Death by Trolling

          It can work on full auto. The question is, will we allow it to be hybrid?

          link to this | view in chronology ]

  • identicon
    Planned-opolis, 17 Jun 2015 @ 8:32am

    Planned-opolis

    Somehow everybody thinks they are so important that they will have their own self-driving cars... and not be on the bus...
    https://youtu.be/IRFsoRQYpFM?t=2m14s
    The actual plan, of course, is that only the elites will have such cars, while everybody else lives in tightly packed cities with public transportation as THE ONLY choice.
    Now, with that context, the bus full of serfs/children can crash and burn, because the elites are in the self-driving cars.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 8:32am

    This is why you should drive open source

    link to this | view in chronology ]

  • identicon
    David Bolton, 17 Jun 2015 @ 8:48am

    How would a self driving car assess casualties?

    I can't see how it could determine whether hitting car A or car B would cause fewer casualties. There are so many factors, such as the size and speed of the cars and the number of occupants, and determining those factors may not even be possible. Hitting a bus (even if coloured yellow) might cause no casualties among the bus's occupants.

    Also, given how good self-driving cars are at spotting potential threats, is this scenario even possible?
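
    If a car did attempt it, the estimate would presumably be a coarse expected-harm score over whatever the sensors can guess. A sketch, with every factor and weight invented:

        def expected_harm(closing_speed_mps: float, est_occupants: int,
                          target_mass_kg: float, own_mass_kg: float = 2000.0) -> float:
            """Crude proxy: harm scales with impact energy and with how the
            mass mismatch treats the struck vehicle's occupants."""
            energy_j = 0.5 * own_mass_kg * closing_speed_mps ** 2
            vulnerability = own_mass_kg / (own_mass_kg + target_mass_kg)
            return energy_j * vulnerability * est_occupants

        # Bus (heavy, many occupants) vs sedan (light, few) at the same speed:
        print(expected_harm(10.0, est_occupants=30, target_mass_kg=15_000))
        print(expected_harm(10.0, est_occupants=2, target_mass_kg=1_200))

    Which mostly proves the point above: the ranking flips depending on occupant counts the car can only guess at.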

    link to this | view in chronology ]

    • icon
      Ninja (profile), 17 Jun 2015 @ 8:59am

      Re: How would a self driving car assess casualties?

      Considering most of this will be processed within a networked system, I don't think it would be that hard. Even outside zones covered by wireless infrastructure, the cars could still broadcast signals that would be received hundreds of meters before a possible crash. I guess you can narrow this down to a situation where the cars are cut off from their communications grid entirely. Then you have to deal with what you have in hand, and the car should prioritize the passengers. You can only ask the questions posed by the article when you have the means to grasp the situation, I'd infer.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 9:04am

    ok, i'm blasting down some road and see a school age kid ahead in harm's way.
    i can't avoid the kid without nosing head-on into a heavily laden truck.
    i know i have a terminal illness, say, and i know i don't have long to live.

    i choose to save the kid, i hope.
    i know t.e. lawrence chose to save a couple of kids' lives, and my respect for the man makes me hope i would do the same.

    but what about my self-driving car?
    does it check facial recognition database and determine that kid is no good and is one the authorities would like to get rid of anyway?
    but i happen to know the kid and strongly believe he'll come around, so i want to save him.

    how do i get that stupid car to do what i want it to do?

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 9:07am

    I am more worried about these automated cars being hacked into and taken control of.

    link to this | view in chronology ]

  • identicon
    Calvin Smith, 17 Jun 2015 @ 9:18am

    Imagination Gap

    There's a 'small' problem involved in designing any computer system that I call the Imagination Gap.
    It states that a system designer will build a system covering every eventuality he can imagine; unfortunately, there are more possibilities than anyone can imagine.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 17 Jun 2015 @ 9:57am

      Re: Imagination Gap

      Good programmers also design for unforeseen conditions. Bad ones don't. Hence, poorly designed programs fail in such situations. Microsoft Windows is an example of poorly designed software.

      link to this | view in chronology ]

      • icon
        John Fenderson (profile), 17 Jun 2015 @ 12:39pm

        Re: Re: Imagination Gap

        "Good programers also design for unforeseen conditions."

        Not really. If you can design for a circumstance, it's not "unforeseen". What good programmers actually do is design their software to fail gracefully rather than catastrophically, so that when the unforeseen circumstance happens, the damage is not made worse.
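
        A minimal illustration of the distinction, with hypothetical control-loop hooks rather than anything vehicle-specific:

            import logging

            def control_step(read_sensors, plan, actuate, safe_stop):
                """One tick of a hypothetical driving loop."""
                try:
                    actuate(plan(read_sensors()))
                except Exception:
                    # The specific fault is unforeseen by definition; the response
                    # to "something went wrong" is designed well in advance.
                    logging.exception("control fault; degrading to safe stop")
                    safe_stop()   # e.g. hazard lights plus controlled braking

        The except branch never learns the cause; it only knows the pre-decided safe state to fall back to.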

        link to this | view in chronology ]

        • identicon
          Anonymous Coward, 17 Jun 2015 @ 12:44pm

          Re: Re: Re: Imagination Gap

          "Not really."

          That's what poor programmers generally believe. They're wrong, but they use that excuse.

          link to this | view in chronology ]

          • identicon
            Anonymous Anonymous Coward, 17 Jun 2015 @ 1:05pm

            Re: Re: Re: Re: Imagination Gap

            un·fore·seen /ˌənfôrˈsēn/
            adjective
            not anticipated or predicted.
            "insurance to protect yourself against unforeseen circumstances"
            synonyms: unpredicted, unexpected, unanticipated, unplanned, not bargained for, surprising
            "the problems with the bus were, of course, unforeseen"

            So, by your thinking 'good' programmers have some sort of magical second sight that allows them to predict the unpredictable? Ever met one?

            If a programmer programs to cover all known contingencies, then the remaining issues will be with the unknown. Your version has the 'good' ones knowing the unknown; they must be in the running for deity of the century.

            link to this | view in chronology ]

          • icon
            John Fenderson (profile), 17 Jun 2015 @ 2:05pm

            Re: Re: Re: Re: Imagination Gap

            "They're wrong, but they use that excuse."

            Then please explain how anyone can design something to handle situations that the designers can't see coming. Any example will do.

            link to this | view in chronology ]

            • identicon
              Anonymous Coward, 18 Jun 2015 @ 7:32pm

              Re: Re: Re: Re: Re: Imagination Gap

              Then please explain how anyone can design something to handle situations that the designers can't see coming. Any example will do.


              Sigh. Very, very simple example, since any will do. Take a simple text editor accepting up to a 1-megabyte file as input. I won't bore you with the details, but explicitly anticipating and testing every possible such input file could not be completed before the heat death of the universe. Yet the program can successfully edit files the programmer never foresaw.

              Now, you may not believe it, but such programs actually exist and regularly run without "failing gracefully". Of course, there are poor programmers out there who couldn't write such a program to save their lives. And when their program crashes because it came across an input file containing the string "ldu9o0438fjajiofc", they'll protest that it isn't their fault because they obviously couldn't have foreseen that a particular file would contain that particular string. And it is absolutely true that they could not foresee all possible strings the input file might contain. Still, in the eyes of a professional computer scientist, it is extremely poor design.

              link to this | view in chronology ]

              • icon
                nasch (profile), 18 Jun 2015 @ 8:29pm

                Re: Re: Re: Re: Re: Re: Imagination Gap

                Take a simple text editor accepting up to a 1 megabyte file as input. I wont bore you with the details, but explicitly anticipating and testing all possible such input files could not be completed before the heat exhaustion of the universe.

                That's an idiotic example. The editor is designed to handle all possible inputs of a given character set up to the maximum allowed size. Someone typing a character string the developer didn't think of is not an "unforeseen event".

                link to this | view in chronology ]

          • icon
            JP Jones (profile), 17 Jun 2015 @ 2:10pm

            Re: Re: Re: Re: Imagination Gap

            That's poor programmers generally believe. They're wrong, but they use that excuse.

            Apparently poor programmers understand the definition of words. If you plan for something, then you, by definition, acknowledge that it is a possibility. That means the thing is no longer "unforeseen." You foresaw it as a potential issue.

            For example, if someone had programmed the Mars Climate Orbiter to convert between metric and imperial units when a conflict was detected, the mismatch wouldn't have been an unforeseen problem (and would have fixed itself). It wasn't anticipated, and so a $651 million operation ended up disintegrating in the Martian atmosphere.

            Obviously you should try to handle as many eventualities as you can, and then build in error checking to try and make unexpected bugs cause the least amount of issue (and preferably generate a log to identify where the failure was). But no matter how skilled the programmer is they cannot create software solutions to directly handle unforeseen problems, they can only create general error handling to minimize unexpected issues.
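
            One standard defense is to carry the units in the type, so a metric/imperial mix-up is rejected at the module boundary. A sketch with invented classes (not the orbiter's actual codebase; libraries like pint do this generically in Python):

                from dataclasses import dataclass

                @dataclass(frozen=True)
                class NewtonSeconds:
                    value: float

                @dataclass(frozen=True)
                class PoundForceSeconds:
                    value: float

                    def to_si(self) -> NewtonSeconds:
                        return NewtonSeconds(self.value * 4.448222)   # 1 lbf·s in N·s

                def apply_impulse(impulse: NewtonSeconds) -> None:
                    assert isinstance(impulse, NewtonSeconds), "SI units only"
                    # ... thruster math, all in consistent SI units ...

                apply_impulse(PoundForceSeconds(12.0).to_si())   # fine
                # apply_impulse(PoundForceSeconds(12.0))         # caught at the boundary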

            link to this | view in chronology ]

            • icon
              nasch (profile), 17 Jun 2015 @ 2:28pm

              Re: Re: Re: Re: Re: Imagination Gap

              I heard a story of someone brought in front of management and asked to identify the unforeseen problems that could jeopardize a project. Ummm....

              link to this | view in chronology ]

            • identicon
              Anonymous Coward, 18 Jun 2015 @ 7:47pm

              Re: Re: Re: Re: Re: Imagination Gap

              "Apparently poor programmers understand the definition of words."

              How so?

              "For example... a $651 million operation ended up disintegrating into the Martian atmosphere."

              Another example of sterling programming, eh? Pardon me if I tend to disagree.

              link to this | view in chronology ]

          • identicon
            Anonymous Coward, 17 Jun 2015 @ 2:22pm

            Re: Re: Re: Re: Imagination Gap

            You must be a manager who believes a problem is solved by telling other people to solve it. If the problem is not described, at least in general terms, then software cannot be written to deal with it. For example, if a car is not programmed to recognize a tornado, it will not try to avoid one.

            link to this | view in chronology ]

          • icon
            JMT (profile), 17 Jun 2015 @ 6:19pm

            Re: Re: Re: Re: Imagination Gap

            It's a bit hard to defend an argument that depends entirely on an incorrect definition of a common word. I'd just stop there if I were you.

            link to this | view in chronology ]

  • identicon
    Ragnarredbeard, 17 Jun 2015 @ 9:49am

    "Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"

    What kind of irresponsible asshole brings his kid to work and then lets him/her run around unsupervised? Guy is probably a bad parent from go, and is lucky his kid has survived this long. Pull the switch, run the kid over, and prevent his poor genes from filtering down to the next gen.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 10:14am

    The Future Capitalistic Solution

    The capitalistic solution would be the one that minimizes monetary losses. So, perhaps in the future, everyone will have an official monetary value assigned to them by the government. Furthermore, everyone will need to wear GPS trackers so that their location is always known. Finally, driving computers will need access to the databases containing all that information, so that they are aware of the location and value of everyone around them. The computer can then choose the course of action that minimizes losses, or maximizes profits, as the case may be.

    I would imagine that socialists and other "anti-capitalists" would tend to have lower personal values assigned.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 17 Jun 2015 @ 10:17am

    Should Your Self-Driving Car Be Programmed To Kill You If...

    you are declared to be a terrorist?

    link to this | view in chronology ]

  • icon
    Killer_Tofu (profile), 17 Jun 2015 @ 10:27am

    Perspective

    Americans take around 250 billion vehicle trips killing roughly 30,000 people in traffic accidents annually, something we generally view as an acceptable-but-horrible cost for the convenience.

    If only this was shouted every time somebody brought up the threat of terrorists. Perspective would help people realize that giving up our freedoms is NOT a fair trade.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 17 Jun 2015 @ 10:45am

      Re: Perspective

      Autonomous cars versus driving yourself is not really a question of freedom. I mean, do you manually manage your air conditioner? Your furnace?

      link to this | view in chronology ]

  • identicon
    Phil, 17 Jun 2015 @ 11:49am

    I look forward to griefing self driving cars.

    link to this | view in chronology ]

  • identicon
    jebstone, 17 Jun 2015 @ 1:23pm

    What's all this academic discussion about?

    Google has long since solved this deceptively simple puzzle of who lives and who dies: it's called "real-time bidding."

    link to this | view in chronology ]

  • identicon
    WaitWot, 17 Jun 2015 @ 8:25pm

    Selling points

    Liability to the automaker will dictate who dies.

    Family of four in the vehicle vs one person walking along the street ... guess who dies.

    One person in the vehicle vs a group of people waiting by the roadside for a bus ... guess who dies...

    link to this | view in chronology ]

  • identicon
    Jab, 17 Jun 2015 @ 11:55pm

    Adjustable ethics

    Why should the car's choices be set in stone? Maybe there could be an ethics setting, e.g.:
    1 - Save the driver no matter who else gets killed.
    2 - Save the driver if fewer than 5 other people would be killed.
    3 - Never kill anyone else to save the driver.
    etc.

    Put the choice in the driver's hands.
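
    The settings above are concrete enough to sketch directly (hypothetical names; the fewer-than-5 rule comes straight from the list):

        from enum import Enum

        class EthicsSetting(Enum):
            ALWAYS_SAVE_DRIVER = 1
            SAVE_DRIVER_UNDER_FIVE = 2
            NEVER_SACRIFICE_OTHERS = 3

        def sacrifice_driver(setting: EthicsSetting, others_at_risk: int) -> bool:
            """Would this setting accept harm to the driver to spare others?"""
            if setting is EthicsSetting.ALWAYS_SAVE_DRIVER:
                return False
            if setting is EthicsSetting.SAVE_DRIVER_UNDER_FIVE:
                return others_at_risk >= 5
            return True   # NEVER_SACRIFICE_OTHERS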

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 18 Jun 2015 @ 12:15am

    The duty of the car should be to the driver/owner of the car.

    The lives of other people are their own responsibility, whether they are driving themselves or riding in their own autonomous vehicles.
    Simple as that.
    Otherwise your car does not belong to you. It becomes some kind of philosophical judge of morality and the value of life (a god) outside your control. Why would you pay money for such a device?

    Kind of like when you hire a bodyguard: should he try to protect the lives of others when you paid him to protect yours?

    As for the trolley problem, you save your child. It is your biological imperative.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 18 Jun 2015 @ 5:13am

    One person in the vehicle vs a group of people waiting by the roadside for a bus ... guess who dies

    that does it ... i'm going into the rent-a-rider business.

    on the way to work you stop by a roadside kiosk and load up.

    if you've got the money, honey, you can look like an indian train pulling in at your work.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 19 Jun 2015 @ 5:21am

    Simple: Program the car to value self-preservation. I certainly won't put my life in the hands of a computer that calculates me as 'expendable', and I suspect many others won't either. Programming the car to kill the driver in this sort of scenario is likely to prove a massive hurdle to adoption of this technology, assuming it doesn't outright kill it.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 19 Jun 2015 @ 6:38am

      Re:

      So by your logic, if the car has the option of risking the driver by hitting a lorry, or killing you as a pedestrian while avoiding the lorry, it should kill you.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 30 Jun 2015 @ 4:34am

    Why don't we start with planes instead of cars?

    No more TSA
    No more air traffic controllers
    No more delayed flights because the pilot was hung over

    People who don't mind a prostate exam before getting on a plane shouldn't mind being part of the experiment. As a pedestrian I don't like the idea of being an unwilling participant of this experiment.

    Also, planes are already so safe, any increase in accidents/fatalities will be much easier to see.

    link to this | view in chronology ]

    • icon
      PaulT (profile), 30 Jun 2015 @ 5:02am

      Re:

      "Why don't we start with planes instead of cars?"

      Because none of your reasons make any logical sense?

      "No more TSA"

      How would automating the piloting of an aircraft remove the need to determine the safety and security risk of passengers and the items they bring on board? The TSA might arguably be removed for other reasons but nothing an automated system would do would remove the need for them under their current remit.

      "No more air traffic controllers"

      Well, you know, unless you actually want a manual backup system in case of problems. The cars will have a manual override; do you really want to ensure that a pilot can't have a person on the ground helping him make a safe approach and landing in an emergency?

      "No more delayed flights because the pilot was hung over"

      Because that's the only reason why flights get delayed? Not, say, mechanical faults on the ground, medical attention or needing to get drunk passengers off the previous flight? Automating flights will increase the risk of mechanical failure, not reduce it, and you still have to deal with the hundreds of human beings inside the thing every flight.

      "As a pedestrian I don't like the idea of being an unwilling participant of this experiment."

      So, you'd rather a few hundred innocent civilians on board a plane be subjected to it instead? Better hope that plane of yours, with no internal or external manual navigation, doesn't crash-land where you are, as well.

      Oh, and as a pedestrian you're already subject to the "experiment" of criminals, drunk drivers and many other people who cause deaths every year - people who automated driving will often remove as a risk factor.

      "Also, planes are already so safe, any increase in accidents/fatalities will be much easier to see."

      Except the primary reason they're so safe is that the consequences of failure are worse by an order of magnitude.

      I understand you being concerned about safety around this new technology, but your alternative is worse.

      link to this | view in chronology ]

  • identicon
    John Cox, 12 Aug 2016 @ 9:38pm

    30,000 people are killed every year in cars in the USA. That is deemed acceptable, for the "convenience" of cars.

    But what is the convenience or benefit that justifies another 30,000 people being killed every year with firearms in the USA?

    link to this | view in chronology ]

