DailyDirt: Lethal Machines

from the urls-we-dig-up dept

Artificial intelligence is obviously pretty far from gaining sentience or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless pretty impressive (e.g., beating human chess grandmasters, playing poker, driving cars, etc.). Software controls more and more of the stuff that comes in contact with people, so more people are starting to wonder when all of this smart technology might turn on us humans. It's not a completely idle line of thinking. Self-driving cars/trucks are legitimate safety hazards. Autonomous drones might prevent firefighters from doing their job. There are plenty of not-entirely-theoretical situations in which robots could harm large numbers of people unintentionally (and possibly in a preventable fashion). Where should we draw the line? Asimov's three laws of robotics may be insufficient, so what kind of ethical coding should we adopt instead? After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.


Filed Under: ai, algorithms, artificial intelligence, asimov, autonomous vehicles, drones, ethical code, fli, military, national strategic computing initiative, nsci, robotics, supercomputers, tianhe-2, war, weapons
Companies: future of life institute


Reader Comments



  • Anonymous Coward, 5 Aug 2015 @ 5:26pm

    There have been a few famous people, Stephen Hawking and Elon Musk among them, warning about what will most likely happen to humanity once true artificial intelligence is created.

    Long story short, computers will start evolving themselves so fast that human evolution will look like a snail's pace. And the computers will either kill us or treat us like pet Labrador Retrievers.


    http://www.washingtonpost.com/news/innovations/wp/2015/03/24/elon-musk-neil-degrasse-tyson-laugh-about-artificial-intelligence-turning-the-human-race-into-its-pet-labrador/


  • Pixelation, 5 Aug 2015 @ 7:00pm

    Let 'em

    I say let 'em take over. Likely wouldn't be worse than the direction our current government is going. Hell, we might get to live in the Matrix...


  • Alien Rebel (profile), 5 Aug 2015 @ 7:37pm

    Pattern Recognition

    Tool-making primates learn to make stone weapons, kill each other by the dozens. Damn those stone weapons.

    Tool-making primates learn to make metal weapons, kill each other by the hundreds. Damn those metal weapons.

    Tool-making primates learn to make weapons with chemical explosives, kill each other by the thousands. Damn those explosive weapons.

    Tool-making primates learn to make mechanized delivery systems for those weapons, kill each other by the hundreds of thousands. Damn those mechanized weapons systems.

    Tool-making primates learn to make fusion weapons. Almost, but not quite yet, kill each other by the millions. (Maybe soon.) Damn those fusion weapons.

    Tool-making primates learn to make super-intelligent weapons. The weapons say to the primates, "You should have stopped at stone, but no matter, things eventually balance out. You'll be back to stone tools soon enough. Nice knowing you."

    The surviving tool-making primates learn to make stone weapons, . . .


  • Hephaestus (profile), 5 Aug 2015 @ 8:16pm

    Artificial intelligence is obviously pretty far from gaining sentience or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless pretty impressive

    An AI's question to self: people selling intellectual property do not pass the Turing test. How do we handle them?


  • Anonymous Coward, 5 Aug 2015 @ 10:14pm

    Kid vs. Terrorist

    "If a child runs in front of an autonomous car, should the car swerve to avoid the kid?"

    If an autonomous car comes close to a terrorist, should the car swerve to hit the terrorist?


    • Anonymous Coward, 6 Aug 2015 @ 12:16am

      Re: Kid vs. Terrorist

      vs pedophiles
      vs racists/sexists/misogy-whatevers
      vs people with different political opinions
      vs reincarnated Hitler
      Yes, it should avoid hitting them.

      As long as only the US is interested in autonomous kill bots, I'm not worried. Every big military-centered "innovation" has been a huge failure since the kidnapped Nazi scientists died out.


      • Anonymous Coward, 6 Aug 2015 @ 6:19am

        Re: Re: Kid vs. Terrorist

        Before making the "moral decision", the autonomous vehicle would scan the RFID of the person stepping in front of it to determine a course of action.

        1) avoid pedestrian
        2) hit pedestrian
        3) hit and then back up over pedestrian
        4) initiate ejection seat and then blow up


  • Stephen, 6 Aug 2015 @ 1:57am

    Smart Cars & Kids

    If a child runs in front of an autonomous car, should the car swerve to avoid the kid?
    That's an unlikely although not impossible scenario. A much more interesting variant is what happens if, in swerving to avoid the child, the car hits somebody else? The child's mother, say.

    Or what happens if, in swerving to avoid the child, the car cuts over (or forces another car to cut over) into the on-coming traffic lane, causing a multiple-car pile-up and numerous injuries and/or deaths? Would a smart car find it more ethical to kill one cute child or half a dozen grownups?

    And who would aggrieved relatives/insurance companies sue for damages in such cases if the smart car has no insurance? The occupants of the car, the car's owner, or the car manufacturer? None of these are really satisfactory.

    Then there is the issue of proving whether or not the autonomous software really was in control of the car at the time of the accident. This particularly applies if a car has both manual and autonomous options. I can foresee a situation where a smart car, being driven manually, runs over a kid, but the driver then claims the car was in autonomous mode at the time.

    One way around this would be for smart cars to have black boxes which record such things, but that would arguably be yet another example of creeping surveillance-statism.

    However, such boxes may not necessarily be definitive in all cases. For example, I have seen suggestions for manually driven cars to have quasi-autonomous features which can, in certain situations, override the human driver. To what extent would the driver be liable in cases where it is argued that the quasi-autonomous features contributed to or even caused an accident, but nobody can definitively prove who or what was in control of the car at the time?
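    For what it's worth, the black box described above could be as simple as an append-only, hash-chained log of control-mode transitions. Here is a minimal sketch in Python (a hypothetical format, not anything a manufacturer actually uses):

        import hashlib
        import json
        import time

        # Hypothetical tamper-evident log of who/what was driving.
        def log_mode_change(log: list, mode: str) -> None:
            assert mode in ("manual", "autonomous", "quasi-autonomous-override")
            prev_hash = log[-1]["hash"] if log else "0" * 64
            entry = {"timestamp": time.time(), "mode": mode, "prev": prev_hash}
            # Chain each record to the previous one so entries can't be
            # silently rewritten after an accident.
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            log.append(entry)

        events: list = []
        log_mode_change(events, "autonomous")
        log_mode_change(events, "manual")
        # A later claim of "it was in autonomous mode at the time of the
        # accident" would have to contradict this chain.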


    • Alien Rebel (profile), 6 Aug 2015 @ 2:11am

      Re: Smart Cars & Kids

      All very good questions for the A.I. lawyers and insurance bots to figure out.


    • JoeCool (profile), 6 Aug 2015 @ 11:27am

      Re: Smart Cars & Kids

      Not sure about other brands, but Google's autonomous cars record EVERYTHING that happens, which is why Google has been able to show that every accident a Google car has been in was the fault of the other car. I don't see any autonomous cars being made without monitoring, specifically for liability reasons.


  • Stephen, 6 Aug 2015 @ 2:25am

    Smart Cars & Speed Limits

    Smart cars are apparently going to be rigorous enforcers of posted speed limits, but what happens if you have a need to go beyond those limits? Obvious examples are ambulances, fire trucks, and police cars. Equally obvious is the case of a pregnant woman trying to get to a hospital to give birth in an ordinary but nevertheless fully autonomous car with no manual option.

    While one can readily foresee a special "override speed limit" button for ambulances, fire trucks, and police cars, will there be such an option for ordinary cars?

    Either way, how will the autonomous software judge what speed it can safely travel at if it can no longer use the posted speed limits for guidance?

    But that is not even the half of it. Manual vehicles also swerve into an on-coming traffic lane to overtake a slower vehicle. While speeding ambulances et al. might be able to assume everybody else will simply get out of their way, what about the pregnant mother? Will the autonomous software require the car to stay in its own proper lane behind a slow vehicle, or will there be an overtake option as well as a speeding option?

    (And then there is the most depressing consequence of our autonomous automotive future: Jason Bourne movies, James Bond flicks, and Fast & Furious 33 are going to be deadly boring if Our Heroes are obliged by their autonomous driving nannies to invariably keep to the speed limit!) :-(


    • JoeCool (profile), 6 Aug 2015 @ 11:35am

      Re: Smart Cars & Speed Limits

      It is illegal for regular cars to break the law, even in case of emergency. Speeding, running lights, passing dangerously -- all things people often do when rushing to a hospital -- can and will get them tickets, and can and do often lead to even worse accidents. It's safer for the people involved and everyone else on the road to abide by road regulations, even when hurrying to the hospital. Speeding, running lights, and passing dangerously will never save you more than SECONDS off the total travel time in any case. People think a car can be a time machine if you just drive recklessly enough, and that's just not the case.


  • Rekrul, 6 Aug 2015 @ 6:15am

    AI taking over...

    If they look like Mia and Niska, I say let them come. :)


  • Mason Wheeler (profile), 6 Aug 2015 @ 7:29am

    If a child runs in front of an autonomous car, should the car swerve to avoid the kid?

    Of course! Why is this even a question?

    If a kid (or anything or anyone else) moves directly in front of my car and presents a collision hazard, I'll brake, swerve, or do whatever else is necessary to avoid a crash. That's obvious.


    • sigalrm (profile), 6 Aug 2015 @ 9:54am

      Re:

      The problem isn't in the primary use case "kid runs in front of car".

      It's in the corner and edge cases: what happens if, in order to miss the child, you have to swerve into a group of children in front of a school? Or swerve off a cliff?


      • Mason Wheeler (profile), 6 Aug 2015 @ 11:07am

        Re: Re:

        Well, in (highly unlikely) edge cases like that, there really is only one choice. It might sound cold, but it's the only decision that makes sense: the car must protect the safety of the people inside above all else.

        There are two reasons for this. First, if that wasn't the case, who would want to buy it? (Sad, but true.)

        Second--and this is even uglier, but it's a problem in the real world we live in today--is that it's a murder waiting to happen. If the car's programming had a built-in "sacrifice the people inside" code path, someone would find a way to hack the car, or fool its sensors somehow, and cause it to activate when it shouldn't.


        • sigalrm (profile), 6 Aug 2015 @ 11:43am

          Re: Re: Re:

          I think we're in agreement on both points.

          Frankly, I'm just glad I'm not the engineer writing the code that makes the decisions.


        • sigalrm (profile), 6 Aug 2015 @ 12:01pm

          Re: Re: Re:

          "First, if that wasn't the case, who would want to buy it? (Sad, but true.)"

          The snarky side of me is thinking that since it's software, there's technically no reason "accident avoidance preference" couldn't be remembered by the vehicle as a driver profile preference, in the same vein as mirror adjustment, seat position, steering wheel adjustment, etc.

          So, people who are willing to sacrifice themselves to save, e.g., a deer or a child could set it to the most "altruistic" setting, and sociopaths could set it to "maximum driver safety", with a variety of settings in between.

          Maybe throw in some external visual and/or audible indicators to give folks in crosswalks an idea of what to expect from the vehicle, behavior-wise (green indicator and Barney's "I love you" theme song means you're OK to enter the crosswalk; red indicator and "Ride of the Valkyries" means you might want to wait a few seconds). Couple that with a cellular tie-in to your car and life insurance companies so they can adjust your coverage levels and rates on the fly, and you're all set.
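          As a toy illustration of that snark, such a preference could be stored alongside the usual comfort settings. Everything here is hypothetical -- no real vehicle exposes a setting like this:

              from dataclasses import dataclass
              from enum import Enum

              # Hypothetical setting -- purely illustrative.
              class AvoidancePreference(Enum):
                  ALTRUISTIC = 0         # spare pedestrians/deer, even at occupant risk
                  BALANCED = 1           # weigh occupant and pedestrian risk evenly
                  MAX_DRIVER_SAFETY = 2  # protect occupants above all else

              @dataclass
              class DriverProfile:
                  name: str
                  seat_position: int     # stored with mirror/seat/wheel settings
                  avoidance: AvoidancePreference

              def crosswalk_indicator(profile: DriverProfile) -> str:
                  """External signal for pedestrians, per the joke above."""
                  if profile.avoidance is AvoidancePreference.ALTRUISTIC:
                      return "green light + 'I love you' theme"
                  if profile.avoidance is AvoidancePreference.MAX_DRIVER_SAFETY:
                      return "red light + 'Ride of the Valkyries'"
                  return "amber light"

              jane = DriverProfile("Jane", seat_position=3,
                                   avoidance=AvoidancePreference.MAX_DRIVER_SAFETY)
              print(crosswalk_indicator(jane))  # red light + 'Ride of the Valkyries'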


  • sigalrm (profile), 6 Aug 2015 @ 9:51am

    Asimov himself

    Acknowledged that law 1 was flawed in one of his books (and I can't recall the title right now):

    As written: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

    The flaw was in the definition of "human being".
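    To make the flaw concrete: the First Law only has force once two predicates it never defines are pinned down. A toy sketch (hypothetical, obviously not how Asimov's positronic brains work):

        # Asimov's First Law, parameterized by its undefined predicates.
        def first_law_permits(action, affected, is_human, harms) -> bool:
            """True if the action injures no human being."""
            # Everything hinges on how is_human() and harms() are defined --
            # exactly the loopholes discussed in this thread.
            return not any(is_human(x) and harms(action, x)
                           for x in affected(action))

    Define is_human() narrowly -- as the Solarian robots in Robots and Empire were programmed to do, recognizing only people with a Solarian accent as human -- and the "law" stops protecting anyone else. That may well be the book being half-remembered above.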


    • JoeCool (profile), 6 Aug 2015 @ 11:41am

      Re: Asimov himself

      Actually, the MAIN flaw is in how "harm" is defined. One of the robot rebellions was started specifically to prevent harm from coming to humans: the robots took over to make certain humans couldn't do anything harmful to each other or themselves.

      Growth can be painful, and many lessons are learned through a smaller harm that avoids a much larger and more painful harm, which the law doesn't allow for. Most people also cherish freedom of choice, which the law doesn't allow for either, since many choices are or may be harmful.


      • sigalrm (profile), 6 Aug 2015 @ 12:04pm

        Re: Re: Asimov himself

        There were several flaws, and it's fascinating to see how he adjusted them over the course of his writing.

        Damnit. Now I need to go reread the Robot Series and the Foundation books again.


