DailyDirt: Lethal Machines
from the urls-we-dig-up dept
Artificial intelligence is obviously pretty far from gaining sentience or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless pretty impressive (e.g., beating human chess grandmasters, playing poker, driving cars). Software controls more and more of the stuff that comes into contact with people, so more people are starting to wonder when all of this smart technology might turn on us humans. It's not a completely idle line of thinking. Self-driving cars and trucks are legitimate safety hazards. Autonomous drones might prevent firefighters from doing their jobs. There are plenty of not-entirely-theoretical situations in which robots could unintentionally (and perhaps preventably) harm large numbers of people. Where should we draw the line? Asimov's three laws of robotics may be insufficient, so what kind of ethical coding should we adopt instead?
- An open letter from the Future of Life Institute (FLI) warns against the possibility of an artificial intelligence (AI) arms race that could threaten humanity. Autonomous weapons are a looming reality that could hinder beneficial AI research -- as well as systematically kill people without "meaningful human control" behind the algorithms. [url]
- Autonomous cars with an ethical code in addition to just software code... are getting increasing attention as self-driving vehicles on public roads grow ever more likely. If a child runs in front of an autonomous car, should the car swerve to avoid the kid? There is an ethical dilemma inherent in making vehicles smart enough to tell the difference between a kid and some other moving object, but these questions might be avoided entirely by making smart systems only so smart and no smarter -- minimizing liability for the companies making the machines. [url]
- A precursor to an artificial intelligence arms race might be a supercomputer hardware arms race, and we're already ordering up a National Strategic Computing Initiative (NSCI) to build an exaflop computer to rival China's Tianhe-2. Sure, artificial intelligence doesn't need to be developed on superfast computers, but if fast computers are considered potential weapons, it's not a huge leap of logic to see a supercomputer arms race as a military threat. [url]
Filed Under: ai, algorithms, artificial intelligence, asimov, autonomous vehicles, drones, ethical code, fli, military, national strategic computing initiative, nsci, robotics, supercomputers, tianhe-2, war, weapons
Companies: future of life institute
Reader Comments
Long story short, computers will start evolving themselves so fast that human evolution will look like a snail's pace. And the computers will either kill us or treat us like pet Labrador Retrievers.
http://www.washingtonpost.com/news/innovations/wp/2015/03/24/elon-musk-neil-degrasse-tyson-laugh-about-artificial-intelligence-turning-the-human-race-into-its-pet-labrador/
Let 'em
Pattern Recognition
Tool-making primates learn to make metal weapons, kill each other by the hundreds. Damn those metal weapons.
Tool-making primates learn to make weapons with chemical explosives, kill each other by the thousands. Damn those explosive weapons.
Tool-making primates learn to make mechanized delivery systems for those weapons, kill each other by the hundreds of thousands. Damn those mechanized weapons systems.
Tool-making primates learn to make fusion weapons. Almost, but not quite yet, kill each other by the millions. (Maybe soon.) Damn those fusion weapons.
Tool-making primates learn to make super-intelligent weapons. The weapons say to the primates, "You should have stopped at stone, but no matter, things eventually balance out. You'll be back to stone tools soon enough. Nice knowing you."
The surviving tool-making primates learn to make stone weapons, . . .
An AI's question to itself: people selling intellectual property do not pass the Turing test. How do we handle them?
Kid vs. Terrorist
If an autonomous car comes close to a terrorist, should the car swerve to hit the terrorist?
Re: Kid vs. Terrorist
vs. racists/sexists/misogy-whatevers
vs. people with different political opinions
vs. reincarnated Hitler
Yes, it should avoid hitting them.
As long as only the US is interested in autonomous kill bots, I'm not worried. Every big military-centered "innovation" has been a huge failure since the kidnapped Nazi scientists died out.
Re: Re: Kid vs. Terrorist
1) avoid pedestrian
2) hit pedestrian
3) hit and then back up over pedestrian
4) initiate ejection seat and then blow up
Smart Cars & Kids
Or what happens if, in swerving to avoid the child, the car cuts over (or forces another car to cut over) into the oncoming traffic lane, causing a multiple-car pile-up and numerous injuries and/or deaths? Would a smart car find it more ethical to kill one cute child or half a dozen grownups?
And who would aggrieved relatives/insurance companies sue for damages in such cases if the smart car has no insurance? The occupants of the car, the car's owner, or the car manufacturer? None of these are really satisfactory.
Then there is the issue of proving whether or not the autonomous software really was in control of the car at the time of the accident. This particularly applies if a car has both manual and autonomous options. I can foresee a situation where a smart car, being driven manually, runs over a kid but the driver then claims the car was in autonomous mode at the time.
One way around this would be for smart cars to have black boxes which record such things, but that would arguably be yet another example of creeping surveillance-statism.
Even then, such boxes may not be definitive in all cases. For example, I have seen suggestions for manually driven cars to have quasi-autonomous features which can, in certain situations, override the human driver. To what extent would the driver be liable when it is argued that the quasi-autonomous features contributed to or even caused an accident, but nobody can definitively prove who or what was in control of the car at the time?
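To make the black-box idea concrete, here's a minimal sketch (plain Python; every name in it is hypothetical, not any manufacturer's actual system) of how a control-mode log could at least be made tamper-evident, by chaining each entry to the hash of the one before it:

    import hashlib
    import json
    import time

    # Hypothetical "black box" control-mode recorder. Each entry notes who
    # (or what) was driving and is chained to the previous entry's hash, so
    # investigators can detect edited or deleted records after a crash.
    class ControlModeRecorder:
        def __init__(self):
            self.entries = []
            self.prev_hash = "0" * 64  # genesis value for the hash chain

        def record(self, mode):
            # mode: e.g. "manual", "autonomous", or "quasi_autonomous_override"
            entry = {"t": time.time(), "mode": mode, "prev": self.prev_hash}
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self.prev_hash = digest

        def verify(self):
            # Recompute the whole chain; tampering with any entry breaks
            # every hash that follows it.
            prev = "0" * 64
            for e in self.entries:
                body = {k: e[k] for k in ("t", "mode", "prev")}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or digest != e["hash"]:
                    return False
                prev = e["hash"]
            return True

This doesn't solve the surveillance-statism problem, of course; it only means that whatever log does exist, neither the driver nor the manufacturer can quietly rewrite it after the fact.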
Smart Cars & Speed Limits
While one can readily foresee a special "override speed limit" button for ambulances, fire trucks, and police cars, will there be such an option for ordinary cars?
Either way, how will the autonomous software judge what speed it can safely travel at if it is no longer able to use the posted speed limits for guidance?
But that is not even the half of it. Manual vehicles also swerve into the oncoming traffic lane to overtake a slower vehicle. While speeding ambulances et al. might be able to assume everybody else will simply get out of their way, what about the pregnant mother being rushed to the hospital? Will the autonomous software require the car to stay in its own proper lane behind a slow vehicle, or will there be an overtake option as well as a speeding option?
(And then there is the most depressing consequence of our autonomous automotive future: Jason Bourne movies, James Bond flicks, and Fast & Furious 33 are going to be deadly boring if Our Heroes are obliged by their autonomous driving nannies to invariably keep to the speed limit! :-( )
Of course! Why is this even a question?
If a kid (or anything or anyone else) moves directly in front of my car and presents a collision hazard, I'll brake, swerve, or do whatever else is necessary to avoid a crash. That's obvious.
Re:
The trouble is in the corner and edge cases: what happens if, in order to miss the child, you have to swerve into a group of children in front of a school? Or swerve off a cliff?
Re: Re:
There are two reasons why the car should never sacrifice its occupants. First, if that weren't the case, who would want to buy it? (Sad, but true.)
Second -- and this is even uglier, but it's a problem in the real world we live in today -- it's a murder waiting to happen. If the car's programming had a built-in "sacrifice the people inside" code path, someone would find a way to hack the car, or fool its sensors somehow, and cause it to activate when it shouldn't.
Re: Re: Re:
Frankly, I'm just glad I'm not the engineer writing the code that makes the decisions.
Re: Re: Re:
The snarky side of me is thinking that since it's software, there's technically no reason "accident avoidance preference" couldn't be remembered by the vehicle as a driver profile preference, in the same vein as mirror adjustment, seat position, steering wheel adjustment, etc.
So, people who are willing to sacrifice themselves to save, e.g., a deer or a child could set it to the most "altruistic" setting, and sociopaths could set it to "maximum driver safety", with a variety of settings in between.
Maybe throw in some external visual and/or audible indicators to give folks in crosswalks an idea of what to expect from the vehicle, behavior-wise: a green indicator and Barney's "I love you" theme song means you're OK to enter the crosswalk, while a red indicator and "Ride of the Valkyries" means you might want to wait a few seconds. Then couple it with a cellular tie-in to your car and life insurance companies so they can adjust your coverage levels and rates on the fly, and you're all set.
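And since it really would just be a profile setting, here's a rough sketch of the idea (Python; the names, the 0-to-1 "altruism" knob, and the risk numbers are all made up for illustration, not anyone's real system):

    from dataclasses import dataclass

    # Hypothetical driver profile: the "accident avoidance preference" sits
    # next to mundane settings like seat position and mirror angle.
    @dataclass
    class DriverProfile:
        seat_position: int
        mirror_angle: float
        altruism: float  # 0.0 = "maximum driver safety", 1.0 = self-sacrificing

    def choose_maneuver(profile, options):
        # options: list of (name, risk_to_occupants, risk_to_others) tuples.
        # Blend the two risks according to the profile, pick the least bad.
        def cost(option):
            _, occupant_risk, other_risk = option
            return ((1 - profile.altruism) * occupant_risk
                    + profile.altruism * other_risk)
        return min(options, key=cost)[0]

    profile = DriverProfile(seat_position=3, mirror_angle=12.5, altruism=0.8)
    options = [("brake_hard", 0.2, 0.6), ("swerve_left", 0.7, 0.1)]
    print(choose_maneuver(profile, options))  # -> "swerve_left"

The insurance tie-in would presumably just read that one float and reprice you on the fly.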
Asimov himself
As written: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
The flaw was in the definition of "human being".
Re: Asimov himself
Growth can be painful, and many lessons are learned by suffering a smaller harm to avoid a much larger and more painful one, which the First Law doesn't allow for. Most people also cherish freedom of choice, which the law likewise doesn't allow for, since many choices are or may be harmful.
Re: Re: Asimov himself
Dammit. Now I need to go reread the Robot series and the Foundation books again.