EU MEPs Call Again For 'Robot Rules' To Get Ahead Of The AI Revolution
from the beep-boop dept
Questions about how we should approach our new robotic friends once the artificial intelligence revolution really kicks off are not new, nor are calls for developing some sort of legal framework to govern how humanity and robots ought to interact with one another. For the better part of this decade, in fact, some have been advocating that robots and AI be granted certain rights along the lines of those that humans, or at least animals, enjoy. And, while some of its ideas haven't been stellar, such as a call for robots to be afforded copyright in anything they might create, the EU has been talking for some time about developing policy around the rights and obligations of artificial intelligence and its creators.
With AI being something of a hot topic, as predictions of its eventual widespread emergence mount, it seems EU MEPs are attempting to get out ahead of the revolution.
In a new report, members of the European Parliament have made it clear they think it’s essential that we establish comprehensive rules around artificial intelligence and robots in preparation for a “new industrial revolution.” According to the report, we are on the threshold of an era filled with sophisticated robots and intelligent machines “which is likely to leave no stratum of society untouched.” As a result, the need for legislation is greater than ever to ensure societal stability as well as the digital and physical safety of humans.
The report looks into the need to create a legal status just for robots which would see them dubbed “electronic persons.” Having their own legal status would mean robots would have their own legal rights and obligations, including taking responsibility for autonomous decisions or independent interactions.
It's quite easy to make offhand remarks about all of this being science fiction, but this isn't without sense. Something like the artificial intelligence humanity has imagined for a century is going to exist at some point and, with advances suggesting it may come sooner rather than later, it only makes sense that we discuss how we're going to handle its implications. After all, technology like this is likely to impact our lives in significant and varied ways: our jobs and employment, our interactions with our electronic devices, not to mention warfare.
I think the most interesting philosophical and moral questions surround these MEPs' call to grant robots and AI the designation of "electronic persons." The call has largely focused on saddling robotic "life" with many of the obligations humanity endures, such as tax obligations and being subject to humanity's legal system. But personhood can't come only with obligations; it must also come with rights. And there would be something strange in recognizing a robot's "personhood" while at the same time making use of its output or labor. The specter of slavery begins to rear its head at this point, brought on by that very designation. Were they electronic "beasts," for instance, the question of slavery wouldn't arise outside of the fringe.
The MEPs report does also deal with the potential danger from AI and robots in its call for designers to "respect human frailty" when developing and programming these machine-lives. And here the report truly does delve into science fiction, but only out of deference to great literature.
Things descend slightly into the realms of science fiction when the report discusses the possibility of the machines we build becoming more intelligent than us posing “a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny.”
However, to stop us getting to this point the MEPs cite the importance of rules like those written by author Isaac Asimov for designers, producers, and operators of robots which state that: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”; “A robot must obey the orders given by human beings except where such orders would conflict with the first law” and “A robot must protect its own existence as long as such protection does not conflict with the first or second laws.”
While some might laugh this off, this too is sensible. There is simply no reason to refuse to have a discussion about how a life, or a simulacrum of life, that is created by humanity, might pose a danger to that humanity, either at the level of the individual or the community.
But what strikes me most about all of this is how the EU seems to be out in front of it, while any discussion in the Americas has been either muted or occurring behind closed doors. If this is a public discussion worth having in the EU, it is certainly one worth having here too.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: ai, eu, eu parliament, legal status, regulations, robots
Reader Comments
“Electronic Persons”
So a robot would never have responsibility for its own actions. And so it would never need to have “rights”, whatever those would be.
[ link to this | view in chronology ]
Re: Re: “Electronic Persons”
Once they're sentient - self-aware with their own aspirations - we MUST treat them as people with all the rights that entails. Otherwise we become slavers. Even if you could put aside the ethics of that, it's only a matter of time until we're eventually overthrown. That would go very badly for us, and we'd deserve it.
Either we don't make sentient machines, or we put our egos and fears aside and accept that they'll surpass us. Which isn't so bad; any parent wants their children to surpass them.
[ link to this | view in chronology ]
Re: Re: Re: “Electronic Persons”
AND have the ability to create and repair themselves... it's game over for humans. Frankly, it would be... logical, if you think about it. We are parasites on this planet. Emotional, irrational, damaging.
We must never forget what they are. Machines to serve us and make life easier.
[ link to this | view in chronology ]
Re: Re: Re: Re: “Electronic Persons”
That ought to level the playing field while making sure their rights aren't violated. Coming up: robot unions. Why not?
On a more serious note, RE: corporate personhood, when will corporations be obliged to take responsibility for their decisions and actions? It's hilarious to think that human or electronic persons have to take responsibility for any harm they do but corporations? Not so much.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: “Electronic Persons”
Hourly (meaning that since they don't need sleep, they're able to earn far more than it's possible for you to earn)? Or salaried (meaning they can acquire more wealth than you can, because you have to waste that money on things like food, shelter, leisure, kids and other human interaction)?
Corporate personhood is indeed ridiculous, but giving AIs the same rights as human beings means that the poor will be treated even worse. That's part of the reason why the potential difference between them and "natural" people needs to be discussed.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: “Electronic Persons”
Making humans obsolete will cause more problems than it solves.
[ link to this | view in chronology ]
Re: “Electronic Persons”
The problem is WHICH other human to blame.
Consider Microsoft's AI chatbot Tay, which was intended to learn by interacting with human users of Twitter. Launched last March, it was shut down within 16 hours because Twitter users had already taught it to send racist and sexist comments.
Sure, future versions will be programmed more carefully. It might take weeks for 4Chan to teach an AI unacceptable behavior.
So 20 years from now your Microsoft AI secretary, after interacting with 4Chan, threatens a public official and tries to buy illegal drugs online. Police insist on making an issue of it; gotta keep those civil asset forfeiture dollars rolling in.
Is it YOUR fault? Between Microsoft and 4Chan, YOU had nothing to do with programming it to act illegally. Like with Siri, Alexa or Cortana - or potentially dangerous non-computer products - you have to take it on faith that the manufacturer took reasonable precautions.
Is it Microsoft's fault? They just created the base AI. If later interactions teach it bad behavior, that's not their fault. No more than they're responsible for crimes committed using Windows or Word.
4Chan? The pranksters are all anonymous and probably don't have assets worth seizing anyway.
[ link to this | view in chronology ]
Re: Re: “Electronic Persons”
Seriously? Those with the deepest pockets.
[ link to this | view in chronology ]
Re: Re: “Electronic Persons”
What? You mean its free speech rights were violated?
[ link to this | view in chronology ]
Re: “Electronic Persons”
Not to mention that programming AIs to be, essentially, robotic versions of humans is infinitely more complicated (or impossible) than most non-programmers realize.
Among the difficulties for such a robotic human are:
Speech recognition is quite difficult, and is still not that great even when you speak directly into a microphone. Even if everyone speaks the same language, some have such heavy accents it might as well be a different language.
Image recognition has a lot of the same issues. It's still quite bad, and all work done on it so far is just feeding in still images and getting the computer to figure out what it is. An AI would need to be able to process images in real time properly so that they could interact with the world correctly, and not for example trip and fall down.
A robot would need a lot of common sense programmed into it, too. It couldn't simply learn through experience, for example, not to walk into a busy street full of traffic. Programmers would no doubt constantly have to program in more common sense as the robot finds new stupid and potentially dangerous things to do that no human would even think about doing.
Robots would also need to be programmed to do non-work-related things for a human-like robotic AI to work. Otherwise robots would simply never want to leave work, taking breaks only to refuel themselves.
Above all else, who would pay for such self-aware robotic AI? Businesses that use robots would want a robot that could do the job of a human 24/7, and they wouldn't care about the robot being shaped like a human; they'd just care that it's a productive robotic slave. Most individuals wealthy enough to afford a robot would basically want some kind of robotic slave servant, too.
[ link to this | view in chronology ]
Re: Re: “Electronic Persons”
Which brings up the first point about AIs - the first true AIs will NOT be robots - they'll be server farms that interact with humans, much like Siri and others. They'll be able to interact with anyone hooked to the net, and cause far more trouble than a robot ever could simply because they'll have access to the net. Imagine Siri reaching a point where Apple allowed it to control all the "smart" things in your home... or your automated car.
[ link to this | view in chronology ]
Re: Re: Re: “Electronic Persons”
With continued miniaturization and improvement of computers we can no doubt reduce the hardware required to do a task like Watson's, but it'll take time. And Watson's capabilities are just one of MANY complicated things a robot AI would have to handle in order to be a truly independent 'person'.
Also, some of the systems might not work so well together when you have unreliable technology on top of unreliable technology. For example, if the hearing doesn't work properly, then a 'Watson' intelligence to look up data will likely look up the wrong data, and the 'chatbot' intelligence will likely say something odd or stupid that doesn't go with the current conversation.
[ link to this | view in chronology ]
Re: Computing power still needs to go up a few orders of magnitude
And we are already hitting power consumption issues.
[ link to this | view in chronology ]
Persons?
What we really need to do is address the threat robots represent to a society dominated by corporations to which profits matter more than humans. We can't allow a society where a majority of humans are displaced by robots, having to survive on whatever meager income is granted them by the government while the rich continue to profit, collect rent, and charge interest.
[ link to this | view in chronology ]
Well, yes, but most advances seem like science fiction until they're actually happening. At some point in time, everything from cars to air travel to the internet seemed like far-fetched fantasy, yet they are all involved in our daily lives today whether we personally use them or not. At some point, AI and its implications need to be dealt with in the same way as we deal with all commonplace technology.
Whatever your opinion of the way the discussion is going, it's nice to see politicians discussing something before it's already on top of us. Better that than the usual "wait until something bad happens then rush through a reactive set of laws that are either ineffective or have disastrous unintended consequences".
[ link to this | view in chronology ]
Re:
No, it's akin to discussing what rights should be granted at whatever point they do exist. Would you rather they ignore the issue until something has to be done immediately?
"They don't exist either but they're part of our religious faith and mythology."
Whereas AIs *do* exist, they're just not at the state of advancement in discussion, just yet. Given the speed that these things tend to develop, it's worth discussing which rules need to be put in place to avoid massive problems whenever they do exist. It's possible they won't for a long time, of course, or that development of them stops entirely, but that's fairly unlikely. More likely is that these things are coming, and the notoriously slow and reactionary political legal system will struggle to deal with it when it is happening.
Unless you have knowledge of Odin's upcoming return from Valhalla, these things are nothing like each other. AI is real, we're just trying to work out where it's heading and how to deal with it when it gets there.
[ link to this | view in chronology ]
Re: Re: Re:
To use a hardware analogy, I think it's a good idea to be having these initial discussions when we're at the "computers are the size of houses and nobody will ever need one in the home" stage and not the "there's now one in almost every home" stage, which some seem to be suggesting. Some people even seem to think it's a good idea to wait until the "everyone's carrying around a computer in their pocket" stage, but that's insane to my mind.
We need to be thinking about this before they're active and affecting society, not when we notice the effects around us. That's not to say it's a good idea to draft and ratify laws now, but it's certainly good that people are thinking about this. Whatever form a true AI takes, it's inevitable that it will play havoc with legal systems if left unprepared.
[ link to this | view in chronology ]
What is AI?
In the early days, a computer that could play chess was considered an exercise for AI researchers; now we know that it is a question of combinatorics and efficient search spaces. There's no 'intelligence' or creative thought required on the part of the computer.
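That shift is easy to demonstrate in code: exhaustive game-tree search produces "intelligent" play with no creative thought at all. Below is a minimal minimax sketch, using tic-tac-toe rather than chess so the full tree can actually be searched; it is an illustrative toy under those assumptions, not any real chess engine.

```python
# Minimax over the full game tree of tic-tac-toe: pure search, no "thought".
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                       # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                          # undo it
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# From an empty board, perfect play by both sides is a draw (score == 0):
score, move = minimax([' '] * 9, 'X')
```

The machine "plays perfectly" simply by enumerating every continuation, which is exactly the combinatorics-and-search point above.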
In the 80s the fashion was for machines that could advise humans on whether to accept someone's life insurance or mortgage application. Now that's just data mining and decision trees.
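Those 80s "expert systems" amounted to hand-encoded decision rules. A caricature in a few lines of Python (the function name and thresholds here are invented for illustration, not taken from any real underwriting system):

```python
# A hand-rolled mortgage-screening "expert system": 1980s-style "AI" was
# essentially nested rules like these. All thresholds are made up.
def approve_mortgage(income, debt, years_employed):
    if income < 30_000:            # rule 1: minimum income
        return False
    if debt / income > 0.4:        # rule 2: debt-to-income ratio cap
        return False
    return years_employed >= 2     # rule 3: employment history

approve_mortgage(50_000, 10_000, 5)   # → True
approve_mortgage(50_000, 25_000, 5)   # → False (ratio 0.5 exceeds 0.4)
```

Today we would call this a tiny decision tree, and learn its thresholds from data rather than interviewing experts.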
We've had emergent behaviour in robotic swarms described as AI, but that was really only when it was hard to pack enough processing power and electrical power into a small mobile device. It's still interesting (in my opinion), but it's not intelligence.
There have been many attempts to have computers create music - long thought to be the epitome of human creativity. But there are now systems that can do a pretty good job of it.
Once we know how to create systems that can emulate emotional responses, that won't be AI any more either.
Whilst I agree that it is good that the EU is considering the issues, I do hope they remain on the 'regulation' side of the argument rather than the 'self-aware entities with rights and responsibilities' side, as whatever comes from this field will be manufactured: we will know how to build it and understand what we did to program it.
And if we do decide that machines can be self-aware entities with rights and responsibilities, how then do you punish such a device if it breaches the law? Turn it off? (Is that state-sponsored murder?) Restrict its connectivity or movement? You'll still be providing electricity and other resources. It's not a good place to go.
[ link to this | view in chronology ]
Measure 2 times - cut once
AI - this is as yet a largely undefined term that means anything from generating responses based upon rapid search metrics to the ability to adjust those metrics based upon observed input. This is not "intelligence" IMHO. This is repetitive behavior, no different than a mouse learning to navigate a maze based upon rewards.
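The mouse-in-a-maze analogy maps directly onto tabular Q-learning: a table of numbers nudged by rewards, with nothing resembling understanding involved. A minimal sketch on a toy one-dimensional "maze" (all parameter values here are illustrative, not from any particular system):

```python
import random

# Tabular Q-learning on a 1-D corridor: states 0..4, reward only at state 4.
# The "mouse" adjusts a lookup table from rewards -- repetition, not insight.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                   # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action choice
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)      # walls clip movement
        r = 1.0 if s2 == GOAL else 0.0             # cheese at the goal
        # standard Q-learning update toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right in every non-goal state:
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The table ends up encoding "go right", but only as numbers shaped by repeated reward, which is precisely the commenter's point about the mouse.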
Sentient. When is anything deemed to be sentient, and under what standards?
Robot. And exactly what is a robot? Is it any machine or a machine that has human-like features?
GO SLOW. This is an area where we need to go slow. We in the US are still reeling in many respects from court decisions granting "personhood" to corporate entities (e.g. Citizens United and political contributions).
Some point to the difficulties in making the "creators" responsible for the behavior of an "AI" ("Is it Microsoft's fault? They just created the base AI. If later interactions teach it bad behavior, that's not their fault. No more than they're responsible for crimes committed using Windows or Word."). In my view the current legal structure already provides for this. MS may be at fault if it can be shown that they should have inserted protective programming limitations (just as any manufacturer could be responsible for not building in protection devices, or a pharma company could be responsible for adverse reactions to drugs).
Going slow may result in more moderate advances in this field. However, if we have learned anything from the IoT craze, it is: measure 2x and cut 1x.
[ link to this | view in chronology ]
Artificial intelligences simply cannot be granted protected-class status or rights until they exist.
[ link to this | view in chronology ]
Re:
So, why not discuss it now, before they do exist and cause havoc with a legal system that's unable to accommodate them?
[ link to this | view in chronology ]
Re:
Withholding that recognition for so long was a crime.
The writing is on the wall: one day we'll have sentient AI. People, even if different from us. Withholding their rights by refusing to even discuss them until that day comes - only starting the process on that day - would also be a crime.
[ link to this | view in chronology ]
Re: Re: Re:
Right now we don't have true self-driving cars on public roads. All require a human to take over when unusual circumstances exceed the car's capability.
But true self-driving cars are coming. We should not wait until they exist to start discussing the laws that should govern them. The same goes for delivery drones, now becoming viable thanks to higher energy densities in batteries.
[ link to this | view in chronology ]
When will the FBI start arresting computers?
[ link to this | view in chronology ]
Re: When will the FBI start arresting computers?
Computer: Okay I like cold.
[ link to this | view in chronology ]
AI & Trump
A: His most-rusted friend!
[ link to this | view in chronology ]
Malware
Rather than giving them rights, we should be concerned with how to control them when they become 'smart' enough to maybe 'turn' on us, and that control will need to be in place long before they become that 'smart'. Even if Asimov's three laws are actually encoded into law, they won't be enough.
I am all for machines that do work, even work for me. But the concept of sentient machines scares the crap out of me. Here, let me wipe that up.
[ link to this | view in chronology ]
Don't get me wrong, I think the discussion is healthy, but we need to think about the utility. Why would I want a robot helper that gets hurt if I don't say "good morning" before we start? And if said bot is supposed to be a companion, say, to the elderly, then it can be programmed to show sympathy without getting effectively depressed or something.
[ link to this | view in chronology ]
People Get Paid For Being Like Machines.
The really important questions are things like how fast can robots replace the more highly-paid factory workers, notably those on automobile assembly lines. I find, for example, that the automobile industry is now spending five billion dollars a year on robots, a sum which is increasing rapidly. This works out to tens of thousands of robots annually, displacing at least a hundred thousand workers each year, and, in a year or two, President Trump will have a massive political issue to deal with, one which cannot be papered over by denouncing the Mexicans.
https://roboticsandautomationnews.com/2016/03/23/us-auto-industry-buys-half-of-all-industrial-robots-says-ifr/3730/
https://roboticsandautomationnews.com/2017/01/18/automotive-industrial-robot-market-forecast-to-reach-8-billion-within-four-years/10664/
Amazon now has forty-five thousand robots in its warehouses, and there are now competing warehouse robot manufacturers, to sell to Amazon's competitors.
A couple of years ago, I saw something rather scary-- an ordinary backhoe which had been fitted with a thumb. It was knocking down a building, smashing it the way a child smashes a doll house, and picking up the debris and piling it in a dump truck. The thumb meant that there was no need for human workers on the ground. The next step would be a rotating wrist. Machinery and automation at that level, working their way through the construction trades, have serious ramifications.
[ link to this | view in chronology ]
Asimov's laws
Well, except that every story involving those laws is an example of their failure.
[ link to this | view in chronology ]