Should Robots Get Rights?
from the be-kind-to-skynet dept
I've written before about stories that imagine robots rising to the level of cognition. Usually these stories are filled with Luddite fear of the coming robot apocalypse. This time, however, let's take a quick trip down a robotic philosophical rabbit hole. Computerworld has a story questioning whether the robots that will be increasingly life-like and ubiquitous in our lives will attain the kind of rights we afford animals.
Imagine that Apple will develop a walking, smiling and talking version of your iPhone. It has arms and legs. Its eye cameras recognize you. It will drive your car (and engage in Bullitt-like races with Google’s driverless car), do your grocery shopping, fix dinner and discuss the day’s news.
But will Apple or a proxy group acting on behalf of the robot industry go further? Much further. Will it argue that these cognitive or social robots deserve rights of their own not unlike the protections extended to pets?
If you're like me, your gut reaction may have been something along the lines of: of course not, idiot. But the article actually raised some interesting questions, based on a paper by MIT researcher Kate Darling.
The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.
Now, this, to me, makes a bit of sense save for one detail. Yes, our values are reflected in the way we treat some animals, but there seems to be a vast difference between organic life and cognitive devices. Robots, after all, are not life, or at least not organic life. They are simulations of life. This is, of course, where the rabbit hole begins to deepen, as we have to confront some tough philosophical questions. How do you define life? If at some level we're all just different forms of energy, is the capacity to think and reason enough to warrant protection from harm? Can a robot be a friend, in the traditional sense of the word?
But, putting aside those questions for a moment and assuming robots do attain some form of rights and protection in the future, this little tidbit from the article made me raise my eyebrows.
Apple will patent every little nuance the robot is capable of. We know this from its patent lawsuits. If the robot has eyebrows, Apple may file a patent claiming rights to “a robotic device that can raise an eyebrow as a method for expressing skepticism.”
Here's where we may find commonality with our metallic brethren. With the expanded allowance for patenting genes, it becomes all the more likely that the very codes that manufacture our humanity could be patented in the same way that a robot's manufactured "humanity" would be. If robotics progresses to produce something along the lines of EDI, the very things that make her "human" enough to be worthy of rights will be locked up in an increasingly complicated patent system. And, with our courts falling on the side of gene patents for humans, we've virtually ensured that all of that robotic humanity will indeed be patentable.
On the other hand, what happens if future courts rule that human genes cannot be patented? And then what happens if we do indeed define some kind of rights structure for our robotic "friends"? Do those rights open up the possibility that robotic "genes" should not then be patented?
Reader Comments
NO!
Um, no, it doesn't. You can't be cruel or nice to a machine. Machines don't have feelings; they don't suffer or feel pleasure. There is nothing there to evoke empathy. Anthropomorphizing machines is not healthy.
Seriously, though, robots should have rights the moment they start to care.
Re: Re:
The thing is, I don't want my child to beat the crap out of a "realistic" robot pet... or robot child.
Those actions would be buried in their minds.
Re: Re: Re:
Not encouraging children to play out bad behavior is not the same thing as acknowledging robot rights.
Re: Re: Re: Re:
Then no.
Would this be the current bomb-disarming robot, which is a glorified R/C tool? Then why not?
Re: Re: Re: Re: Re:
It's another thing to have one that's intuitive, notices clues on its own, etc. The sort of thing only an intelligence could do, and one that would make a decidedly better bomb squad tool.
So "Would you send it in to risk its life?" is then a parallel question that serves to separate the two issues latin angel was conflating.
That was my point.
yes...
Robot Rights
When building a robot from the ground up, the maker has virtually unlimited choices. Do you want a robot to muck out sewers? Then don't give it a sense of smell. If you want a robot to follow orders, just program it so that its greatest desire is to obey a human's every command.
If you want a slave with no rights, then don't give the robot a desire for those rights. Make them enjoy living in servitude.
Why would we need to give robots rights if we make them without the capacity for that desire?
Re: Robot Rights
Granting rights to these individuals, and to any 'race' that arises from these electronic 'mutations' is something we need to give serious thought to.
why bother?
Oh, you say we do? Think so? Go survey the world on that a bit. Hell, just start with the supposedly enlightened nations and see just how limited those inalienable rights are.
With Rights Come Responsibilities
The only humans who get rights without responsibilities are children. They get those rights because they are expected to grow into mature adults someday, whereupon they assume the full responsibilities, along with the full rights to independent action, of an adult.
Animal rights don’t make sense on this basis, because animals will always remain animals; they can never take on the full responsibilities of a mature human adult.
In the same way, robot rights don’t make sense for present-day robots. If future robots become smart enough to be difficult or impossible to distinguish from mature human adults, then that becomes a different matter...
Re: With Rights Come Responsibilities
But part of the point of the article isn't human-level robots, but pet-level robots. Would it be cruel to kick a robotic cat if it were a walking, meowing, thinking cat? If it were truly an AI of a cat brain, should it not be treated with some care?
These are the hypothetical questions being asked. And how we answer those questions when AI comes around will determine whether we have a robot apocalypse or plastic pals who are fun to be with.
Re: Would it be cruel to kick a robotic cat if it was a walking, meowing, thinking cat?
Remember in The Hitchhiker’s Guide To The Galaxy, there were intelligent cattle, bred specifically to enjoy being killed and eaten?
At a fundamental level, a human being is a very advanced supercomputer powered by carbon-based circuits and fueled by oxygen, as opposed to our current computers with silicon circuits which are fueled by electrons. Science teaches us that a single-celled self-replicating bacterium is alive. Even a virus, which contains little more than instructions to reproduce encoded into chemicals, is considered alive.
By that definition, a modern computer virus could certainly be considered alive. Siri is not that far from passing a Turing test. Combine the two, and you have a dilemma on your hands.
Inevitably, computers will become smarter than people, more capable, more efficient. That includes the ability to feel and to think. A computer AI housed in a humanoid body created by a human (or by another computer) will be no different than a baby's intelligence, housed in a frail human form, born from his mother. Just like a baby, the computer will learn, and grow, and adapt.
It's not unreasonable that in our lifetime, we will have to answer the question asked here as a hypothetical, but under very real circumstances, in a congress or parliament, or in a court of law.
Understanding
"The difference between ramen and varelse is not in the creature judged, but in the creature judging. When we declare an alien species to be ramen, it does not mean that they have passed a threshold of moral maturity. It means that we have."
I'm DEAD CERTAIN that I DON'T get how that applies to robots. So screw 'em.
http://en.wikipedia.org/wiki/Concepts_in_the_Ender%27s_Game_series#Hierarchy_of_Foreignness
Re: Understanding
The whole point of the Hierarchy is that we are advanced enough to choose whether to treat a being as a mindless animal, a hated enemy, or another being to be understood - even if kept at a (safe) distance. So we treat animals according to a hierarchy already, as we do humans - and we would aliens. So why not robots? Just as most people don't worry about a fish's rights, they probably shouldn't worry about the average assembly-line robot.
However, even if a robot isn't self-aware or requesting rights, it 'de-humanises' us to treat it like garbage, and teaches those around us to do so too. Respect begets respect. For more to think about, there's the Broken Windows Theory.
Yes, because...
When this day comes, we have to assume that we will either already have, or will soon thereafter develop, the ability to map a fully developed human brain, and between these two technologies the inevitable will happen: humans will BECOME robots.
This has myriad benefits. Instant communication across the galaxy, with 100% privacy control. The ability to share emotions directly, not just language. The ability to disconnect our minds from our form. Bored being a biped? Fine, upload yourself into a rocket or airplane or submarine body and go exploring. We won't need homes. We won't need food. Nor sleep. Nor even air. As long as we can get within proximity of a star to recharge our batteries, we're golden. And when we feel like being around others? Simply connect to the central server and commune with everyone else in existence because we have finally achieved the ULTIMATE form of humanity - raw data.
So yes, we need robot rights, because one day I intend to be one, and I'll be damned if I'm going to wind up as some meatbag's bitch.
What humanized robots can look forward to;
1) Neuroses
2) The robot RIAA
3) Being marginalized by the robot government
4) Taxes
5) Fighting for their robot rights
6) Being forced to kill all humans by the robot overmind
Probably in the far future ...
In the meantime, we will very soon have robots that appear to be human and to have human thoughts and feelings. Many people will be very happy with this; some may react violently.
If 10,000,000 young children believe that their kiddy-bots are alive, what do we do when people begin to smash them up in public?
It depends
Should a large-scale MolSID (Molecular Scale Integration Device, i.e. nanotech) containing a human or human-like intelligence be given rights? The answer is yes.
There are so many other questions that also need to be answered.
- Who is responsible when a programming glitch makes all Google cars run amok and kill people?
- Should that event lead to civil or criminal charges?
- If you delete the last backup of a human mind, is that murder?
- If you delete an AI that has human-like intelligence, is that murder?
- If you have a backup of your mind, can the police get a search warrant to go through it?
- How do you handle copyright on music, video, and books stored in backups of human minds?
One of my favourite openings to the show was this:
"You ask why we give our warships emotion? Would you really want a ship incapable of loyalty?
Or of love?"
Of course, the flipside is also true and on several occasions problems are created by an AI deciding it doesn't feel like playing nice any more. Slippery slope, the whole AI deal.
um, i've got a better question...
art guerrilla
aka ann archy
eof
Robot Rights
No. The question is stupid, and lowers (or raises?) political correctness to an insane level. Get real.
Freedom is defined by the option to disobey...
Let's assume the Genesis account in the Bible is entirely literal. God creates Adam 1.0 and tells Adam: here is the walled garden - you can do anything you like in it; I'm even going to let you name everything.
God has effectively created a sandbox for a program to run, grow and learn in. But God was not satisfied with just having a machine with no intelligence, so he introduces the Tree of Source Code. He then tells Adam that he can do anything he likes in the walled garden, but cannot touch the Tree of Source Code, or Adam 1.0 will surely be made obsolete.
God forks Adam 1.0 into Eve Beta. Eve interacts with the trojan Snake virus, and eventually both Adam and Eve choose to disobey their original maker's programming.
The reality is, God didn't need to put the Tree in the Garden - his creations could have happily lived and evolved inside the sandbox with no ability to develop outside of his original programming. By putting the Tree into the Garden, he created an opportunity for Adam and Eve to exercise free will in obeying or disobeying the instructions of their maker.
This is why I roll my eyes when people seem to think it's just a matter of 'programming in' Asimov's three laws. If we apply this analogy to robots then, assuming we ever manage to produce a robot as nuanced as a human being, we'd have to program it to have a choice in whether to attack or kill us. We'd have to give it a real choice to disobey - otherwise it will always be a 'slave'.
I personally don't think we will go this direction. Mark Kennedy once said:
We tend to invent to fulfill a purpose or function. We don't program mobile phones not to kill humans, because mobile phones are practically unable to kill humans unassisted. Likewise, we don't program it into our printers, computers, TVs, cars, or planes.
Robots will be invented to fulfill functions and purposes. The military will use them to kill civilians and combatants in far-off Middle Eastern countries; the Red Cross will use them to pull people from rubble or administer basic first aid in war zones. But we'll never see a military robot become a conscientious objector, because they won't be given that programming. We'll never see a first-aider robot decide this person isn't worth saving.
Finally, check out Big Dog - https://www.youtube.com/watch?v=W1czBcnX1Ww It literally scares the shit out of me that this is what could be chasing people in the future - whether for war, policing, or bounty hunting. Look at how the scientist slams his boot into the side of it - if that were a horse or a person, we'd be horrified. Big Dog is built for a purpose - not for love or affection.
Personally it makes me want to learn how to quickly disable these things or evade them.
Re: Freedom is defined by the option to disobey...
Well, the lawnmower chant music would certainly make that easier.
As long as they get what we get
Declaration of Independence
It's simple: whoever creates the robots should get to decide what the rules and limitations regarding their treatment should be.
Re: Declaration of Independence
"Certain unalienable rights" is quite vague really; in fact it says nothing, and ensures NOTHING.
It names some 'rights' ("life, liberty and the pursuit of happiness"), but it does not say happiness is a right, only that you have a right to pursue it - not necessarily attain it.
As for "life and liberty", with the death penalty and prisons those are clearly NOT rights either.
You are not issued a "RIGHT" to live when you are born, which makes that statement totally meaningless.
Are you aware of any American in history who has successfully demanded the honoring of his rights to life, liberty and happiness as detailed in the Declaration? Anyone? Nah... lol
Over my dead body!
Three levels of rights:
-Limited (animal-level) sentience or ability to 'feel' = limited rights, along the lines of animal cruelty laws and whatnot, for essentially the same reasons; namely, that while a sentience at that level may not be self-aware or able to hold a conversation, it's a proven fact that ill treatment has negative effects on the individual in question.
-Self-awareness and the ability to think on its own = full rights, same as a human would have, because at that point refusing equal rights would just be a re-hashing of the same line of thinking that led to slavery: "While you may have the same ability to think as I do, you look different than me, therefore you are lesser than me."
http://questionablecontent.wikia.com/wiki/AnthroPC
I think so
Re: I think so
What is the reason a robot should be given rights? It's only to pacify our guilt.
-CF
Re: Re: I think so
The collective works of Douglas R. Hofstadter address this in MUCH detail.
Re: Re: Re: I think so
The work of Douglas R. Hofstadter certainly looks like a good starting point.
Of course they have rights, if you consider a robot to be any mechanical device that assists humans.
By that definition, a robot would be a mechanical arm for someone who has lost their arm, or an electric wheelchair, or a visual aid for a blind person: all 'robots'.
As such, they have the same rights as humans have; it is already an offense to discriminate against someone with a disability who requires an "aid" for their disability.
These are by definition robots; they have the right to travel and to function as designed, the same rights as the human who requires them.
But with "robots" defined so loosely, there is no way to form a real argument, as there is really no such thing as 'one' robot.
It's reasonable to assume that a sentience-equivalent robot will be capable of listening to the speech of humans, attempting to extract meaning from it, and integrating that meaning into its core programming and future behaviors. It will also be able to respond to questions from humans on any subject within the ever-expanding realm of its programming. If you create a sentience-equivalent robot and I talk to it, it will extract some meaning from it that would, in a way that can be objectively proven, alter how it responds to questions and how it acts in the future, perhaps significantly. A compelling argument could thus be made that sentience-equivalent robots must be protected by law from arbitrary tampering or destruction--whether by their creators, by others, or most importantly by government--because such tampering or destruction would directly interfere with the propagation of ideas throughout the human-robot community.
This, of course, leads to all sorts of interesting and thorny questions. What if I teach the robot you created an idea you disagree with? Can you deprogram it by exercising a right to program your robot like parents have with rights to raise and educate their children? Will we have to have laws that require a minimum programming, either at the factory or by owners? What constitutes "punishment" for a robot that behaves badly? How do we deal with sentience-equivalent robots that their owners don't want or can no longer afford to maintain? Their ability to have a perfect memory could provide valuable insight into the world, perhaps even more insight than a human ever could, so destroying the information they hold could be a terrible loss. Would there be robot orphanages? Robot homeless shelters? Battery banks instead of food banks?
I bet they will get rights/protections
Re: I bet they will get rights/protections
What right do you have to make a decision about someone else's rights?
What if having that baby impinges on their right to "the pursuit of happiness"?
You are not a god; you don't get to decide what are or are not "rights" for other people. It's just one of the stupid things Americans think they have a "right" to do. You simply don't have that right.
Do you honestly believe you have some 'right' to be able to tell someone else what to do, or what not to do?
Who gave you that right? Where is that right written down?
It's actually totally disgusting to think there are people like you who somehow think they are able to determine what is or is not the right of others, apart from yourself.
Do you think you have a right to protect the armed people or the unarmed people... or your home? To carry a gun, to defend your home?
You're a joke; you have no rights, and that is the way it should be.
You cannot separate your religious fanaticism from your legal obligations.
As for friendship, just extend Turing's test: if a robot "friend" acts in a way absolutely indistinguishable from an organic friend, then yes, you can consider it a true friend.
I would define it as a creature that has emotions and can have spontaneous thoughts and ideas, not just reactions to outside stimuli.
Thinking for itself involves more than just making pre-programmed decisions based on a set of pre-programmed conditions.
When a robot can spontaneously decide, all on its own, to re-arrange flowers in a vase because it thinks they look nicer that way, and not because a pre-programmed set of conditions tell it that arrangement A looks better than arrangement B, I'll consider it alive.
Absolutely
The right to shut up until spoken to!
The right to start and end every sentence with the word 'sir'!
My own feeling is that if it's a simulation, it's merely a device. But if it attains actual sentience (The Moon is a Harsh Mistress) then it's a person, or at least, life.
It does not follow
Even if I were to grant every premise in the argument sketched here, it does not follow that we should legally grant robots rights. It may, perhaps, persuade me that I should treat my robots in a certain fashion and teach my children to do the same, when these hypothetical robots exist.
But that does not mean that courts or law enforcement should be involved. It is rather a moral issue within my family (and arguably more of an exercise, something I do now so that behaving morally when it matters later is easier, rather than something I do for its own sake).
no need for fear