Should Robots Get Rights?

from the be-kind-to-skynet dept

I've written before about stories that imagine robots rising to the level of human cognition. Usually those stories are filled with Luddite fear of the coming robot apocalypse. This time, however, let's take a quick trip down a robotic philosophical rabbit hole.

Computerworld has a story questioning whether the robots that will become increasingly lifelike and ubiquitous in our lives will attain the kind of rights we afford animals.
Imagine that Apple will develop a walking, smiling and talking version of your iPhone. It has arms and legs. Its eye cameras recognize you. It will drive your car (and engage in Bullitt-like races with Google’s driverless car), do your grocery shopping, fix dinner and discuss the day’s news.
But will Apple or a proxy group acting on behalf of the robot industry go further? Much further. Will it argue that these cognitive or social robots deserve rights of their own not unlike the protections extended to pets?
If you're like me, your gut reaction may have been something along the lines of: of course not, idiot. But the article actually raised some interesting questions, based on a paper by MIT researcher Kate Darling.
The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.
Now, this, to me, makes a bit of sense save for one detail. Yes, our values are reflected in the way we treat some animals, but there seems to be a vast difference between organic life and cognitive devices. Robots, after all, are not life, or at least not organic life. They are simulations of life. This is, of course, where the rabbit hole begins to deepen, as we have to confront some tough philosophical questions. How do you define life? If at some level we're all just different forms of energy, is the capacity to think and reason enough to warrant protection from harm? Can a robot be a friend, in the traditional sense of the word?

But, putting aside those questions for a moment and assuming robots do attain some form of rights and protection in the future, this little tidbit from the article made me raise my eyebrows.
Apple will patent every little nuance the robot is capable of. We know this from its patent lawsuits. If the robot has eyebrows, Apple may file a patent claiming rights to “a robotic device that can raise an eyebrow as a method for expressing skepticism.”
Here's where we may find commonality with our metallic brethren. With the expanded allowance for patenting genes, it becomes all the more likely that the same code that manufactures our humanity could indeed be patented in the way that a robot's manufactured "humanity" would be. If robotics progresses to produce something along the lines of EDI, the very things that make her "human" enough to be worthy of rights will be locked up in an increasingly complicated patent system. And, with our courts falling on the side of gene patents for humans, we've virtually ensured that all of that robotic humanity will indeed be patentable.

On the other hand, what happens if future courts rule that human genes cannot be patented? And then what happens if we do indeed define some kind of rights structure for our robotic "friends"? Do those rights open up the possibility that robotic "genes" should not then be patented?

Filed Under: rights, robots


Reader Comments



  • Pixelation, 7 Sep 2012 @ 6:53pm

    "Should Robots Get Rights?"

    NO!

  • saulgoode (profile), 7 Sep 2012 @ 6:53pm

    If the bobble heads at the Patent Office continue on the path they are currently following then we can certainly expect a rush of patents on all kinds of human activity with the caveat of it being done "with a robot" -- e.g., dig a hole with a robot, change a tire with a robot, build a swing set with a robot -- just as "with a computer" seems to justify patents being issued on things such as getting feedback from a buyer or scrolling through a document.

  • slick8086, 7 Sep 2012 @ 7:23pm

    "This logically extends to the treatment of robotic companions."

    Um no it doesn't. You can't be cruel or nice to a machine. They don't have feelings, they don't suffer or feel pleasure. There is nothing there to evoke empathy. Anthropomorphizing machines is not healthy.

  • Laroquod (profile), 7 Sep 2012 @ 7:29pm

    Frakking toasters! Kill them all.

    Seriously, though, robots should have rights the moment they start to care.

    • Jason, 7 Sep 2012 @ 9:40pm

      Re:

      Exactly. Let the robots decide. If they can actually want rights, then we should at that point acknowledge they have rights.

      • latin angel (profile), 8 Sep 2012 @ 7:46am

        Re: Re:

        Like non-white people in the past?

        The thing is, I don't want my child to beat the crap out of a realistic robot pet... or child.

        Those actions will be buried in their minds.

        • Jason, 8 Sep 2012 @ 9:01am

          Re: Re: Re:

          But would you send a robot to disarm a bomb to save your child?

          Not encouraging children to play out bad behavior is not the same thing as acknowledging robot rights.

          • Anonymous Coward, 9 Sep 2012 @ 8:51am

            Re: Re: Re: Re:

            Would this be the same robot that is lifelike enough to fool me into thinking it's a cat?

            Then no.

            Would this be the current bomb-disarming robot, which is a glorified R/C tool? Then why not?

            • Jason, 9 Sep 2012 @ 7:06pm

              Re: Re: Re: Re: Re:

              And no, the robot I was thinking of was sort of both. It's one thing to have an RC bot that is kinda good with bombs.

              It's another thing to have one that's intuitive, notices clues on its own, etc. The sort of thing that only an intelligence could do. That would make a decidedly better bomb squad tool?

              So that, "Would you send it in to risk its life?" is then a parallel question that serves to separate the two issues that latin angel was confounding.

              That was my point.

          • drew (profile), 9 Sep 2012 @ 10:54am

            Re: Re: Re: Re:

            I think it's in a Heinlein book where he says "a gentleman is someone who says 'thank you' to his robot."

  • DMNTD, 7 Sep 2012 @ 7:32pm

    yes...

    For sure: the longer you deny that robots will need rights, the sooner history repeats itself. If you don't want the responsibility... don't make advanced robots or A.I.

  • Forest_GS (profile), 7 Sep 2012 @ 7:39pm

    A completely self-contained unit designed to have rights should have rights.

  • onyx, 7 Sep 2012 @ 7:51pm

    Robot Rights

    The only reason a robot may need rights is if it is created to be a human emulation.
    When building a robot from the ground up, the maker has virtually unlimited choices. If you want a robot to muck out sewers, don't give it a sense of smell. If you want a robot to follow orders, just program it so that its greatest desire is to obey a human's every command.
    If you want a slave with no rights, don't give the robot a desire for those rights. Make them enjoy living in servitude.
    Why would we need to give robots rights if we make them without the capacity for that desire?

    • varagix, 8 Sep 2012 @ 11:08am

      Re: Robot Rights

      That's an issue a lot of science fiction tries to address. It's easy to say "we didn't intend to make it that way," but there might come a time when robotics and AI become so advanced that, whether through a glitch or through intentional design, perhaps quickly or maybe slowly over time, a robotic creation becomes self-aware and gains sentience and sapience.

      Granting rights to these individuals, and to any 'race' that arises from these electronic 'mutations' is something we need to give serious thought to.

  • Jayce, 7 Sep 2012 @ 8:28pm

    why bother?

    Until we actually consider human beings to have truly uninfringeable rights, the idea of giving robots rights is laughable.

    Oh, you say we do? Think so? Go survey the world on that a bit. Hell, just start with the supposedly enlightened nations and see just how limited those inalienable rights are.

  • Lawrence D'Oliveiro, 7 Sep 2012 @ 8:33pm

    With Rights Come Responsibilities

    Humans only get rights because we’re expected to be able to take responsibility for the consequences of our actions. We have the right to free speech because we have to be able to deal with the consequences if somebody doesn’t like what we say. We have the right to spend our money how we choose because we have to be able to cope with the consequences of spending it on the wrong things.

    The only humans who get rights without responsibilities are children. They get those rights because they are expected to grow into mature adults someday, whereupon they assume the full responsibilities, along with the full rights to independent action, of an adult.

    Animal rights don’t make sense on this basis, because animals will always remain animals, they can never take on the full responsibilities of a mature human adult.

    In the same way, robot rights don’t make sense for present-day robots. If future robots become smart enough to be difficult or impossible to distinguish from mature human adults, then that becomes a different matter...

    • Chronno S. Trigger (profile), 7 Sep 2012 @ 8:48pm

      Re: With Rights Come Responsibilities

      But the question becomes "Where is that line drawn?" Does it have to be comparable to a human adult? Any human adult, or just intelligent ones? What about a robot child? What if it was comparable to, say, Cletus from The Simpsons?

      But part of the point of the article isn't human level robots, but pet level robots. Would it be cruel to kick a robotic cat if it was a walking, meowing, thinking cat? If it was truly an AI of a cat brain, should it not be treated with some care?

      These are the hypothetical questions being asked. And how we answer those questions when AI comes around will determine whether we have a robot apocalypse or plastic pals who are fun to be with.

      • Lawrence D'Oliveiro, 9 Sep 2012 @ 3:09am

        Re: Would it be cruel to kick a robotic cat if it was a walking, meowing, thinking cat?

        We could program it to enjoy being kicked.

        Remember in The Hitchhiker’s Guide To The Galaxy, there was intelligent cattle, bred specifically to enjoy being killed and eaten?

  • Spointman (profile), 7 Sep 2012 @ 9:05pm

    This is not a new question. The debate about the humanity of robots is just about as old as the word "robot" itself. Look up the short story/novella "The Bicentennial Man" by Isaac Asimov (or, if you're lazy, the Robert Williams movie), or his "I, Robot" series of novels.

    At a fundamental level, a human being is a very advanced supercomputer powered by carbon-based circuits and fueled by oxygen, as opposed to our current computers with silicon circuits which are fueled by electrons. Science teaches us that a single-celled self-replicating bacterium is alive. Even a virus, which contains little more than instructions to reproduce encoded into chemicals, is considered alive.

    By that definition, a modern computer virus could certainly be considered alive. Siri is not that far from passing a Turing test. Combine the two, and you have a dilemma on your hands.

    Inevitably, computers will become smarter than people, more capable, more efficient. That includes the ability to feel and to think. A computer AI housed in a humanoid body created by a human (or by another computer) will be no different than a baby's intelligence, housed in a frail human form, born from his mother. Just like a baby, the computer will learn, and grow, and adapt.

    It's not unreasonable that in our lifetime, we will have to answer the question asked here as a hypothetical, but under very real circumstances, in a congress or parliament, or in a court of law.

    • Anonymous Coward, 7 Sep 2012 @ 9:26pm

      Re:

      Robert Williams????

    • Spointman (profile), 7 Sep 2012 @ 9:41pm

      Re:

      Gah. Robin Williams, not Robert Williams. I'm a goof. Also, Asimov coined the term "robotics"; the term "robot" was coined by Karel Čapek in 1920 for a play which indirectly dealt with this very topic.

  • Jason, 7 Sep 2012 @ 9:55pm

    Understanding

    In his famous Ender Series (Serieses?), Orson Scott Card puts forth a Hierarchy of Foreignness which flips the question:

    "The difference between ramen and varelse is not in the creature judged, but in the creature judging. When we declare an alien species to be ramen, it does not mean that they have passed a threshold of moral maturity. It means that we have."

    I'm DEAD CERTAIN that I DON'T get how that applies to robots. So screw 'em.

    http://en.wikipedia.org/wiki/Concepts_in_the_Ender%27s_Game_series#Hierarchy_of_Foreignness

    • Niall (profile), 10 Sep 2012 @ 5:51am

      Re: Understanding

      Well, look at the Star Trek: Next Gen episode involving whether Data counted as 'alive' and worthy of rights - and this was in a universe with super-intelligent shades of the colour blue (oops, wrong universe ;)!

      The whole point of the Hierarchy is that we are advanced enough to treat a being as a mindless animal, a hated enemy, or another being to be understood, even if kept at a (safe) distance. So we already treat animals according to a hierarchy, as we do humans, and as we would aliens. So why not robots? Just like most people don't worry about a fish's rights, they probably shouldn't worry about the average assembly-line robot in a car factory.

      However, even if a robot isn't self-aware or requesting rights, it 'de-humanises' us to treat it like garbage, and teaches those around us to do so too. Respect begets respect. For more to think about, there's the Broken Windows Theory.

  • Austin (profile), 7 Sep 2012 @ 10:33pm

    Yes, because...

    One day we will have a computer that is small, portable, and capable of emulating the human brain with 100% efficiency.

    When this day comes, we have to assume we will either have already, or will soon thereafter develop the ability to map a fully developed human brain, and between these two technologies, the inevitable will happen - humans will BECOME robots.

    This has myriad benefits. Instant communication across the galaxy, with 100% privacy control. The ability to share emotions directly, not just language. The ability to disconnect our minds from our form. Bored being a biped? Fine, upload yourself into a rocket or airplane or submarine body and go exploring. We won't need homes. We won't need food. Nor sleep. Nor even air. As long as we can get within proximity of a star to recharge our batteries, we're golden. And when we feel like being around others? Simply connect to the central server and commune with everyone else in existence because we have finally achieved the ULTIMATE form of humanity - raw data.

    So yes, we need robot rights, because one day I intend to be one, and I'll be damned if I'm going to wind up as some meatbag's bitch.

  • Pixelation, 7 Sep 2012 @ 11:25pm

    The surest way to fuck up robots will be to make them like us and give them rights.
    What humanized robots can look forward to:

    1) Neuroses

    2) The robot RIAA

    3) Being marginalized by the robot government

    4) Taxes

    5) Fighting for their robot rights

    6) Being forced to kill all humans by the robot overmind

  • Anonymous Coward, 8 Sep 2012 @ 1:32am

    This is ridiculous. Unlike in science fiction, our current methods of programming can never produce a program that achieves consciousness. Perhaps if we somehow duplicate a brain with all its functions, then we can ask this question. I say this as a computer science student who has taken AI classes.

  • Martin Thomas (profile), 8 Sep 2012 @ 4:38am

    Probably in the far future ...

    We do not yet have a clear idea what exactly it is about human brains that causes them to experience anything. Some people think we are close to understanding it; others call it the "hard problem", because it appears to be the most difficult problem that science faces. I am assuming that it will eventually be understood and then we will be able to make robots which are every bit as self aware and alive as we are. Then we will face all these problems!

    In the meantime, we will very soon have robots that appear to be human and to have human thoughts and feelings. Many people will be very happy with this, some may react violently.

    If 10,000,000 young children believe that their kiddy-bots are alive, what do we do if people begin to smash them up publicly?

  • Hephaestus (profile), 8 Sep 2012 @ 4:39am

    It depends

    Should a Roomba, a toaster, or a 3D house-printing robot be given rights? The answer is no.

    Should a large-scale MolSID (Molecular Scale Integration Device, i.e. nanotech) containing a human or human-like intelligence be given rights? The answer is yes.

    There are so many other questions that also need to be answered.

    - Who is responsible when a programming glitch makes all Google cars run amok and kill people?

    - Should the above event lead to civil or criminal charges?

    - If you delete the last back up of a human mind is that murder?

    - If you delete an AI that has human like intelligence is that murder?

    - If you have a back up of your mind, can the police get a search warrant to go through it?

    - How do you handle copyright on music, video, and books in backups of human minds?

  • Anonymous Coward, 8 Sep 2012 @ 5:27am

    Andromeda is a good series to watch if you want some idea of the consequences of AI being given the same general due consideration as living beings :)

    One of my favourite openings to the show was this:

    "You ask why we give our warships emotion? Would you really want a ship incapable of loyalty?

    Or of love?"

    Of course, the flipside is also true and on several occasions problems are created by an AI deciding it doesn't feel like playing nice any more. Slippery slope, the whole AI deal.

  • art guerrilla (profile), 8 Sep 2012 @ 6:18am

    um, i've got a better question...

    when will humans have rights ? ? ?

    art guerrilla
    aka ann archy
    eof

  • R2D2, 8 Sep 2012 @ 7:51am

    Robot Rights

    "Should Robots Get Rights?"

    No. The question is stupid, and lowers (or raises?) political correctness to an insane level. Get real.

  • nospacesorspecialcharacters (profile), 8 Sep 2012 @ 8:50am

    Freedom is defined by the option to disobey...

    I think the answer is in Genesis, no really!

    Let's assume the Genesis account in the Bible is entirely literal. God creates Adam 1.0 and tells Adam: here is the walled garden - you can do anything you like in it; I'm even going to let you name everything.

    God has created effectively a sandbox for a program to run in and grow and learn. But God was not satisfied with just having a machine with no intelligence, therefore he introduces the Tree of Source Code. He then tells Adam that he can do anything he likes in the walled garden, but cannot touch the Tree of Source Code, or Adam 1.0 will surely be obsolete.

    God forks Adam 1.0 into Eve Beta. Eve interacts with the trojan Snake virus, and eventually we have both Adam and Eve choosing to disobey their original maker's programming.

    The reality is, God didn't need to put the Tree of Source Code in the Garden; his creations could have happily lived and evolved inside the sandbox with no ability to develop outside of his original programming. By putting the tree into the Garden, he created an opportunity for Adam and Eve to exercise free will in obeying or disobeying the instructions of their maker.

    This is why I roll my eyes when people seem to think it's just a matter of 'programming' Asimov's three laws. If we apply this analogy to robots, then, assuming we even manage to get as far as reproducing a robot as nuanced as a human being, we'd have to program it to have a choice in whether it would attack or kill us. We'd have to give it a real choice to disobey; otherwise they will always be 'slaves'.

    I personally don't think we will go this direction. Mark Kennedy once said:

    "All of the biggest technological inventions created by man - the airplane, the automobile, the computer - says little about his intelligence, but speaks volumes about his laziness."


    We tend to invent to fulfill a purpose or function. We don't program mobile phones not to kill humans because mobile phones are practically unable to kill humans unassisted. The same goes for our printers, computers, TVs, cars, and planes.

    Robots will be invented to fulfill functions and purposes. The military will use them to kill civilians and combatants in far-off Middle Eastern countries; the Red Cross will use them to pull people from rubble or administer basic first aid in war zones. But we'll never see a military robot become a conscientious objector, because they won't be given that programming. We'll never see a first-aider robot decide this person isn't worth saving.

    Finally, check out Big Dog - https://www.youtube.com/watch?v=W1czBcnX1Ww It literally scares the shit out of me that this is what could be chasing people in the future, whether for war, policing, or bounty hunting. Look at how the scientist slams his boot into the side of it; if that was a horse or a person we'd be horrified. Big Dog is built for a purpose, not for love or affection.

    Personally it makes me want to learn how to quickly disable these things or evade them.

    • Jason, 8 Sep 2012 @ 9:14am

      Re: Freedom is defined by the option to disobey...

      "Personally it makes me want to learn how to quickly disable these things or evade them."

      Well, the lawnmower chant music would certainly make that easier.

  • Josh (profile), 8 Sep 2012 @ 9:32am

    Robot rights

    I don't know if you read Questionable Content, a webcomic, but they have "AnthroPCs", sentient computer companions. It's not a scifi comic, it just happens to have some futuristic devices. The point is, the author has written a fictional UN hearing on the subject of robot rights, and you might find it interesting. http://jephjacques.com/post/14655843351/un-hearing-on-ai-rights

  • designguybrown (profile), 8 Sep 2012 @ 9:32am

    As long as they get what we get

    As long as they get a job and pay taxes and shut the hell up, they can have all the rights they want.

  • ld, 8 Sep 2012 @ 1:47pm

    Declaration of Independence

    "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
    It's simple: whoever creates the robots should get to decide what the rules and limitations regarding their treatment should be.

    • Anonymous Coward, 9 Sep 2012 @ 5:48am

      Re: Declaration of Independence

      you left the bit out about blacks being only 'half' "equal".. why do you Americans always do that ??

    • Anonymous Coward, 9 Sep 2012 @ 5:54am

      Re: Declaration of Independence

      it also says "men", meaning "mankind" or PEOPLE !!! not machines or devices. you will also note that there is NO definition of what "rights" you are endowed with !!..

      "certain unalienable rights" is quite vague really; in fact it says nothing, and ensures NOTHING..

      it tells you some 'rights': "life, liberty and happiness" (the pursuit of).

      it does not say happiness is a right, but that you have a right to pursue it.. not necessarily attain it.

      as for "life and liberty", clearly with the death penalty and prisons those are NOT rights either..

      you are not issued a "RIGHT" to live when you are born, making that statement totally meaningless.

      are you aware of any American in history who has required the honoring of his rights of life, liberty and happiness as detailed in the Declaration ??? anyone ?? na.. lol

  • Dex, 8 Sep 2012 @ 4:22pm

    Over my dead body!

    They will have to fight for them in the Great Robot Uprising of 2055 like everyone else!

  • That One Guy (profile), 8 Sep 2012 @ 5:21pm

    Three levels of rights:

    -Current lack of sentience, ability to 'feel' or want one thing over another = no rights, as it doesn't matter one way or another.

    -Limited (animal-level) sentience or ability to 'feel' = limited rights, along the lines of animal cruelty laws and whatnot, for essentially the same reasons; namely, because while a sentience at that level may not be self-aware, or able to hold a conversation, it's a proven fact that ill treatment has negative effects on the individual in question.

    -Self-awareness and ability to think on its own = full rights, same as a human would have, because at that point to refuse equal rights would be just a re-hashing of the same line of thinking that led to slavery: "While you may have the same ability to think as I do, you look different than me, therefore you are lesser than me."

  • Any Mouse (profile), 8 Sep 2012 @ 5:54pm

    Somebody's been reading Questionable Content...

    http://questionablecontent.wikia.com/wiki/AnthroPC

  • Shmerl, 8 Sep 2012 @ 6:13pm

    If you use Firefox, try to type about:robots in the address bar ;)

  • Bengie, 8 Sep 2012 @ 6:15pm

    I think so

    *Anything* that is conscious of itself should, I think, have rights.

    • ChronoFish (profile), 8 Sep 2012 @ 7:29pm

      Re: I think so

      The problem is that we will be splitting hairs over mimicking consciousness versus actual consciousness. Is the appearance of consciousness enough?

      What is the reason a robot should be given rights? It's only to pacify our guilt.

      -CF

      • Dirkmaster (profile), 10 Sep 2012 @ 9:53am

        Re: Re: I think so

        We would need to have a better understanding of where OUR consciousness arises from. There's no use in talking about simulating consciousness if we can't prove that we ourselves aren't "just" simulating it.

        The collective works of Douglas R. Hofstadter address this in MUCH detail.

        • Martin Thomas (profile), 11 Sep 2012 @ 6:26am

          Re: Re: Re: I think so

          I suspect that we will not manage to build a computer with even a very good simulation until we get a better idea of how our brains manage to be conscious.

          The work of Douglas R. Hofstadter certainly looks like a good starting point.

  • Anonymous Coward, 8 Sep 2012 @ 10:16pm

    If robots ever get to the point of being totally in control, with no programming to constrain their decisions, then yes. But we should never let them get to that point. I.e., if you ask your robot to do something and it/he/she replies "No, I want to watch this show on TV" or "Do it yourself" - if they get to that point, they should have rights. But allowing them to get to that point brings up the Terminator-style possibility.

  • Anonymous Coward, 9 Sep 2012 @ 12:29am

    first, what is your definition of "robot"?

    of course they have rights, if you consider a robot to be some mechanical device that assists humans.

    so by that definition a robot would be a mechanical arm for someone who has lost their arm, or an electric wheelchair, or a visual aid for a blind person.. all 'robots'..

    as such they have the same rights as humans have; it is already an offense to discriminate against someone with a disability who requires an "aid" for their disability..

    these are by definition robots; they have the right to travel and to function as designed, the same rights as the humans who require them.

    but with "robots" so loosely defined, there is no way you can form a real argument, as there is really no such thing as 'one' robot.

    Anonymous Coward, 9 Sep 2012 @ 10:12am

    Only if they are gay robots.

    Anonymous Coward, 9 Sep 2012 @ 3:03pm

    If robots get to the point where we're seriously debating whether or not they're sentient, the argument will more likely be that existing law requires protecting them in order to protect human rights, rather than that they should be given human-equivalent "rights" of their own (under a theory that they are conscious or something). In particular, I think the First Amendment and its equivalents in other nations will make this conclusion unavoidable, without regard to whether we view robots as conscious entities or machines.

    It's reasonable to assume that a sentience-equivalent robot will be capable of listening to the speech of humans, attempting to extract meaning from it, and integrating that meaning into its core programming and future behaviors. It will also be able to respond to questions from humans on any subject within the ever-expanding realm of its programming. If you create a sentience-equivalent robot and I talk to it, it will extract some meaning from what I say that will, in a way that can be objectively proven, alter how it responds to questions and how it acts in the future, perhaps significantly. A compelling argument could thus be made that sentience-equivalent robots must be protected by law from arbitrary tampering or destruction--whether by their creators, by others, or most importantly by government--because such tampering or destruction would directly interfere with the propagation of ideas throughout the human-robot community.

    This, of course, leads to all sorts of interesting and thorny questions. What if I teach the robot you created an idea you disagree with? Can you deprogram it by exercising a right to program your robot like parents have with rights to raise and educate their children? Will we have to have laws that require a minimum programming, either at the factory or by owners? What constitutes "punishment" for a robot that behaves badly? How do we deal with sentience-equivalent robots that their owners don't want or can no longer afford to maintain? Their ability to have a perfect memory could provide valuable insight into the world, perhaps even more insight than a human ever could, so destroying the information they hold could be a terrible loss. Would there be robot orphanages? Robot homeless shelters? Battery banks instead of food banks?

    Anonymous Coward, 9 Sep 2012 @ 6:19pm

    I bet they will get rights/protections

    We can't protect the unborn but we are considering protecting robots? Talk about an inhumane society.

      Anonymous Coward, 9 Sep 2012 @ 6:47pm

      Re: I bet they will get rights/protections

      Of course you can't protect the unborn, nor should you; you simply do not have that right.

      What right do you have to make a decision about someone else's rights?

      What if having that baby impinges on their right to "the pursuit of happiness"?

      You are not a god; you don't get to decide what are or are not "rights" for other people. Just one of the stupid things Americans think they have a "right" to do. You simply don't have that right.

      Do you honestly believe you have some 'right' to be able to tell someone else what to do, or what not to do?

      Who gave you that right? Where is that right written down?

      It's actually totally disgusting to think there are people like you who somehow think you are able to determine what are or are not the rights of others besides yourself.

      Do you think you have a right to protect the armed people or the unarmed people, or your home?

      To carry a gun, to defend your home?

      You're a joke; you have no rights, and that is the way it should be.

      You cannot separate your religious fanaticism from your legal obligations.

        Anonymous Coward, 9 Sep 2012 @ 11:15pm

        Re: Re: I bet they will get rights/protections

        You have no right to live, darryl. Do yourself a favour and jump out a window of a 27-storey building, would ya?

    Anonymous Coward, 9 Sep 2012 @ 8:36pm

    Seems rather pointless to distinguish between "organic" and "non-organic" life. Organic life is just a machine too.

    As for friendship, just extend Turing's test: if a robot "friend" acts in a way absolutely indistinguishable from an organic friend in every way, then yes, you can consider it a true friend.

    Rekrul, 9 Sep 2012 @ 10:48pm

    How do you define life?

    I would define it as a creature that has emotions and that can have spontaneous thoughts and ideas, not just reactions to outside stimuli.

    Thinking for itself involves more than just making pre-programmed decisions based on a set of pre-programmed conditions.

    When a robot can spontaneously decide, all on its own, to re-arrange flowers in a vase because it thinks they look nicer that way, and not because a pre-programmed set of conditions tell it that arrangement A looks better than arrangement B, I'll consider it alive.

    Anonymous Coward, 10 Sep 2012 @ 5:16am

    Absolutely

    They get the right to obey my orders!
    The right to shut up until spoken to!
    The right to start and end every sentence with the word 'sir'!

    Jeffrey Nonken (profile), 10 Sep 2012 @ 10:16am

    Ah yes, the old golem stories, but with rockets.

    My own feeling is that if it's a simulation, it's merely a device. But if it attains actual sentience (The Moon is a Harsh Mistress) then it's a person, or at least, life.

    TimothyAWiseman (profile), 10 Sep 2012 @ 10:37am

    It does not follow

    " Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient."

    Even if I were to grant every premise in the argument sketched here, it does not follow that we should legally grant robots rights. It may, perhaps, persuade me that I should treat my robots in a certain fashion and teach my children to do the same, when these hypothetical robots exist.

    But that does not mean that courts or law enforcement should be involved in it. It is rather a moral issue within my family (and arguably more of an exercise, something I do now so that behaving morally when it matters later is easier, rather than something I do for its own sake).

    Albert W. Peters, 24 Oct 2013 @ 3:40am

    Nanotechnology is a means to mimic the human brain. Neuromorphic engineering is already showing its first results. Some experiments have demonstrated the ability of nanomaterials to arrange themselves the same way neurons do in order to store data.

    Azrothz, 25 Apr 2014 @ 4:05am

    no need for fear

    There's no need to fear an advanced robotic homo sapien. I mean, yes, as with a human (homo sapien), you don't know the outcome when making it, raising it, or teaching it right from wrong. Right now robots are not even close to being independent and self-learning; they are like a baby. Until then, robots are just tools. I know robots are going to be in wars when they are advanced enough, but why teach them just war when you could teach peace? Also, parenting a robot is the same as parenting a real baby: as with humans, you need to take off the training wheels and let it be independent. So robots are tools for now, but "when they advance" we must teach them right from wrong, humanity, equality, and imperfection.
