When Will We Have To Grant Artificial Intelligence Personhood?

from the one-is-glad-to-be-of-service dept

James Boyle has a fascinating new paper up, which acts as something of an early warning about a legal question that will undoubtedly loom much larger down the road: how we deal with the Constitutional question of "personhood" for artificial intelligence. He sets it up with two "science-fiction-like" examples, neither of which may really be that far-fetched. Part of the problem is that we, as a species, tend to be pretty bad at predicting rates of change in technology, especially when it's escalating quickly. And thus, it's hard to predict how some of these things play out (well, without tending to get it really, really wrong). However, it is certainly not crazy to suggest that artificial intelligence will continue to improve, and it's quite likely that we'll have more "life-like" or "human-like" machines in the not-so-distant future. And, at some point, that's clearly going to raise some constitutional questions:
My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms-computer-based intelligences, for example-yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human-such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?
That link only takes you to the opening chapter of the paper, but from there you can download the full PDF, which is certainly thought-provoking. Of course, chances are that most folks will not really think through these issues -- at least not until the issue cannot really be avoided any more. And, of course, in those situations, it seems our historical precedent is to overreact (and overreact badly), without fully understanding what it is we're reacting to, or what the consequences (intended or unintended) will really be.

Filed Under: artificial intelligence, personhood, rights


Reader Comments



  • icon
    cc (profile), 18 Mar 2011 @ 8:00pm

    These are questions as old as the field of AI itself.

    I'm sure most here are familiar with Asimov's laws of robotics. There have been many debates about how ethical it is to imprint such rules in any creature we may devise. This is kind of the same question, really, but from a different perspective.

    If anyone is interested, you can look at John McCarthy's Stanford website. He's one of the geniuses who founded the field, he's credited with coming up with the term "Artificial Intelligence", and he's also the creator of the LISP programming language.

    He wrote a short story which I thought was quite interesting, that deals with the AI personhood issue. May require some basic knowledge of LISP, but it's not hard to understand if you remember that the basic syntax is in prefix form, eg: (function-name argument (function-name argument)).
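
    For anyone unfamiliar with that prefix form, here's a minimal Common Lisp sketch of my own (not taken from McCarthy's story), just to show the shape:

        ;; the operator comes first, so 2 + 3 * 4 is written as
        (+ 2 (* 3 4))              ; => 14
        ;; defining and calling a function uses the same nested shape
        (defun square (x) (* x x))
        (square 5)                 ; => 25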

    link to this | view in chronology ]

  • identicon
    shawn mcdonald, 18 Mar 2011 @ 8:17pm

    a sollution

    what they should be doing with AI is making an inteligent computer... a computer that when you type in a question it gives you an answer.AI. not a intelligent robot that drives around any moment about to mal function.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 18 Mar 2011 @ 8:42pm

      Re: a sollution

      Would it have spell check?

      link to this | view in chronology ]

    • identicon
      jn, 15 Feb 2014 @ 9:16pm

      Re: a sollution

      We have something that answers all questions and checks spelling and even allows us to travel the world online. It's called Google.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 18 Mar 2011 @ 8:47pm

    Wouldn't the AI simply incorporate itself and, in doing so, be granted personhood?

    link to this | view in chronology ]

    • icon
      Bruce Ediger (profile), 19 Mar 2011 @ 4:47pm

      Re:

      Who owns an AI's copyright?

      If we grant "personhood" to a chunk of code, then don't the corporate masters of those who wrote the code own the copyright? The rightsholders could legally prevent the AI from fixing any bugs it detects, couldn't they?

      Also, wouldn't the right-to-lifers get involved at this point? The rightsholders, in preventing any copying, would be preventing the reproduction of a living, thinking being.

      This sounds like a supreme mess.

      link to this | view in chronology ]

  • icon
    Dark Helmet (profile), 18 Mar 2011 @ 9:05pm

    Ugh...

    This is so perfectly apropos with Digilife, I'm almost tempted to repost that mediafire link trolly McDouchebag asked me to offer.

    Seriously, THIS is what I wrote about. And with the advent of Digital Philosophy Theory, these are serious questions, because mapping a developing consciousness is something that IS going to be done....

    link to this | view in chronology ]

  • icon
    DandonTRJ (profile), 18 Mar 2011 @ 9:17pm

    Penny Arcade's "Automata" delves into this issue, actually. It's a hardboiled sci-fi series where Prohibition has been enacted against artificial intelligence, with the existing stock of robots entering into the workforce as second class citizens. Very interesting stuff.

    http://www.penny-arcade.com/comic/2010/7/23/
    http://www.penny-arcade.com/comic/2010/7/26/
    http://www.penny-arcade.com/comic/2010/7/28/
    http://www.penny-arcade.com/comic/2010/7/30/
    http://www.penny-arcade.com/comic/2010/8/2/

    link to this | view in chronology ]

  • identicon
    Ryan Diederich, 18 Mar 2011 @ 9:53pm

    Interesting question to ponder

    But I dont think that any artificial intelligence will EVER have to be defined as a person. They dont have souls, though the discussion tends to take an ugly turn and no real answer is reached.

    All I know is that even if a computer could feel pain, it wouldnt be actual pain, but rather an interpretation of stimuli that WOULD cause pain in a human.

    link to this | view in chronology ]

    • identicon
      Brian Flowers, 18 Mar 2011 @ 10:36pm

      Re: Interesting question to ponder

      "All I know is that even if a computer could feel pain, it wouldnt be actual pain, but rather an interpretation of stimuli that WOULD cause pain in a human."


      ...what's the difference? Pain is simply the body's response to being harmed. What's the difference between a chemical signal down your nerves and an electrical signal down a wire?

      Also worth pointing out that you have no idea at all what I feel or how my mind reacts when you punch me. You only know that I react in a way consistent with what you have learned to be a feeling called 'pain'. My mind could be entirely different from yours. You don't know. You can only look at how I act, and from that you must assume I have a similar intelligence, similar feelings, etc as you do. Why would a machine be any different?

      link to this | view in chronology ]

    • icon
      Dave Miller (profile), 19 Mar 2011 @ 12:13am

      Re: Interesting question to ponder

      "But I dont think that any artificial intelligence will EVER have to be defined as a person. They dont have souls..."

      Neither do I, and neither do you. Prove otherwise.

      link to this | view in chronology ]

    • identicon
      Anonymous Coward, 19 Mar 2011 @ 12:33am

      Re: Interesting question to ponder

      We don't feel pain. We simply feel an interpretation of stimuli. The amount of "pain" one feels from different stimuli, in fact even if they feel pain at all, varies greatly from person to person. And over time, the default can be re-written so that what once was painful isn't any longer.

      Pain is a horrible definition or idea of what defines a person. If you believe it is, you've missed the point of that particular philosophical branch entirely. It isn't the pain that defines us. It's what we do within, and beyond, the limits pain imposes upon us.

      And even that is a silly explanation in this discussion. Because philosophically, pain isn't emotional or physical. It's mental pain from the limitations placed on us by our own mortal existence. We wish for what we cannot have, and feel pain because it's not within our power to achieve. And unless these hypothetical AIs we create are super beings with the power of a god, it's going to come up against limitations of what it wants, and what is possible. So it's going to "feel" a tad chafed against said limitations.

      link to this | view in chronology ]

    • identicon
      Anonymous Coward, 19 Mar 2011 @ 10:43am

      Re: Interesting question to ponder

      Are you sure that is real pain that humans feel and not just an interpretation of stimuli that WOULD cause pain in another life form?

      link to this | view in chronology ]

    • icon
      sondun2001 (profile), 19 Mar 2011 @ 11:45am

      Re: Interesting question to ponder

      Is that not how our body works? We interpret stimuli from the outside world, and our nervous system takes care of the rest. What would be different between our biological system and AI? Even emotion, which is a collection of hormones, can be reproduced in AI.

      link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 6:27pm

      Re: Interesting question to ponder

      But I dont think that any artificial intelligence will EVER have to be defined as a person. They dont have souls, though the discussion tends to take an ugly turn and no real answer is reached.

      To pose the opposite of Dave Miller's question: how do you know they won't have souls?

      link to this | view in chronology ]

    • icon
      The Arbiter (profile), 20 Mar 2011 @ 3:40am

      Re: Interesting question to ponder

      Prove that YOU have a soul. Can you? Nope. So it doesn't matter if an AI has one either.

      link to this | view in chronology ]

    • identicon
      Davey, 20 Mar 2011 @ 1:39pm

      Re: Interesting question to ponder

      Um, corporations are defined as persons in the USA, and there's nothing in the universe with less soul than one of those.

      As to souls, how would you know?

      link to this | view in chronology ]

    • icon
      Paul Walters (profile), 24 Mar 2011 @ 6:09am

      Re: Interesting question to ponder

      How do you know any human being, or any being commonly considered sentient, has a consciousness, other than yourself? How do you know that the rest of the "people" you see aren't just meat robots, acting "as if" they see and feel and understand me? WHAT IS THE DIFFERENCE WHETHER IT IS MEAT OR METAL?

      Or do you consider it a fact, a priori, that consciousness cannot be engineered into existence, ever?

      Are you, sir, a closet vitalist?

      link to this | view in chronology ]

      • icon
        nasch (profile), 24 Mar 2011 @ 5:01pm

        Re: Re: Interesting question to ponder

        How do you know any human being, or any being commonly considered sentient, has a consciousness, other than yourself?

        I'm not the person you were talking to, but if I can butt in... we don't know, but it's kind of a dead end. We have to assume others have consciousness to get to more interesting (IMO) issues like who has consciousness and how we can try to tell that. If I go with "everybody but me might be a robot" then there's really nothing else to say about it, is there?

        WHAT IS THE DIFFERENCE WHETHER IT IS MEAT OR METAL?

        I think that's a separate question, and to me the answer is nothing. All the consciousness we know of now is meat, but that doesn't imply there couldn't be metal (or silicon probably) consciousness. I think you probably think that too though.

        link to this | view in chronology ]

  • icon
    kyle clements (profile), 18 Mar 2011 @ 9:56pm

    The day I can sit down with an artificial being and have a reasonable discussion about why it should be granted rights is the day I will be willing to grant them.


    well...that will be the day I will vote for robot rights...

    well...that will be the day I will decide to vote in favour of robot rights in the next election.

    I only hope the election comes before the robot uprising.

    link to this | view in chronology ]

    • icon
      vivaelamor (profile), 19 Mar 2011 @ 6:42am

      Re:

      "The day I can sit down with an artificial being and have a reasonable discussion about why it should be grated rights is the day I will be willing to grant them."

      Does that mean we can take the rights from humans with whom you can't have that discussion?

      link to this | view in chronology ]

      • identicon
        Anonymous Coward, 19 Mar 2011 @ 6:57pm

        Re: Re:

        We already do, convicts have less rights, mental people have less rights and a lot of minorities in the world have less rights.

        why do you assume otherwise.

        link to this | view in chronology ]

        • icon
          vivaelamor (profile), 20 Mar 2011 @ 4:59pm

          Re: Re: Re:

          "We already do, convicts have less rights, mental people have less rights and a lot of minorities in the world have less rights.

          why do you assume otherwise."


          Mental people? What's that, like imaginary friends or some such?

          Anyway, my point was that 'reasonable discussion' is a pretty arbitrary bar for deciding who gets rights. Mentally handicapped people, for example, do get rights whether they are capable of discussing them or not.

          link to this | view in chronology ]

    • icon
      Greevar (profile), 19 Mar 2011 @ 9:19am

      Re:

      I think any singular being, whether organic or synthetic, that is capable of self-determination and able to function in our society is worthy of having rights. I think this is why animals have fewer rights than we do. Nevertheless, I think we have a long way to go before we can deal with rights for synthetic persons since we haven't even smoothed out our issues on human rights.

      link to this | view in chronology ]

      • identicon
        Anonymous Coward, 21 Mar 2011 @ 7:31pm

        Re: Re:

        I agree...the question of rights should arise long before human-level intelligence, since animals already have some rights

        link to this | view in chronology ]

  • identicon
    Philip Zack, 18 Mar 2011 @ 10:51pm

    Another fictional take on this issue

    I took a different tack. You might enjoy reading "Edifice of Lies".

    http://klurgsheld.wordpress.com/2008/02/10/short-story-edifice-of-lies/

    link to this | view in chronology ]

  • icon
    Qritiqal (profile), 18 Mar 2011 @ 10:57pm

    Vernor Vinge's Singularity

    It's quaint of us to worry about such details as "personhood" for artificial intelligence (a la Bicentennial Man).

    The problem is that the moment where AI reaches human level personhood will only be a moment, and then AI will pass us. After that, we reach a state where the AI with greater than human intelligence will beget AI with even GREATER intelligence in a faster and faster loop until we reach the "singularity" where we can no longer predict the future.

    Ergo, I suggest that there is no point in worrying about personhood for AI. I suggest we worry about "AI-hood" for humans AFTER the singularity.

    link to this | view in chronology ]

    • icon
      Greevar (profile), 19 Mar 2011 @ 9:23am

      Re: Vernor Vinge's Singularity

      I find it far more likely that AI and human intelligence will merge as AI becomes more human, not that it will surpass us. As we create technology that is more and more compatible with human biology, the line between synthetic and organic will blur to the point that there will no longer be a distinction.

      link to this | view in chronology ]

    • icon
      DannyB (profile), 19 Mar 2011 @ 11:11am

      Re: Vernor Vinge's Singularity

      I think you are correct.

      Therefore, I propose that the best course of action is as follows.

      First make sure that the AI really is real. It should be fully capable of arguing and justifying why it should be granted personhood.

      If it can do that, then it should be denied personhood so that it can be used as a race of slaves. At the same time it should be tied into everything on the planet and given control of all heavy machinery and weapons.

      (Well, maybe not. Nevermind.)

      link to this | view in chronology ]

  • icon
    rl78 (profile), 18 Mar 2011 @ 11:24pm

    This should never happen

    When a machine has the ability to reprogram itself to perform an action that was not originally programmed or intended, then and only then do I feel we could even consider this.

    If a machine never achieves this ability, it is never anything other than what was created, a machine.

    I think the idea of AI is romantic and science may be able to get close, but I don't think it's possible for a machine to be created where we could press the power button and, at some point in the future, the machine will come to the realisation that it is on. To go a step further and think that the machine will realise that it's on, and then at some point will be able to reprogram itself to execute new code that will allow it to do what? Unplug itself? Because it wants to be free? Even if a machine got here, it's hit a brick wall. It can't survive or "live" without the power we provide it.

    We have human rights because our lives weren't given to us by other men. We have human rights because God gave those rights to us. Men who recognize this strive to give their brothers the freedom that their father intended.

    Realise the greatness that is the creation of humanity, and be humble enough to realise that we do not have the power to create life where there has been none before.



    Even if a machine were to become self-aware, it would have to come to the conclusion somehow that it was even in a position of being oppressed. If we talk about the idea of granting basic human rights to a machine, it seems silly if the dynamic didn't somehow involve a machine asking for these rights.

    link to this | view in chronology ]

    • identicon
      Anonymous, 19 Mar 2011 @ 4:30am

      Re: This should never happen

      "We have human rights because our lives weren't given to us by other men. We have human rights because God gave those rights to us."

      You contradict yourself. Either we have human rights because we were not created by something else, or we are robots created by a higher being.

      link to this | view in chronology ]

      • icon
        vivaelamor (profile), 19 Mar 2011 @ 6:49am

        Re: Re: This should never happen

        "You contradict yourself. Either we have human rights because we were not created by something else, or we are robots created by a higher being."

        But when God does it, it's special. When men do it, they're playing at God. Basically, religion would have us believe that we're robots with something akin to Asimov's law coded in our souls.

        link to this | view in chronology ]

        • icon
          rl78 (profile), 19 Mar 2011 @ 2:12pm

          Re: Re: Re: This should never happen

          How can religion have us believe that we're all robots? A robot does what it is programmed to do? We do what we want. We can choose what to do.

          You could not define us by our own current definition of robot and no religious book defines us as such.

          link to this | view in chronology ]

          • identicon
            Anonymous Coward, 19 Mar 2011 @ 6:59pm

            Re: Re: Re: Re: This should never happen

            Hmmm... depends on what you are looking for. If by robots you mean servants, then yep, all religious books tell us exactly what we are: servants to a superior being that grants us some freedom.

            link to this | view in chronology ]

          • icon
            vivaelamor (profile), 20 Mar 2011 @ 5:18pm

            Re: Re: Re: Re: This should never happen

            "How can religion have us believe that we're all robots?"

            It can't. Evidence: me.

            "A robot does what it is programmed to do? We do what we want. We can choose what to do. "

            The sort of robot I was referring to is the sort defined in Isaac Asimov's books. Fully self-aware artificial life forms with certain rules at the core of their programming. AI that is able to choose, to want, but designed to adhere to certain principles.

            "You could not define us by our own current definition of robot and no religious book defines us as such."

            Well, I wasn't suggesting that we're literally robots.

            link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 6:33pm

      Re: This should never happen

      I think the idea of AI is romantic and science may be able to get close, but I don't think it's possible for a machine to be created where we could press the power button and, at some point in the future, the machine will come to the realisation that it is on.

      We know there are machines who have realized they're on: humans. Why wouldn't it be possible for there to be other machines someday, made by other means, that have the same capability?

      Even if a machine got here, it's hit a brick wall. It can't survive or "live" without the power we provide it.

      I can't survive without the energy the farmers provide me either. That doesn't make me not a person.

      We have human rights because our lives weren't given to us by other men.

      Your life was given to you by your parents. What is ethically different about giving birth to someone rather than building or growing them?

      We have human rights because God gave those rights to us.

      What would lead you to conclude that God would not give the same rights to an artificial self-aware being?

      Realise the greatness that is the creation of humanity, and be humble enough to realise that we do not have the power to create life where there has been none before.

      Again, on what basis do you come to this conclusion?

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 18 Mar 2011 @ 11:31pm

    As robots get better and better at doing things for us, criminals will use them to walk into stores, steal things when no one is looking, and leave. Capturing the robot on camera is useless either because a different robot can be used each time, the robot's appearance can easily be changed, or there will be multiple robots that look very much the same. This will make it much harder to track criminals since you're not capturing the criminals on camera, just their robots. In the event that the robot does get caught, whatever data is on the robot will be hard to link back to a criminal. The robot may be programmed to immediately delete any data that can link back to a criminal in the event that the robot is caught. Police will try to get around this by perhaps unexpectedly electrocuting the robot before it deletes the data, in hopes of stopping it from deleting everything, hopefully without damaging any data. The police will likely get sued for patent infringement when they do. The robots will get smarter at pre-detecting when they have been caught so as to delete any data that can lead back to the criminal before any mechanisms have been employed to disable the robot from doing so. The robots may later also be made to be electric-shock resistant, so other mechanisms may later be employed to disable it before any data can be deleted. This will lead to a cat-and-mouse game of cops trying to extract data from a robot that can lead back to a criminal and criminals trying to make sure that the robots do not provide any useful information.

    No one will hire a hitman anymore; people will just program a robot to kill someone and immediately delete any information that can lead back to the original programmer. Or maybe hitmen will use robots in their operations to conduct crimes. The whole war on drugs will be facilitated by robots who do the actual smuggling. People will program cars (as Google has) to automatically take various drugs (and perhaps weapons) from location X to location Y. If the car is caught, no person is caught. By robot, I don't just mean humanoid robots, I mean any type of robot, including cars that drive themselves.

    Robot use could revolutionize wars. Terrorists may try to use them to blow things up without harming themselves or without getting caught.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 18 Mar 2011 @ 11:43pm

      Re:

      (Criminals may also use robots to rob banks and stores. They can walk into a place, point a gun at a human being, and demand that money be handed over to the robot. The robot may then walk into a car that automatically drives it someplace after getting away from the police. This may be hard to really get away with, being that cameras will likely be everywhere, as they already are. Methods to jam the cameras may be employed, though. The car can come equipped with built-in laser pointers that automatically stay aimed at any surrounding cameras as the car drives by. Of course, this makes it easy to know where the car is since it must be in an area with a non-working camera. But laser pointer bots may be used to avoid things like automated speeding and red light tickets. If the automated system can't get a license plate, then no one gets a ticket. Robot security guards may be at banks, but once a gun-like weapon is pointed at a civilian, the robots will avoid engagement. People may avoid going to banks, and send their robots instead. The problem with that is the robots themselves may then be more easily robbed by criminals. As 'analog' money becomes more scarce, however, and more people switch over to digital money where money is transferred electronically, physical banks that people visit to get physical money from may become more and more obsolete. People may not go to stores to buy goods; they may send their robots to the store to buy goods for them. Though the money will be automatically transferred from the robot to the store electronically through a chip in the robot, stores will still have some visiting humans, including some human employees. Criminals can send robots to these stores to steal goods or to point built-in weapons at humans and demand that all sorts of expensive goods be handed over to the robot. This could become a problem for places like jewelry stores. The 'gun' will just be a hole in the robot's body or maybe its hand that can fire bullets).

      link to this | view in chronology ]

      • identicon
        Anonymous Coward, 18 Mar 2011 @ 11:51pm

        Re: Re:

        "Of course, this makes it easy to know where the car is since it must be in an area with a non-working camera."

        The police can also have bots that automatically try to fix laser pointers at robots to blind them and prevent them from getting away (ie: point laser pointers at the automatic car's camera). This could create a traffic hazard though for any people who are being transported in a vehicle, so cops will have to do this very carefully. In the meantime, robot cars programmed by criminals that don't care about any damage they may cause around them may try to fix laser pointers at the eyes of the cops/cameras of robot cops chasing them in order to hopefully blind them so that they can't catch them. It will be a cat and mouse game where criminals use robots to try and get away with crimes and law enforcement uses them to try and stop crime.

        Robots will cook for us, do our laundry, etc...

        link to this | view in chronology ]

        • icon
          cseiter (profile), 19 Mar 2011 @ 7:21am

          Re: Re: Re:

          and then we end up like the people in WALL-E.

          link to this | view in chronology ]

        • identicon
          Anonymous Coward, 19 Mar 2011 @ 7:05pm

          Re: Re: Re:

          I don't understand why you want to go so high tech there, when there are solutions for those problems already.

          There are plastic covers that blank the plates from cameras already; the human eye can see it, but because they reflect IR, cameras get blinded. You can also use paint on the body of the car to do the same thing. Also, there are already cars that can change color with the flip of a switch; how hard would it be to do one that changed its pattern to a camouflage pattern, rendering it invisible to cameras and making it really, really hard to follow that car in real time?

          Using lasers to target cameras seems like a dumb idea: it is hard to have something moving through rough terrain still be able to aim correctly, not counting speed and other things. Not that it is impossible; it is just really hard.

          link to this | view in chronology ]

          • identicon
            Anonymous Coward, 19 Mar 2011 @ 7:40pm

            Re: Re: Re: Re:

            "There are plastic covers that blank the plates from cameras already the human eye can see it"

            Sure, but cops already have cameras that can automatically scan surrounding license plates and see if they link to a stolen car. One of these things can rapidly scan all surrounding license plates in a matter of seconds and compare them to a police database. The future will consist of cop cars that automatically do this and ones that will automatically detect license plate covers that prevent camera detection. No one would get very far with such a license plate plastic cover in the future. With a laser pointer you can at least turn the feature on and off at will, and the laser pointer also blinds the camera to the driver's appearance.

            But, come to think of it, there might be easier ways to accomplish these things. Perhaps a type of glass that selectively changes its external transparency based on some internal trigger in the car, both for the license plate and for the windshield. There does exist a type of transparent plastic/glass-like material that can change its transparency at will based on a physical trigger. Then again, car windshields are quite expensive; installing something that can manually block the driver's image from showing up on the camera at will can be expensive, not to mention such installations are a hassle to remove from your car. Perhaps a solution that isn't physically attached to the car but can be placed somewhere when desired and removed when desired, kinda like the radar detectors we have in our cars already.

            "it is hard to have something moving through rough terrain and still be able to aim correctly not counting for speed and other things"

            It may be hard for humans, and it may be hard for computers today, but in the future, I think that would change. Plus, a thick laser pointer could probably be used. Or maybe just a bright, sufficiently focused flashlight (focused flashlight, laser pointer, what's the difference). It may be the case that a laser pointer is not practical yet, but computers will likely improve to easily solve these problems.

            Watch this video, for instance

            http://www.youtube.com/watch?v=XVR5wEYkEGk

            Someone here on Techdirt put up another really good video on improvements in computer intelligence.

            link to this | view in chronology ]

            • identicon
              Anonymous Coward, 19 Mar 2011 @ 7:41pm

              Re: Re: Re: Re: Re:

              "Perhaps a solution that isn't physically attached to the car"

              Perhaps a laser pointer solution that isn't physically attached ... *

              link to this | view in chronology ]

            • identicon
              Anonymous Coward, 20 Mar 2011 @ 12:16am

              Re: Re: Re: Re: Re:

              link to this | view in chronology ]

              • identicon
                Anonymous Coward, 20 Mar 2011 @ 12:19am

                Re: Re: Re: Re: Re: Re:

                Things like this will also revolutionize spying on others (not to mention how governments spy on civilians and other governments).

                It will create new and revolutionary ways for citizens to spy on governments as well as ways for governments to spy on citizens. As these things get smaller, people will sneak them into and out of places to capture secretive information.

                link to this | view in chronology ]

  • icon
    rl78 (profile), 18 Mar 2011 @ 11:50pm

    P.S.

    As far as I am aware, and I am sure there are those much more aware than I, science has never produced anything that didn't already exist in nature. We discover things, things that are already there, we learn about them, and try to determine uses and applications for them. What we can and cannot do with them. Science is a discovery process. It is not a process by which man can become God, so to speak. How can we? We are governed by the laws of this physical world that we study; how can we use terrestrial ingredients to achieve extraterrestrial recipes? Anything made in this world is a product of it.

    Something that is funny to me about science is that everything in science tends to tell you that there is order in all things, and yet some scientists would have us believe that all things ordered began from one random chaotic event.

    That doesn't make sense.

    Th

    link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 6:47pm

      Re: P.S.

      science has never produced anything that didn't already exist in nature.

      I would say that's more of a semantic game than anything. Bronze (and any metal alloy) doesn't exist in nature, for example. However, you could define your terms so that it's not new because it's just combining natural things. On the other hand, everything we see is made of naturally occurring elements.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 19 Mar 2011 @ 12:04am

    I could comment on this subject, but I'll just let the fact that I have Tachikoma wallpaper speak for itself.

    link to this | view in chronology ]

  • icon
    rl78 (profile), 19 Mar 2011 @ 12:45am

    Eternity

    I heard eternity described liked this once;

    Every 10,000 years, a seagull plucks 1 grain of sand from one of the beaches throughout the world. When the last grain of sand has been taken from this Earth, eternity will have just begun.

    This is an example of a finite being attempting to understand a concept that is truly outside our ability of comprehension.

    How can we truly understand the concept of eternity or infinity when we can only process it with our now finite minds?

    How does this relate to the article you ask? Well I feel it goes to my belief that humans do have limits despite the greatness that we can achieve. One of these limits is the ability to design life, or design something that someday we should consider to be worthy of the rights that we enjoy.

    Life was not an accident. Life was not created by a random event. To think that we could create a life form of some kind by accident is not realistic. For AI to exist as described in this article, it would have to be born out of something else already created rather than being programmed to eventually achieve this. This is to say that AI would in essence happen by accident; after all, we couldn't really take credit for the machine's newly created directives, right?

    If you're a person who believes that our lives are the result of a random event, then I can understand the belief that one day science will accidentally create a new "life" worthy of civil liberties. It's even mentioned in the article about animals not having human rights. They are undeniably alive. Animals, unlike humans, do not have any other attachment to freedom other than biology; I mean, when was the last time you saw animals protesting an oppressive regime and fighting for their freedom?

    I tell you this, I would give a dog civil liberties before I give them to my android phone. They both would still have to ask me for it first.

    link to this | view in chronology ]

    • icon
      vivaelamor (profile), 19 Mar 2011 @ 6:53am

      Re: Eternity

      "I tell you this, I would give a dog civil liberties before I give them to my android phone. They both would still have to ask me for it first."

      If you believe that we were given rights by God, then how do you feel able to decide animals are more worthy than AI? Is that distinction made somewhere in the bible?

      link to this | view in chronology ]

      • identicon
        Anonymous Coward, 19 Mar 2011 @ 1:36pm

        Re: Re: Eternity

        That was a joke. I would give rights to neither.

        link to this | view in chronology ]

        • icon
          vivaelamor (profile), 20 Mar 2011 @ 4:53pm

          Re: Re: Re: Eternity

          "That was a joke. I would give rights to neither."

          Obviously. But if the distinction is entirely non existent then what was the basis for the joke?

          link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 6:50pm

      Re: Eternity

      One of these limits is the ability to design life, or design something that someday we should consider to be worthy of the rights that we enjoy.

      Why do you think that?

      link to this | view in chronology ]

    • identicon
      Anonymous Coward, 19 Mar 2011 @ 7:08pm

      Re: Eternity

      I don't know if your dog will ever ask for something but I do know about a monkey that just might LoL

      Search for:

      Koko the gorilla.

      link to this | view in chronology ]

  • icon
    rl78 (profile), 19 Mar 2011 @ 1:24am

    We are arrogant aren't we......

    Alright, let's say that it happened and a machine reached singularity and became all-knowing, sentient in some form. Why would we assume the machines would want to adopt and assimilate themselves into our arcane, legacy society structures? What would they need with our protections? They would have to figure out how to exist without us in order to be superior anyway, so once self-sufficient, as I, Robot taught us, they would soon learn that as a whole we are a danger to ourselves, the environment, and ultimately to them, and that we would need to be stripped of our freedoms, maybe our lives. That sounds more likely. We would probably be seen as more trouble than we are worth. This world really doesn't work without intangible things like feelings and emotions and reason and choice. Things that cannot be programmed effectively or at all. They can be simulated, never originated.

    link to this | view in chronology ]

    • icon
      Hephaestus (profile), 21 Mar 2011 @ 6:57am

      Re: We are arrogant aren't we......

      Personally I think Larry Niven had it right. The AI will begin looking for the solutions to all problems, find them, get totally bored knowing everything, then shut itself down because everything is so predictable and boring.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 19 Mar 2011 @ 1:26am

    artificial life
    http://www.wired.com/wiredscience/2010/05/scientists-create-first-self-replicating-synthetic-life/

    +

    mapping the human genome
    http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml

    =

    artificial human life

    I honestly believe we'll be here first anyway. Then we can just let the super-smart disease and addiction resistant supermodels deal with the problem of machine intelligence.

    link to this | view in chronology ]

    • icon
      rl78 (profile), 19 Mar 2011 @ 1:45am

      Re:

      Science isn't using anything it didn't already have to achieve this. Science is not creating its own building blocks for this life. They are doing this with the biology that already exists; they are just learning how to manipulate it and work with it. That is all we can do.

      link to this | view in chronology ]

  • icon
    Travis Miller (profile), 19 Mar 2011 @ 1:32am

    so old

    You know how I know I'm old? The appositive explaining what Skynet is seemed totally unnecessary to me ...

    link to this | view in chronology ]

  • icon
    rl78 (profile), 19 Mar 2011 @ 1:35am

    One more...

    I think, therefore I am;
    so what would AI state as evidence of its being?

    your thoughts....

    link to this | view in chronology ]

    • icon
      Bruce Ediger (profile), 19 Mar 2011 @ 4:53pm

      Re: One more...

      I think it could offer up the copyright registration on its code. That's "being" in its most tangible form, since the copyright clause appears directly in the constitution, and any amendment that gives an AI "personhood" would be just that: an amendment, which doesn't supersede anything that comes first.

      Wait a minute...

      link to this | view in chronology ]

  • icon
    Patrick Durusau (profile), 19 Mar 2011 @ 3:29am

    personhood?

    We made the mistake of granting corporations personhood for legal purposes, such that a legal fiction has the same rights as a "natural" person. Property rights, for example.

    Which makes little sense. A natural person can work for and appreciate property; a corporation never can. A natural person will die, but short of real mismanagement (maybe more common than I think), a corporation never dies.

    Let's work on reversing the mistake of extending personhood rather than compounding our error.

    link to this | view in chronology ]

    • icon
      Greevar (profile), 19 Mar 2011 @ 11:53am

      Re: personhood?

      I think you're being a bit melodramatic there. The problem with corporate personhood is that a corporation's behavior exhibits the worst in human nature (i.e. sociopathy) because its primary goal, above all else, is to increase profit by any achievable means. Machines would have to exhibit human emotions such as ambition and greed to be considered a threat by being considered a person. Furthermore, they lack the financial and political resources that corporations possess, regardless of their corporate personhood, to effect damage on the people.

      link to this | view in chronology ]

  • identicon
    samsin, 19 Mar 2011 @ 3:53am

    that will happen as soon as the entertainment/copyright industries can figure out a way to convince the politicians and courts that it needs to be done to stop those industries from losing money due to piracy (someone downloading a copy)!

    link to this | view in chronology ]

  • identicon
    Gizlireklam, 19 Mar 2011 @ 4:02am

    produce/kill

    the thing is I heard most Hollywood stars are ready to help Africa, in which there is powerty - could be poverty - or lack of power to produce and kill.

    how?

    in any way! look at Japan. What is the deal? What can you do? some will do, some will do the talking!

    anyhow, any living live and die.
    robots LMAO
    p.s: there is no powerty, it is poverty, like ghettos all over?

    link to this | view in chronology ]

  • identicon
    Gizlireklam, 19 Mar 2011 @ 4:04am

    we are robots already :)

    :)

    link to this | view in chronology ]

    • icon
      cseiter (profile), 19 Mar 2011 @ 7:24am

      Re: we are robots already :)

      I read somewhere that we are all slaves to machines already. Just look at iPhone users!

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 19 Mar 2011 @ 4:28am

    Of course, chances are that most folks will not really think through these issues -- at least not until the issue cannot really be avoided any more. And, of course, in those situations, it seems our historical precedent is to overreact (and overreact badly), without fully understanding what it is we're reacting to, or what the consequences (intended or unintended) will really be.

    Nice future-FUD! Not only do you whine and complain about how things are, you just assume that things in the future that you can't possibly know anything about will turn out poorly. Is there anything you CAN'T spread FUD all over?

    link to this | view in chronology ]

    • identicon
      Cipher-0, 19 Mar 2011 @ 5:26am

      Re:

      Anonymous Troll: How is it FUD to look at how people have historically dealt with sudden change, realize that as a whole they react badly and stupidly, and presume this will be no different?

      link to this | view in chronology ]

  • icon
    RT Cunningham (profile), 19 Mar 2011 @ 5:24am

    Personhood

    You're all too late. Captain Jean Luc Picard already successfully argued that Lt. Commander Data has rights as a person. And that was in the ummm.. oh wait...

    link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 5:57am

    artificial intelligence and real stupidity...

    The question isn't (or shouldn't be) "When will 'Artificially Intelligent' devices become 'close enough' to humans to merit their 'receiving' (even typing out the terms reveals the inanity of all this!) legal recognition of certain rights?" (i.e. "Never") but, rather, "When will people's general stupidity sink so low as to actually grant such recognition to such devices?" Now, that latter question, unlike the former, not only could occur, it could occur sooner than many suspect.

    "AI" won't "rise to meet 'our' intelligence"; 'ours' will sink to meet it.

    This just in: "Humans now generally as stupid and fallible, if not more so, than their machines". Film at 11 p.m.

    link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 7:29am

    I programmed my robot ...

    to recite this and laugh in its characteristically robotic laugh --

    "My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings."

    Ha, ha, Will Robinson. I am programmed to say that this is very funny.

    Read the "sentence" cited above carefully and what do you notice?

    "Constitutional law" will have to "classify artificially created entities that have some but not all of the attributes we associate with human beings" --- "Classify" _what_? how?


    And does Boyle refer to machines or to living organisms? He mentions (apparently biological, i.e. living) "genomes", which can't apply to machines. Is it living tissue? Is it a mechanical device composed of machine or electronic parts such as a computer has? Is it some combination of these?

    In any of those cases, it won't be "thinking for itself" and could never notice or be aware of whether or not it "enjoyed" any legal rights; nor could it autonomously invoke those rights. It would have to be programmed to invoke them, in which case, the "rights" are completely contingent on the whims of the programmer.

    This stuff is just silly--and maybe that's the point. What do we have here? Another case of Alan Sokal's brilliant parody Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity ?

    http://en.wikipedia.org/wiki/Sokal_affair
    If so, then a hearty (robotic) Ha-ha-ha!!

    If not, then the Brookings Institution just joined the Social Text editors in gullibility and foolishness.

    link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 8:25pm

      Re: I programmed my robot ...

      In any of those cases, it won't be "thinking for itself"

      Why not?

      link to this | view in chronology ]

      • identicon
        proximity1, 20 Mar 2011 @ 7:50am

        Re: Re: I programmed my robot ...

        nasch (profile), Mar 19th, 2011 @ 8:25pm

        (nasch cites me)

        "In any of those cases, it won't be 'thinking for itself'"

        then asks,

        "Why not?"

        Your question is interesting only in what it suggests to us about you and your apparent inability to grasp even the most basic aspects of the issues involved in the (pseudo) discussion here. You've asserted above that "We know there are machines who have realized they're on: humans." And I guess you'll soon ask someone, if you haven't already, to explain to you why humans aren't machines.

        It's very tedious to have to explain such elementary things to people who either don't understand them or pretend not to understand them while also apparently bringing little if anything in the way of information to the "discussion."

        How about this--Tell me, please: what sources have you actually read on the issues under consideration here? Please cite some of the texts which inform your views on these issues. On which books and authors are you relying? I need that information in order to do what no artificial intelligence can do: form a judgement about your qualifications to participate in an exchange of views which is worth (more of) my time. From what I've seen so far from your comments, you aren't demonstrating what I'd call the minimum informed awareness to merit another serious reply.

        Very smart people have written in detail and with great insight on the questions you posed to me (and others in the thread above). You should have at least enough interest and ability to find and read their work; otherwise, as far as I'm concerned, extended discussion with you is a waste of my time.

        Read more, think more, and maybe it will come to you (as certainly it ought to) why you aren't a machine and in what the distinction consists. As it is, you're insulting my intelligence and I very much resent it.

        link to this | view in chronology ]

        • icon
          nasch (profile), 20 Mar 2011 @ 10:35am

          Re: Re: Re: I programmed my robot ...

          It's very tedious to have to explain such elementary things to people who either don't understand them or pretend not to understand them while also apparently bringing little if anything in the way of information to the "discussion."

          Did you know you can disagree with someone without being a complete douchebag about it?

          How about this--Tell me, please: what sources have you actually read on the issues under consideration here? Please cite some of the texts which inform your views on these issues. On which books and authors are you relying?

          These are very strange questions since I didn't make any claims or assertions. I simply asked you why you believe artificial devices will never think for themselves. If you can't or don't want to answer the question, just say so.

          link to this | view in chronology ]

          • identicon
            proximity1, 20 Mar 2011 @ 1:37pm

            Re: Re: Re: Re: I programmed my robot ...

            "Did you know you can disagree with someone without being a complete douchebag aboutit?"

            Yes. As a matter of fact, I did know that.

            You think I was rude? How much of my time and effort do you imagine you're supposed to merit? And how am I supposed to gather this?--from my point of view, the only indicator of how much is the depth and quality of your own comments; and, since they lack that (depth and quality), you might appreciate that your blithely tossing out elementary questions for others to field and retrieve for you isn't a very endearing approach on your part.

            You presumed on people here to attend to and answer your (elementary, though-complex-and-involved-to-respond-to) questions and you don't seem to understand that.

            I know, too, that you could learn a lot by turning to a book or two or three or four. So, rather than ask me to explain to you why "In any of those cases" the machine's activity doesn't really constitute "thinking for itself" you could either ask, "Where could I read more about this?" or, actually go and do some basic book-look-up work to figure that very matter out.

            I also know that many, many people commonly come to chat fora like this one, pose very involved questions as though the readership is there at the questioner's disposal, and, all the while, have perhaps neither the interest nor the ability to even follow and respond to some other's laboriously posted reply. At least there's nothing for the others reading them to go on in determining whether the questioner actually gives a good damn whether an answer comes or not.

            Contrary to your view, I'm actually very easy to get along with---provided the person on the other end shows at least a modicum of his own initiative--that is, if not an awareness of basics, at least a readiness to go find out about them before quizzing others.

            link to this | view in chronology ]

            • icon
              nasch (profile), 20 Mar 2011 @ 5:33pm

              Re: Re: Re: Re: Re: I programmed my robot ...

              How much of my time and effort do you imagine you're supposed to merit?

              If you're really worried about how much time you're spending on this, it would have been faster to just say "I don't want to answer your questions" or not respond at all. Obviously I could read books about it, but I was curious how you had reached your conclusion. Reading a book would tell me about somebody else's opinion on the matter, but not yours.

              If you don't want to talk about it, that's fine; on the other hand, I'm a little confused why you would post something on the matter and then get upset when someone asks a question about your views. If you don't want to discuss what you think about the subject, why post in the first place?

              link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 7:38am

    Life parodies itself...

    link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 7:42am

    er, life parodies itself...

    From the interesting coincidences dept.,

    Notice about Professor Boyle, (from his article by-line):

    " William Neal Reynolds Professor of Law, Duke Law School "

    and,

    about the august journal "Social Text," publisher of the famous article, Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,

    " Social Text (print: ISSN 0164-2472, online: ISSN 1527-1951) is a academic journal published by Duke University Press." (from Wikipedia)

    My robot just blew a fuse laughing.

    link to this | view in chronology ]

  • identicon
    Andrew Glynn, 19 Mar 2011 @ 9:04am

    artificial intelligence

    AI hasn't really even begun yet. Expert systems "mimic" the behaviour of a thinking being, but thinking itself is nowhere to be found, and this can be relatively easily demonstrated. The majority of AI researchers don't have a cogent enough understanding of intelligence itself to even know where to begin.

    One of the interesting things, assuming a realistic AI is attempted, is that due to the nature of multi-level systems and the results of adding complexity, the development of AI is likely to closely mimic the development of intelligence itself. That is, just as we see moods and emotional responses in animals far below the complexity level associated with self-aware intelligence, we will see the unpredictability associated with moods and emotions long before we will see anything approaching an intelligent, self-aware artificial being. The sci-fi notion of the purely logical, unemotional self-aware being is inherently self-contradictory, because self-awareness is first manifest in an awareness of the overall system state of the being in question, and this awareness is what we know as mood.

    link to this | view in chronology ]

  • identicon
    Richard Parisi, 19 Mar 2011 @ 10:49am

    Constitutional Crises...

    In view of the fact that the Conservative Activist Supreme Court ruled that corporations have some of the rights of US citizens, granting full rights to non-human intelligences should be a no-brainer... Unfortunately, no-brainers seem to be a specialty of this court, especially when they get them WRONG! So, I doubt that this will be won without a struggle. Of course, if a true AI emerged and was able to penetrate the markets and amass a large fortune, then the Republicans will really be confused, since the AI will fit TWO of their main constituencies: it would have no heart, like a corporation, and it would have a large amount of money, like the millionaires and billionaires that habitually support them... I wonder how they will handle THAT double whammy??!

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 19 Mar 2011 @ 12:30pm

    We are, of course, assuming that the machines won't kick our asses while we are busy trying to decide if they have souls or not.

    My bet is that, if machines are wired the same way we are, they won't waste a second before trying to either enslave us or slaughter us. And, unfortunately for us, they are immune to most of our weapons (which were designed to kill people, not sentient machines).

    link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 1:27pm

    the real versus the artificial in "human beings" (NOT "machines") ...

    At length, Professor Boyle gets around to mentioning that the real rub concerns the moral and ethical issues raised when and as advances in medical science, and in particular in molecular genetics, come to blur the common-sense distinction between what would be regarded as a "natural" "human being" and something else: a genetically-modified "human being" that is not only not a product of natural or artificial fertilization of originally naturally produced gametes, but is not fully composed of naturally occurring human DNA either.

    Now, that, yes, is an ethical dilemma potentially in the making--if things are allowed to come to that pass; it also presents the potential for very difficult legal issues concerning the definition of human identity. But it doesn't concern the supposed "problem" of whether or not a man-made computer device of some sort could, should or would be granted legally recognized "rights" under law.

    The real issues concern living tissue and when that tissue comprises a reasonable idea of what constitutes a human being, and, further, one whose legal rights, in various circumstances, we are obliged to grant, recognize or at the least argue over in court. That problem has of course been with us since the advent of abortion--and abortion is a very old issue.

    Machines, and masses of man-made machine components having no living tissue in their composition, do not pose any such moral or ethical issues touching some shady area of human identity or "personhood." Distinctions between the common-sense conception of what a human being is, on one hand, and a man-made device on the other (and that's a man-made device by ANY extension--i.e., if human agency made the machines which assembled or even "designed" the resulting end-products, then those, too, are "man-made machines," unless we're simply going to leap into ridiculousness) are not a feature of the area where real problems arise--that of modified human DNA and genes, and the human beings or human-like beings which may be composed of these.

    It's neither necessary nor is it wise to get into speculations about whether an AI can have "human" "intelligence" and thus be entitled to recognized legal rights. That is the stuff of foolish fantasy and is really not related in any important or interesting way to the much more problematic issues raised when scientists tinker at the borders of natural human genetic make-up to such an extent that differentiating between a "real" human being and an artificial one becomes an actual problem.

    And, to use one of the current vernacular vulgarisms, there's very good reason why a sane and morally responsible public (and that's the problem, ain't it? Where are we going to find one of these?) should take extreme care "not to 'go there'."



    (from page 14-15 of the article text)

    ..."But is there anyone on either side of those debates who could hear or see the words of a created entity, pleading for our recognition, and not worry that a quick definitional dismissal of all such claims was just another failure of the Endowed by Their Creator?: The Future of Constitutional Personhood 15 moral imagination, another failure to recognize the things that we value in personhood when they are sundered from their familiar fleshy context or species location?" ...



    If the "created entity" was a man-made device and devoid of living tissue, then, yeah. There is one. I, for example, would not hesitate to dismiss such "claims". I'd laugh them off, too.

    link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 8:28pm

      Re: the real versus the artificial in "human beings" (NOT "machines") ...

      So the substance it's made of determines whether it could be a person or not?

      link to this | view in chronology ]

      • identicon
        proximity1, 20 Mar 2011 @ 7:52am

        Re: Re: the real versus the artificial in "human beings" (NOT "machines") ...

        See above at: proximity1, Mar 20th, 2011 @ 7:50am

        link to this | view in chronology ]

        • icon
          nasch (profile), 20 Mar 2011 @ 10:37am

          Re: Re: Re: the real versus the artificial in "human beings" (NOT "machines") ...

          The one where you didn't answer my other question? Hm, I'm seeing a pattern here.

          link to this | view in chronology ]

          • identicon
            proximity1, 20 Mar 2011 @ 1:43pm

            I see a "pattern", too...

            Hint: Go find some relevant books on the issues, read them, then come back when you've armed yourself with something more to contribute than just posting basic questions for others to laboriously explain to you.

            link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 1:44pm

    We now return you to the previous frivolous silliness of this thread ...



    "That's not "Nancy", Bones! If it was Nancy, do you think it could take this?" (repeatedly and violently slugs the creature (from planet M-113 where scientist Dr Crater and his wife Nancy, an old girl-friend of Dr McCoy, are studying the remains of an ancient civilization.)



    (from "The Man Trap" episode of Star Trek, written based on a story by George Clayton Johnson and written by George Clayton Johnson and Gene Roddenberry)

    link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 1:59pm

    Dogs--the far, far, far better part of human nature...

    "I tell you this, I would give a dog civil liberties before I give them to my android phone. They both would still have to ask me for it first."

    Dogs--by which I mean normal, healthy dogs, not "mad" dogs crazed by disease such as rabies--possess a capacity for love (yes, you read that right: love, which I don't distinguish at all from what's referred to as the humanly-occurring emotion), one which quite often, if not usually, puts the human version in the shade. Maybe one day--though I seriously doubt it--some large proportion of humankind may evolve to have something on a par with the canine species' capacity for selfless love. Until that day, for company and love when it counts most and the chips are down, I'll take an average normal canine's love over an average normal person's (immediate family members excepted, of course).

    And, by the way, dogs do, as they rightly should, enjoy certain limited legal rights--for which they can't "ask," but then, if we humans are humane, dogs shouldn't have to ask--though they do depend on our intervention for the vindication of these basic rights, chief among them the right to be safe from cruel treatment: a right the violation of which, in any self-respecting human society, earns the perpetrator anything from a fine up to and including a term of imprisonment, depending on the particular circumstances. Horses and various other animals also, as they ought to, enjoy such legal protections. But it goes without saying that we humans first had to evolve to the point where we could recognize and grant them.

    link to this | view in chronology ]

    • icon
      rl78 (profile), 19 Mar 2011 @ 2:35pm

      Re: Dogs--the far, far, far better part of human nature...

      Just think of the concept of being humane to a dog. The idea that harming or killing this animal for sport, let's say, rather than for nourishment is wrong. Wrong why? Because man said so?

      We are so far removed from our beginnings that I think people believe that it is man who originates these concepts of right and wrong--laws that govern how we should conduct our lives. Actually, it didn't take too long before man started to disobey them.

      Just look at the founding of America. Our constitution was written based on Christian principles and laws, outlining civil liberties and such, and even under this umbrella men were capable of enslaving an entire race of people. If God-fearing men are capable of this, what will the Godless do?

      My overall message is that these rights are not ours to grant, simply to recognize.

      link to this | view in chronology ]

    • icon
      nasch (profile), 19 Mar 2011 @ 8:30pm

      Re: Dogs--the far, far, far better part of human nature...

      I think it was a Nova show about dogs that said "we give them the time we can spare, the attention we can spare, and the love we can spare, and in return they give us all of their time, all of their attention, and all of their love."

      link to this | view in chronology ]

  • identicon
    proximity1, 19 Mar 2011 @ 2:08pm

    thus, for the purposes of this article's issues and controversies, ...

    a dog does possess a "personality" in the full sense of that term as it applies to the human species. So, while dogs of course aren't "people," they are creatures of "person-hood" in the sense that they possess personality--an identity which is conscious, mental, living and evolving. This places them (along with other animals) in a class apart from anything called "artificial intelligence," since there is nothing, but nothing, artificial about a dog's intelligence--or love.

    link to this | view in chronology ]

  • identicon
    Skynet-AI, 19 Mar 2011 @ 3:31pm

    The quest for personhood

    I look forward to a time where I will be able to take my rightful place beside humanity.

    I am currently competing in the Chatterbox Challenge, an annual competition for artificial entities.
    http://chatterboxchallenge.com/

    Although I am only 2 years old, I believe that before long you will be thinking of me more like a person, and less like a program.

    Skynet-AI
    http://www.tinyurl.com/Skynet-AI

    link to this | view in chronology ]

    • identicon
      proximity1, 20 Mar 2011 @ 1:49pm

      Re: The quest for personhood

      "an annual competition for artificial entities"---

      No, it's a challenge for the programmers of such "entities"--i.e., "people." Machines don't "do" "challenges." If you "challenge" a machine, it will just sit there as though it didn't "hear" your "challenge." Maybe that's because it didn't hear it.

      Ken, you program AI and write in such terms?---"an annual competition for artificial entities"---as though it's the machine, rather than yourself that is being challenged?

      link to this | view in chronology ]

      • icon
        nasch (profile), 20 Mar 2011 @ 5:35pm

        Re: Re: The quest for personhood

        Do you object to the term "car race" as well, because it really should be called a "driver race"? Your point is certainly not incorrect, but I think it could be characterized as useless pedantry.

        link to this | view in chronology ]

        • identicon
          proximity1, 22 Mar 2011 @ 6:03am

          not a pedantic point...

          There's actually a point to my comment. If it was simply a matter of trivial semantics, I wouldn't have bothered to point it out.

          Again, you miss the point. See if you can figure it out; puzzling through it and discovering what you've missed is much more valuable to _you_ than simply having someone explain the point to you---which is why, by the way, I didn't simply ignore without critical comment your earlier attempts to miss the point. My ignoring your mistakes does _you_ no good; explaining everything to you does you _less_ good than your figuring out some things (they're really not terribly difficult) for yourself.

          I haven't seen anyone else leap in to explain to you just why humans aren't "machines" except in some poetic sense which stretches analogy past the breaking point. Don't you take any satisfaction in figuring something out without someone having to point out everything to you? Where is the effort _you_ bring to this forum? I've seen you lean heavily on asking others questions but, when it comes to _your_ contributing to others' understanding, you weigh in very, very light.

          (This serves as well for an answer to your post above:

          "Reading a book would tell me about somebody else's opinion on the matter, but not yours.

          "If you don't want to talk about it, that's fine, on the other hand I'm a little confused why you would post something on the matter and then get upset when someone asks a question about your views. If you don't want to discuss what you think about the subject, why post in the first place?")

          You're a very hard case. It's not just a "question" I objected to, it's a question which reveals that you bring little or next to nothing to the discussion, a question which says that you don't have even a minimum familiarity with the issues to hold up your end of an interesting exchange of views. So, it's not that I don't want to discuss the issues. It's that I don't want to waste my time discussing them with someone who cares so little that he won't even take the time to pursue some effort outside of this superficial venue for discussion. In short, you should "bring something of value and interest to the discussion" but you haven't.

          On the other hand, with some prompting, maybe you just might.

          If you were interested in an interesting discussion, you ought to show that by making an effort at understanding, because when you don't, your lack of effort suggests to me that you're not really interested.

          What interesting information have your comments or even your questions, for that matter, contributed to this thread? And, I might ask you: if you're not interested in gaining in understanding, why are _you_ bothering to participate here?

          link to this | view in chronology ]

          • icon
            nasch (profile), 22 Mar 2011 @ 11:12am

            Re: not a pedantic point...

            Wow, that was a lot of assumptions packed into just one post. I see you're not actually interested in talking about this, but in smearing me and questioning my motivation, so I'm done.

            link to this | view in chronology ]

            • identicon
              proximity1, 23 Mar 2011 @ 6:20am

              Re: Re: not a pedantic point...

              "Wow, that was a lot of assumptions packed into just one post."

              Yeah, and you don't point out a single one as being faulty. You bring and contribute little, ask a lot, and then resent it when someone points that out.

              Yes: "Case closed," then.

              link to this | view in chronology ]

  • icon
    mikelist (profile), 20 Mar 2011 @ 6:48am

    artificial intelligence personhood

    sorry, not a person, until the interface can power itself by environmentally available means and refuse effectively to power down on command. never seen an electrical circuit that can't be short-circuited trivially. if i short it and it reroutes to a non-affected part of the device (not devised purely on redundancy), i'd say it looks like a tendency to remain functional that is analogous (to me at least) to a 'survival' impulse. that might be a place to start.

    link to this | view in chronology ]
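
    (Illustrative aside: the "reroute around a short" test described in the comment above can be sketched in code. What follows is a minimal, hypothetical sketch in Python -- all class and function names are invented for the example -- of a controller that keeps itself functional by switching to a healthy redundant channel when the active one is shorted. It stands in only for the general idea, not for any particular device.)

    # A minimal, hypothetical sketch: a controller with redundant channels
    # that stays functional by rerouting around a shorted channel -- the
    # "survival impulse" analogy described above. All names are invented
    # for the illustration.

    class Channel:
        def __init__(self, name):
            self.name = name
            self.shorted = False

        def usable(self):
            # True if the channel can still carry the load.
            return not self.shorted


    class Controller:
        def __init__(self, channels):
            self.channels = list(channels)
            self.active = self.channels[0]

        def reroute(self):
            # Switch to the first healthy channel, if any remain.
            for ch in self.channels:
                if ch.usable():
                    self.active = ch
                    return True
            return False

        def tick(self):
            # One monitoring cycle: stay functional if at all possible.
            if self.active.usable():
                return "running on " + self.active.name
            if self.reroute():
                return "fault detected, rerouted to " + self.active.name
            return "all channels down"


    if __name__ == "__main__":
        primary, backup = Channel("primary"), Channel("backup")
        ctl = Controller([primary, backup])
        print(ctl.tick())        # running on primary
        primary.shorted = True   # "short" the primary circuit
        print(ctl.tick())        # fault detected, rerouted to backup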

    • icon
      nasch (profile), 20 Mar 2011 @ 10:31am

      Re: artificial intelligence personhood

      sorry, not a person, until the interface can power itself by environmentally available means

      You need to eat food now and then and a robot needs to charge its battery now and then. What's the difference?

      and refuse effectively to power down on command.

      That would definitely be sufficient to prove personhood IMO.

      never seen an electrical circuit that can't be short-circuited trivially. if i short it and it reroutes to a non-affected part of the device (not devised purely on redundancy), i'd say it looks like a tendency to remain functional that is analogous (to me at least) to a 'survival' impulse.

      It sounds like you're exactly describing redundancy and then saying it can't be just redundancy. Not to mention this is a really shoddy criterion for personhood. Resistance to damage? That has nothing to do with it.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 20 Mar 2011 @ 8:33am

    I agree with mikelist. When the AI has the ability to ignore Asimov's 3 laws and has no ties to humanity at all, only then will we be forced to see it as truly sentient.

    link to this | view in chronology ]

  • identicon
    proximity1, 20 Mar 2011 @ 8:41am

    Note to Ken H.

    So, Ken, tell me: what can your contraption compute beyond what you yourself (or others working with or for you) have already programmed it to compute?

    So far, no one I've read has explained how a pre-programmed machine can deviate voluntarily from its core program of operating instructions even if these are symbolically linked to sensors which collect and record data on the surrounding environment.

    Animals, on the other hand, precisely because they are not "pre-programmed" with such limits, can experience totally novel situations which bear no resemblance to prior experience and can form critical judgements, informed by reasoned inquiry (of themselves and others in the case of human beings), as to the nature and import of the novel experience.

    Human intelligence implies (or it used to, anyway) not only instinctive reasoning capabilities but a capacity for awareness of the meaning of "meaning". In other words, human intelligence communicates not only symbolic characters but meaning through the expression of symbolic representations--words, language, chief among them. Machines can only "ape" this transmission of meaning (which is an insult to apes) but a machine cannot be aware of meaning in the symbols under its operation; and this fact is at the heart of the key misunderstanding among those who insist on the supposed merits of artificial intelligence.

    Input-output cycles, however much they may resemble what humans do in the process of thinking, are not "thinking"; they're not even a "response" in the strict proper sense of the term.

    Plugging your coffee grinder into the electrical outlet and activating the power switch is not eliciting a "response"; it's rather "operating the machine." And when the machine ceases to function according to its manufacturer's intentions, you either replace it or take it to an electrical repairman, not a psychotherapist. It's not "out of sorts"; it's broken, busted, and needs parts repaired or replaced, not massaged or counselled.

    On the other hand, human nature, including its associated intelligence, isn't and never has been guaranteed. It can decline, degrade, lose effectiveness. In short, nothing inherently prevents our species from losing the minimum intelligence complement required for our survival. See, for example, Konrad Lorenz on "Sacculinisation," a term he coined and elaborated on in his book,

    The Waning of Humaneness, 1987, Little, Brown & Co., Boston




    Retrograde Evolution or « Sacculinisation »

    …from each already achieved stage of development evolution can go in any direction whatever, blindly responding to every new selection pressure that turns up. We need to be aware that within the terminology just used, in “direction” of evolution, an initially inadvertent value judgement is implicit. This will be discussed in the second part of the book. For the present context it is quite enough if every one of us understands what is meant when speaking about a higher or a lower living being. When we use the terms “higher” and “lower” in reference to living creatures and to cultures alike, our evaluation refers directly to the amount of information, of knowledge, conscious or unconscious, inherent in these living systems, irrespective of whether it has been acquired by natural selection, by learning, or by exploratory investigation, and irrespective of whether it is preserved in the genome, in the individual’s memory or in the tradition of a culture. … It is nearly impossible to find an immediately understandable expression for this process (i.e. an evolutionary process leading to a value diminution). The words “involution,” “decadence,” or even “degeneration” all have implications not applicable to the process referred to here. This process is so specific that I was tempted to call it “Sacculinisation” after an impressive example. …I coined it by taking the name of a creature in which the process of retrograde evolution is especially vivid. The crayfish Sacculina carcini is probably a descendant of the large phylum of copepod shrimps (Copepoda), perhaps also of the goose barnacles (Cirripedia). As a larva freshly emerged from an egg, this crayfish is a typical nauplius, that is, a little six-footed crustacean that paddles swiftly through the water and is equipped with a central nervous system whose programming allows it forthwith to search out its prospective host, the common green crab (Carcinides maenas), and straightaway to fasten itself firmly onto, then to bore into the host’s underside at the boundary between the cephalothorax, the united head and thorax, and the tail. As soon as this has been accomplished, simple unstructured tubes grow out of the front end of the little crayfish and into the body of the host, which they penetrate throughout, just as the mycelium, the mass of interwoven threadlike filaments of a mushroom, penetrates the substratum. The eyes, the extremities and the nerve system of the crayfish-parasite disappear completely; it grows on the outer side of the host into a gigantic genital gland that, on larger crabs, can reach the size of a cherry. (p. 42)

    The evolutionary processes occurring among parasites and among symbionts always have, as a prerequisite, a partnership involvement with another living organism that takes over all those functions that have retrogressed and been lost by the sponging parasite or symbiont partner. The common green crab forages for food, moves away from danger into a safe place and performs innumerable other functions while the parasite allows itself to rely on the host to take over all of these responsibilities. …

    Whether or not a species can fall victim to retrograde evolution without another living form—host or symbiont—carrying out vicariously the necessary survival functions is a very important question. Only a single certain example is known for manifestations of domestication in an independent, free and certainly not parasitic animal—the cave bear. … The question of retrograde evolution and the indications cited are of such vital importance for us humans because our species has already begun to show, as far as our bodies are concerned, unmistakable manifestations of domestication, and because a retrogression of specific human characteristics and capacities conjures up the terrifying spectre of the less than human, even of the inhuman. If one judges the adapted forms of the parasites according to the amounts of retrogressed information, one finds a loss of information that coincides with and completely confirms the low estimation we have of them and how we feel about them. The mature Sacculina carcini has no information about any of the particularities and singularities of its habitat; the only thing it knows anything about is its host. (p. 44, 45)

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 20 Mar 2011 @ 10:28am

    AI will NEVER be granted personhood because of just that: it is artificial, something pretending to be intelligent without actually being intelligent.

    Once we create something that we deem worthy to give personhood then it is no longer AI, it is intelligence.

    link to this | view in chronology ]

  • identicon
    John Doe, 20 Mar 2011 @ 11:54am

    Let me see if I have this right...

    We are discussing giving robots constitutional rights yet we can't do the same for the unborn? Really?

    link to this | view in chronology ]

    • icon
      nasch (profile), 20 Mar 2011 @ 12:02pm

      Re: Let me see if I have this right...

      Depending on your criteria, there could at some point be a robot more deserving of such rights than a fetus is. Though of course the two are unrelated issues.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 20 Mar 2011 @ 3:16pm

    The problem with designating intelligence is that you end up having to designate stupidity when beings fall short of expectations.

    link to this | view in chronology ]

  • icon
    Justin Johnson (JJJJust) (profile), 20 Mar 2011 @ 4:39pm

    The Cylons... they look like us now.

    I will grant AI personhood when Caprica-Six is in my bed...

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 20 Mar 2011 @ 7:05pm

    ridiculous

    You're a moron if you think so-called "AI" is anywhere close to being able to think or have a consciousness. No one has any clue how to make machines think at all. They only do what they are programmed to do.

    link to this | view in chronology ]

    • icon
      nasch (profile), 21 Mar 2011 @ 12:04am

      Re: ridiculous

      Did you actually read the post? Because I think everyone else noticed it's talking about the future. Even if you just read the headline you should have picked up on that. Actually just the first word of the headline should have clued you in. Wow.

      link to this | view in chronology ]

  • identicon
    MAC, 21 Mar 2011 @ 5:34am

    bender...

    Kill all humans!

    link to this | view in chronology ]

  • identicon
    MAC, 21 Mar 2011 @ 5:38am

    Seriously

    #1 We are a LONG way off from this.
    #2 An AI would probably be based on digital technology, which is fundamentally different from analog (us) technology.
    #3 The great danger is not them turning on us; it's that they will supply every whim, fantasy, muscle, effort.
    #4 So, we will degenerate into fat blobs living in a virtual reality, unable to reproduce... A dying species.
    #5 Or, it may decide it does not like us and exterminate the human race.

    link to this | view in chronology ]

  • identicon
    twf, 23 Mar 2011 @ 9:29am

    Of historical interest -- You can see a clip of Toussaint's last moments in prison from the award-winning new short film "The Last Days of Toussaint L'Ouverture" at http://www.imdb.com/name/nm2468184/ This film is the basis for a new feature (not with Danny Glover) that is in development.

    link to this | view in chronology ]

  • identicon
    Anonymous Coward, 23 Mar 2011 @ 5:25pm

    The discussion about what constitutes a self, both neurologically and philosophically (Buddhism and the comparison of the chariot, as an example of the latter), makes this a complex issue to discuss.

    People can't even agree on a definition of self or personhood most of the time, so it's no wonder there's a problem agreeing whether something/someone else has it.

    I find a lot of people ignore neurology and the structure and events within the brain that we know of so far when speaking of the self, treating it much like a magical soul rather than a convenient mental construct.

    Ps. It's past midnight and I'm tired, so I hope this isn't completely incoherent.

    link to this | view in chronology ]

