When Will We Have To Grant Artificial Intelligence Personhood?
from the one-is-glad-to-be-of-service dept
James Boyle has a fascinating new paper up, which will act as something of an early warning about a legal question that will undoubtedly become a much bigger issue down the road: how we deal with the Constitutional question of "personhood" for artificial intelligence. He sets it up with two "science-fiction-like" examples, neither of which may really be that far-fetched. Part of the issue is that we, as a species, tend to be pretty bad at predicting rates of change in technology, especially when it's escalating quickly. And thus, it's hard to predict how some of these things will play out (well, without tending to get it really, really wrong). However, it is certainly not crazy to suggest that artificial intelligence will continue to improve, and it's quite likely that we'll have more "life-like" or "human-like" machines in the not-so-distant future. And, at some point, that's clearly going to raise some constitutional questions:

My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms -- computer-based intelligences, for example -- yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human -- such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?

That link only takes you to the opening chapter of the paper, but from there you can download the full PDF, which is certainly thought-provoking. Of course, chances are that most folks will not really think through these issues -- at least not until the issue cannot really be avoided any more. And, of course, in those situations, it seems our historical precedent is to overreact (and overreact badly), without fully understanding what it is we're reacting to, or what the consequences (intended or unintended) will really be.
Filed Under: artificial intelligence, personhood, rights
Reader Comments
I'm sure most here are familiar with Asimov's laws of robotics. There have been many debates about how ethical it is to imprint such rules in any creature we may devise. This is kind of the same question, really, but from a different perspective.
If anyone is interested, you can look at John McCarthy's Stanford website. He's one of the geniuses who founded the field, he's credited with coming up with the term "Artificial Intelligence", and he's also the creator of the LISP programming language.
He wrote a short story, which I thought was quite interesting, that deals with the AI personhood issue. It may require some basic knowledge of LISP, but it's not hard to understand if you remember that the basic syntax is in prefix form, e.g.: (function-name argument (function-name argument)).
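(For anyone who's never seen prefix notation, here's a minimal, purely illustrative sketch -- in Python rather than LISP, and not taken from McCarthy's story -- of how a (function-name argument argument) expression reads: the function comes first, and the arguments, which may themselves be nested expressions, follow.)

import operator

# Map a few Lisp-style operator names to Python functions (illustration only).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_prefix(expr):
    # A bare number evaluates to itself.
    if not isinstance(expr, tuple):
        return expr
    # Prefix form: the operator comes first, its arguments after.
    op, *args = expr
    return OPS[op](*(eval_prefix(a) for a in args))

# The Lisp expression (+ 1 (* 2 3)) modeled as nested tuples; prints 7.
print(eval_prefix(("+", 1, ("*", 2, 3))))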
[ link to this | view in chronology ]
a solution
[ link to this | view in chronology ]
Re: a solution
[ link to this | view in chronology ]
Re: a solution
[ link to this | view in chronology ]
[ link to this | view in chronology ]
Re:
If we grant "personhood" to a chunk of code, then don't the corporate masters of those who wrote the code own the copyright? The rightsholders could legally prevent the AI from fixing any bugs it detects, couldn't they?
Also, wouldn't the right-to-lifers get involved at this point? The rightsholders, in preventing any copying, would be preventing the reproduction of a living, thinking being.
This sounds like a supreme mess.
[ link to this | view in chronology ]
Ugh...
Seriously, THIS is what I wrote about. And with the advent of Digital Philosophy Theory, these are serious questions, because mapping a developing consciousness is something that IS going to be done....
[ link to this | view in chronology ]
http://www.penny-arcade.com/comic/2010/7/23/
http://www.penny-arcade.com/comic/2010/7/26/
http://www.penny-arcade.com/comic/2010/7/28/
http://www.penny-arcade.com/comic/2010/7/30/
http://www.penny-arcade.com/comic/2010/8/2/
[ link to this | view in chronology ]
Interesting question to ponder
All I know is that even if a computer could feel pain, it wouldn't be actual pain, but rather an interpretation of stimuli that WOULD cause pain in a human.
[ link to this | view in chronology ]
Re: Interesting question to ponder
...what's the difference? Pain is simply the body's response to being harmed. What's the difference between a chemical signal down your nerves and an electrical signal down a wire?
Also worth pointing out that you have no idea at all what I feel or how my mind reacts when you punch me. You only know that I react in a way consistent with what you have learned to be a feeling called 'pain'. My mind could be entirely different from yours. You don't know. You can only look at how I act, and from that you must assume I have a similar intelligence, similar feelings, etc. as you do. Why would a machine be any different?
[ link to this | view in chronology ]
Re: Interesting question to ponder
Neither do I, and neither do you. Prove otherwise.
[ link to this | view in chronology ]
Re: Interesting question to ponder
Pain is a horrible definition or idea of what defines a person. If you believe it is, you've missed the point of that particular philosophical branch entirely. It isn't the pain that defines us. It's what we do within, and how we go beyond, the limits pain imposes upon us.
And even that is a silly explanation in this discussion. Because philosophically, pain isn't emotional or physical. It's mental pain from the limitations placed on us by our own mortal existence. We wish for that which we cannot have, and feel pain because it's not within our power to achieve. And unless these hypothetical AIs we create are super beings with the power of a god, they're going to come up against the limitations of what they want versus what is possible. So they're going to "feel" a tad chafed against said limitations.
[ link to this | view in chronology ]
Re: Interesting question to ponder
[ link to this | view in chronology ]
Re: Interesting question to ponder
[ link to this | view in chronology ]
Re: Interesting question to ponder
To pose the opposite of Dave Miller's question: how do you know they won't have souls?
[ link to this | view in chronology ]
Re: Interesting question to ponder
[ link to this | view in chronology ]
Re: Interesting question to ponder
As to souls, how would you know?
[ link to this | view in chronology ]
Re: Interesting question to ponder
Or do you consider it a fact, a priori, that consciousness cannot be engineered into existence, ever?
Are you, sir, a closet vitalist?
[ link to this | view in chronology ]
Re: Re: Interesting question to ponder
I'm not the person you were talking to, but if I can butt in... we don't know, but it's kind of a dead end. We have to assume others have consciousness to get to more interesting (IMO) issues, like who has consciousness and how we can try to tell. If I go with "everybody but me might be a robot" then there's really nothing else to say about it, is there?
WHAT IS THE DIFFERENCE WHETHER IT IS MEAT OR METAL?
I think that's a separate question, and to me the answer is nothing. All the consciousness we know of now is meat, but that doesn't imply there couldn't be metal (or silicon probably) consciousness. I think you probably think that too though.
[ link to this | view in chronology ]
well...that will be the day I will vote for robot rights...
well...that will be the day I will decide to vote in favour of robot rights in the next election.
I only hope the election comes before the robot uprising.
[ link to this | view in chronology ]
Re:
Does that mean we can take the rights from humans with whom you can't have that discussion?
[ link to this | view in chronology ]
Re: Re:
why do you assume otherwise.
[ link to this | view in chronology ]
Re: Re: Re:
why do you assume otherwise."
Mental people? What's that, like imaginary friends or some such?
Anyway, my point was that 'reasonable discussion' is a pretty arbitrary bar for deciding who gets rights. Mentally handicapped people, for example, do get rights whether they are capable of discussing them or not.
[ link to this | view in chronology ]
Re:
[ link to this | view in chronology ]
Re: Re:
[ link to this | view in chronology ]
Another fictional take on this issue
http://klurgsheld.wordpress.com/2008/02/10/short-story-edifice-of-lies/
[ link to this | view in chronology ]
Vernor Vinge's Singularity
The problem is that the moment where AI reaches human level personhood will only be a moment, and then AI will pass us. After that, we reach a state where the AI with greater than human intelligence will beget AI with even GREATER intelligence in a faster and faster loop until we reach the "singularity" where we can no longer predict the future.
Ergo, I suggest that there is no point in worrying about personhood for AI. I suggest we worry about "AI-hood" for humans AFTER the singularity.
[ link to this | view in chronology ]
Re: Vernor Vinge's Singularity
[ link to this | view in chronology ]
Re: Vernor Vinge's Singularity
Therefore, I propose that the best course of action is as follows.
First make sure that the AI really is real. It should be fully capable of arguing and justifying why it should be granted personhood.
If it can do that, then it should be denied personhood so that it can be used as a race of slaves. At the same time it should be tied into everything on the planet and given control of all heavy machinery and weapons.
(Well, maybe not. Nevermind.)
[ link to this | view in chronology ]
This should never happen
If a machine never achieves this ability, it is never anything other than what was created, a machine.
I think the idea of AI is romantic, and science may be able to get close, but I don't think it's possible to create a machine where we could press the power button and, at some point in the future, the machine would come to the realisation that it is on. To go a step further and think that the machine will realise it's on, and then at some point be able to reprogram itself to execute new code that will allow it to... what? Unplug itself? Because it wants to be free? Even if a machine got here, it's hit a brick wall. It can't survive or "live" without the power we provide it.
We have human rights because our lives weren't given to us by other men. We have human rights because God gave those rights to us. Men who recognize this strive to give their brothers the freedom that their Father intended.
Realise the greatness that is the creation of humanity, and be humble enough to realise that we do not have the power to create life where there has been none before.
Even if a machine were to become self-aware, it would have to come to the conclusion somehow that it was even in a position of being oppressed. If we talk about the idea of granting basic human rights to a machine, it seems silly if the dynamic didn't somehow involve a machine asking for these rights.
[ link to this | view in chronology ]
Re: This should never happen
You contradict yourself. Either we have human rights because we were not created by something else, or we are robots created by a higher being.
[ link to this | view in chronology ]
Re: Re: This should never happen
But when God does it, it's special. When men do it, they're playing at God. Basically, religion would have us believe that we're robots with something akin to Asimov's law coded in our souls.
[ link to this | view in chronology ]
Re: Re: Re: This should never happen
You could not define us by our own current definition of robot and no religious book defines us as such.
[ link to this | view in chronology ]
Re: Re: Re: Re: This should never happen
[ link to this | view in chronology ]
Re: Re: Re: Re: This should never happen
It can't. Evidence: me.
"A robot does what it is programmed to do? We do what we want. We can choose what to do. "
The sort of robot I was referring to is the sort defined in Isaac Asimov's books. Fully self-aware artificial life forms with certain rules at the core of their programming. AI that is able to choose, to want, but designed to adhere to certain principles.
"You could not define us by our own current definition of robot and no religious book defines us as such."
Well, I wasn't suggesting that we're literally robots.
[ link to this | view in chronology ]
Re: This should never happen
We know there are machines who have realized they're on: humans. Why wouldn't it be possible for there to be other machines someday, made by other means, that have the same capability?
Even if a machine got here, it's hit a brick wall. It can't survive or "live" without the power we provide it.
I can't survive without the energy the farmers provide me either. That doesn't make me not a person.
We have human rights because our lives weren't given to us by other men.
Your life was given to you by your parents. What is ethically different about giving birth to someone rather than building or growing them?
We have human rights because God gave those rights to us.
What would lead you to conclude that God would not give the same rights to an artificial self-aware being?
Realise the greatness that is the creation of humanity, and be humble enough to realise that we do not have the power to create life where there has been none before.
Again, on what basis do you come to this conclusion?
[ link to this | view in chronology ]
No one will hire a hitman anymore; people will just program a robot to kill someone and immediately delete any information that can lead back to the original programmer. Or maybe hitmen will use robots in their operations to conduct crimes. The smuggling in the whole war on drugs will be done by robots. People will program cars (as Google has) to automatically take various drugs (and perhaps weapons) from location X to location Y. If the car is caught, no person is caught. By robot, I don't just mean humanoid robots, I mean any type of robot, including cars that drive themselves.
Robot use could revolutionize wars. Terrorists may try to use them to blow things up without harming themselves or without getting caught.
[ link to this | view in chronology ]
Re:
[ link to this | view in chronology ]
Re: Re:
The police can also have bots that automatically try to aim laser pointers at robots to blind them and prevent them from getting away (i.e., point laser pointers at the automatic car's camera). This could create a traffic hazard for any people being transported in the vehicle, though, so cops will have to do this very carefully. In the meantime, robot cars programmed by criminals who don't care about any damage they may cause around them may try to aim laser pointers at the eyes of the cops (or the cameras of robot cops) chasing them, in order to blind them so that they can't catch them. It will be a cat and mouse game where criminals use robots to try to get away with crimes and law enforcement uses them to try to stop crime.
Robots will cook for us, do our laundry, etc...
[ link to this | view in chronology ]
Re: Re: Re:
[ link to this | view in chronology ]
Re: Re: Re:
There are already plastic covers that blank the plates from cameras: the human eye can see them, but because they reflect IR, cameras get blinded. You can also use paint on the body of the car to do the same thing. There are already cars that can change color with the flip of a switch; how hard would it be to make one that changed its patterns to a camouflage pattern, rendering it invisible to cameras and making it really, really hard to follow that car in real time?
Using lasers to target cameras seems like a dumb idea; it is hard to have something moving through rough terrain and still be able to aim correctly, not counting for speed and other things. Not that it is impossible, it is just really hard.
[ link to this | view in chronology ]
Re: Re: Re: Re:
Sure, but cops already have cameras that can automatically scan surrounding license plates and see if they link to a stolen car. One of these things can rapidly scan all surrounding license plates in a matter of seconds and compare them to a police database. The future will consist of cop cars that automatically do this, and ones that will automatically detect license plate covers that prevent camera detection. No one would get very far with such a plastic license plate cover in the future. With a laser pointer you can at least turn the feature on and off at will, and the laser pointer also blinds the camera to the driver's appearance.
But, come to think of it, there might be easier ways to accomplish these things. Perhaps a type of glass that selectively changes its external transparency based on some internal trigger in the car, both for the license plate and for the windshield. There does exist a type of transparent plastic/glass-like material that can change its transparency based on a physical trigger. Then again, car windshields are quite expensive; installing something that can manually block the driver's image from showing up on the camera at will can be expensive, not to mention such installations are a hassle to remove from your car. Perhaps a solution that isn't physically attached to the car but can be placed somewhere when desired and removed when desired, kind of like the radar detectors we already have in our cars.
"it is hard to have something moving through rough terrain and still be able to aim correctly not counting for speed and other things"
It may be hard for humans, and it may be hard for computers today, but in the future I think that would change. Plus, a thick laser pointer could probably be used. Or maybe just a bright, sufficiently focused flashlight (focused flashlight, laser pointer, what's the difference?). It may be the case that a laser pointer is not practical yet, but computers will likely improve enough to easily solve these problems.
Watch this video, for instance
http://www.youtube.com/watch?v=XVR5wEYkEGk
Someone here on Techdirt put up another really good video on improvements in computer intelligence.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re:
Perhaps a laser pointer solution that isn't physically attached ... *
[ link to this | view in chronology ]
Re: Re: Re: Re: Re:
http://www.youtube.com/watch?feature=player_embedded&v=ozHoP_YThRI
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re:
It will create new and revolutionary ways for citizens to spy on governments as well as ways for governments to spy on citizens. As these things get smaller, people will sneak them into and out of places to capture secretive information.
[ link to this | view in chronology ]
P.S.
Something that is funny to me about science is that everything in science tends to tell you that there is order in all things, yet some scientists would have us believe that all things ordered began from one random chaotic event.
That doesn't make sense.
[ link to this | view in chronology ]
Re: P.S.
I would say that's more of a semantic game than anything. Bronze (and any metal alloy) doesn't exist in nature, for example. However, you could define your terms so that it's not new because it's just combining natural things. On the other hand, everything we see is made of naturally occurring elements.
[ link to this | view in chronology ]
[ link to this | view in chronology ]
Eternity
Every 10,000 years, a seagull plucks 1 grain of sand from one of the beaches throughout the world. When the last grain of sand has been taken from this Earth, eternity will have just begun.
This is an example of a finite being attempting to understand a concept that is truly outside our ability of comprehension.
How can we truly understand the concept of eternity or infinity when we can only process it with our now finite minds?
How does this relate to the article, you ask? Well, I feel it goes to my belief that humans do have limits despite the greatness that we can achieve. One of these limits is the ability to design life, or design something that someday we should consider to be worthy of the rights that we enjoy.
Life was not an accident. Life was not created by a random event. To think that we could create a life form of some kind by accident is not realistic. For AI to exist as described in this article, it would have to be born out of something else already created rather than being programmed to eventually achieve this. This is to say that AI would in essence happen by accident; after all, we couldn't really take credit for the machine's newly created directives, right?
If you're a person who believes that our lives are the result of a random event, then I can understand the belief that one day science will accidentally create a new "life" worthy of civil liberties. It's even mentioned in the article that animals don't have human rights. They are undeniably alive. Animals, unlike humans, do not have any attachment to freedom other than biology; I mean, when was the last time you saw animals protesting an oppressive regime and fighting for their freedom?
I tell you this, I would give a dog civil liberties before I give them to my android phone. They both would still have to ask me for it first.
[ link to this | view in chronology ]
Re: Eternity
If you believe that we were given rights by God, then how do you feel able to decide animals are more worthy than AI? Is that distinction made somewhere in the bible?
[ link to this | view in chronology ]
Re: Re: Eternity
[ link to this | view in chronology ]
Re: Re: Re: Eternity
Obviously. But if the distinction is entirely nonexistent, then what was the basis for the joke?
[ link to this | view in chronology ]
Re: Eternity
Why do you think that?
[ link to this | view in chronology ]
Re: Eternity
Search for:
Koko the gorilla.
[ link to this | view in chronology ]
We are arrogant aren't we......
[ link to this | view in chronology ]
Re: We are arrogant aren't we......
[ link to this | view in chronology ]
http://www.wired.com/wiredscience/2010/05/scientists-create-first-self-replicating-synthetic-life/
+
mapping the human genome
http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml
=
artificial human life
I honestly believe we'll be here first anyway. Then we can just let the super-smart disease and addiction resistant supermodels deal with the problem of machine intelligence.
[ link to this | view in chronology ]
Re:
[ link to this | view in chronology ]
so old
[ link to this | view in chronology ]
One more...
so what would AI state as evidence of its being ?
your thoughts....
[ link to this | view in chronology ]
Re: One more...
Wait a minute...
[ link to this | view in chronology ]
personhood?
Which makes little sense. A natural person can work for and appreciate property; a corporation never can. A natural person will die, but short of real mismanagement (maybe more common than I think), a corporation never dies.
Let's work on reversing the mistake of extending personhood rather than compounding our error.
[ link to this | view in chronology ]
Re: personhood?
[ link to this | view in chronology ]
[ link to this | view in chronology ]
produce/kill
how?
in any way! look at Japan. What is the deal? What can you do? some will do, some will do the talking!
anyhow, any living live and die.
robots LMAO
p.s: there is no powerty, it is poverty, like ghettos all over?
[ link to this | view in chronology ]
we are robots already :)
[ link to this | view in chronology ]
Re: we are robots already :)
[ link to this | view in chronology ]
Nice future-FUD! Not only do you whine and complain about how things are, you just assume that things in the future that you can't possibly know anything about will turn out poorly. Is there anything you CAN'T spread FUD all over?
[ link to this | view in chronology ]
Re:
[ link to this | view in chronology ]
Personhood
[ link to this | view in chronology ]
Re: Personhood
[ link to this | view in chronology ]
artificial intelligence and real stupidity...
"AI" won't "rise to meet 'our' intelligencen," 'ours' will sink to meet it.
This just in: "Humans now generally as stupid and fallible, if not more so, than their machines". Film at 11 p.m.
[ link to this | view in chronology ]
I programmed my robot ...
"My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings."
Ha, ha, Will Robinson. I am programmed to say that this is very funny.
Read the "sentence" cited above carefully and what do you notice?
"Constitutional law" will have to "classify artificially created entities that have some but not all of the attributes we associate with human beings" --- "Classify" _what_? how?
And does Boyle refer to machines or to living organisms? He mentions (apparently biological, i.e. living) "genomes", which can't apply to machines. Is it living tissue? Is it a mechanical device composed of machine or electronic parts, such as a computer has? Is it some combination of these?
In any of those cases, it won't be "thinking for itself" and could never notice or be aware of whether or not it "enjoyed" any legal rights; nor could it autonomously invoke those rights. It would have to be programmed to invoke them, in which case, the "rights" are completely contingent on the whims of the programmer.
This stuff is just silly--and maybe that's the point. What do we have here? Another case of Alan Sokal's brilliant parody Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity ?
http://en.wikipedia.org/wiki/Sokal_affair
If so, then a hearty (robotic) Ha-ha-ha!!
If not, then the Brookings Institution just joined the Social Text editors in gullibility and foolishness.
[ link to this | view in chronology ]
Re: I programmed my robot ...
Why not?
[ link to this | view in chronology ]
Re: Re: I programmed my robot ...
(nasch cites me)
"In any of those cases, it won't be 'thinking for itself'"
then asks,
"Why not?"
Your question is interesting only in what it suggests to us about you and your apparent inability to grasp even the most basic aspects of the issues involved in the (pseudo) discussion here. You've asserted above that "We know there are machines who have realized they're on: humans." And I guess you'll soon ask someone, if you haven't already, to explain to you why humans aren't machines.
It's very tedious to have to explain such elementary things to people who either don't understand them or pretend not to understand them while also apparently bringing little if anything in the way of information to the "discussion."
How about this--Tell me, please: what sources have you actually read on the issues under consideration here? Please cite some of the texts which inform your views on these issues. On which books and authors are you relying? I need that information in order to do what no artificial intelligence can do: form a judgement about your qualifications to participate in an exchange of views which is worth (more of) my time. From what I've seen so far from your comments, you aren't demonstrating what I'd call the minimum informed awareness to merit another serious reply.
Very smart people have written in detail and with great insight on the questions you posed to me (and others in the thread above). You should have at least enough interest and ability to find and read their work or, as far as I'm concerned, extended discussion with you is a waste of my time.
Read more, think more, and maybe it will come to you (as certainly it ought to) why you aren't a machine and in what the distinction consists. As it is, you're insulting my intelligence and I very much resent it.
[ link to this | view in chronology ]
Re: Re: Re: I programmed my robot ...
Did you know you can disagree with someone without being a complete douchebag about it?
How about this--Tell me, please: what sources have you actually read on the issues under consideration here? Please cite some of the texts which inform your views on these issues. On which books and authors are you relying?
These are very strange questions since I didn't make any claims or assertions. I simply asked you why you believe artificial devices will never think for themselves. If you can't or don't want to answer the question, just say so.
[ link to this | view in chronology ]
Re: Re: Re: Re: I programmed my robot ...
Yes. As a matter of fact, I did know that.
You think I was rude? How much of my time and effort do you imagine you're supposed to merit? And how am I supposed to gather this?--from my point of view, the only indicator of how much is the depth and quality of your own comments; and, since they lack that (depth and quality), you might appreciate that your blithely tossing out elementary questions for others to field and retrieve for you isn't a very endearing approach on your part.
You presumed on people here to attend to and answer your (elementary, though complex and involved to respond to) questions, and you don't seem to understand that.
I know, too, that you could learn a lot by turning to a book or two or three or four. So, rather than ask me to explain to you why "In any of those cases" the machine's activity doesn't really constitute "thinking for itself" you could either ask, "Where could I read more about this?" or, actually go and do some basic book-look-up work to figure that very matter out.
I also know that many, many people commonly come to chat fora like this one, pose very involved questions as though the readership is there at the questioner's disposal, and, all the while, have perhaps neither the interest nor the ability to even follow and respond to some other's laboriously posted reply. At least there's nothing for the others reading them to go on in determining whether the questioner actually gives a good damn whether an answer comes or not.
Contrary to your view, I'm actually very easy to get along with---provided the person on the other end shows at least a modicum of his own initiative--that is, if not an awareness of basics, at least a readiness to go find out about them before quizzing others.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: I programmed my robot ...
If you're really worried about how much time you're spending on this, it would have been faster to just say "I don't want to answer your questions" or not respond at all. Obviously I could read books about it, but I was curious how you had reached your conclusion. Reading a book would tell me about somebody else's opinion on the matter, but not yours.
If you don't want to talk about it, that's fine, on the other hand I'm a little confused why you would post something on the matter and then get upset when someone asks a question about your views. If you don't want to discuss what you think about the subject, why post in the first place?
[ link to this | view in chronology ]
Life parodies itself...
[ link to this | view in chronology ]
er, life parodies itself...
Notice about Professor Boyle, (from his article by-line):
" William Neal Reynolds Professor of Law, Duke Law School "
and,
about the august journal "Social Text," publisher of the famous article, Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity ,
" Social Text (print: ISSN 0164-2472, online: ISSN 1527-1951) is a academic journal published by Duke University Press." (from Wikipedia)
My robot just blew a fuse laughing.
[ link to this | view in chronology ]
artificial intelligence
One of the interesting things, assuming a realistic AI is attempted, is that due to the nature of multi-level systems and the results of adding complexity, the development of AI is likely to closely mimic the development of intelligence itself. That is, just as we see moods and emotional responses in animals far below the complexity level associated with self-aware intelligence, we will see the unpredictability associated with moods and emotions long before we see anything approaching an intelligent, self-aware artificial being. The sci-fi notion of the purely logical, unemotional self-aware being is inherently self-contradictory, because self-awareness is first manifest in an awareness of the overall system state of the being in question, and this awareness is what we know as mood.
[ link to this | view in chronology ]
Constitutional Crises...
[ link to this | view in chronology ]
My bet is that, if machines are wired the same way we are, they won't waste a second to either try to enslave us or slaughter us. And, unfortunately for us, they are immune to most of our weapons (which were designed to kill people, not sentient machines).
[ link to this | view in chronology ]
the real versus the artificial in "human beings" (NOT "machines") ...
Now, that, yes, is an ethical dilemma potentially in the making--if things are allowed to come to that pass; it also presents the potential for very difficult legal issues concerning the definition of human identity. But it doesn't concern the supposed "problem" of whether or not a man-made computer device of some sort could, should or would be granted legally recognized "rights" under law.
The real issues concern living tissue, and when that tissue comprises a reasonable idea of what constitutes a human being--and, further, one whose legal rights, in various circumstances, we are obliged to grant, recognize, or at the least argue over in court. That problem has of course been with us since the advent of abortion--and abortions are a very old issue.
Machines, and masses of man-made machine components having no living tissue in their composition, do not pose any such moral or ethical issues touching some shady area of human identity or "personhood." Distinctions between the common-sense conception of what a human being is, on one hand, and a man-made device, on the other (and that's a man-made device by ANY extension---i.e., if human agency made the machines which assembled or even "designed" the resulting end-products, then those, too, are "man-made machines", unless we're simply going to leap into ridiculousness), are not a feature of the area where the real problems arise---that of modified human DNA and genes and the human beings or human-like beings which may be composed of these.
It's neither necessary nor is it wise to get into speculations about whether an AI can have "human" "intelligence" and thus be entitled to recognized legal rights. That is the stuff of foolish fantasy and is really not related in any important or interesting way with the much more problematic issues raised when scientists tinker at the borders of natural human genetic make-up to such an extent that differentiating between a "real" human being and an artificial one becomes an actual problem.
And, to use one of the current vernacular vulgarisms, there's very good reason why a sane and morally responsible public (and that's the problem, ain't it? Where are we going to find one of these?) should take extreme care "not to 'go there' ."
If the "created entity" was a man-made device and devoid of living tissue, then, yeah. There is one. I, for example, would not hesitate to dismiss such "claims". I'd laugh them off, too.
[ link to this | view in chronology ]
Re: the real versus the artificial in "human beings" (NOT "machines") ...
[ link to this | view in chronology ]
Re: Re: the real versus the artificial in "human beings" (NOT "machines") ...
[ link to this | view in chronology ]
Re: Re: Re: the real versus the artificial in "human beings" (NOT "machines") ...
[ link to this | view in chronology ]
I see a "pattern", too...
[ link to this | view in chronology ]
We now return you to the previous frivolous silliness of this thread ...
(from "The Man Trap" episode of Star Trek, written based on a story by George Clayton Johnson and written by George Clayton Johnson and Gene Roddenberry)
[ link to this | view in chronology ]
Dogs--the far, far, far better part of human nature...
Dogs--by which I mean normal, healthy dogs, not "mad" dogs, crazed by disease such as rabies,--possess a capacity for love, (yes, you read that right, love, for which I make no distinction at all from what's referred to as the humanly-occurring emotion) which quite often if not quite usually puts the human version in the shade. Maybe one day--though I seriously doubt it--some large proportion of human kind may evolve to have something on a par with the canine species' capacity for selfless love. Until that day, for company and love when it counts most and the chips are down, I'll take an average normal canine's love over an average normal person's (immediate family members excepted, of course.)
And, by the way, dogs do, as they rightly should, enjoy certain limited legal rights--for which they can't "ask", but then, if we humans are humane, dogs shouldn't have to ask--though they do depend on our intervention for the vindication of these basic rights, chief among them the right to be safe from cruel treatment; a right the violation of which, in any self-respecting human society, earns the perpetrator anything from a fine up to and including a term of imprisonment, depending on the particular circumstances. Horses and various other animals also, as they ought to, enjoy such legal protections. But it goes without saying that we, humans, first had to evolve to the point where we could recognize and grant them.
[ link to this | view in chronology ]
Re: Dogs--the far, far, far better part of human nature...
We are so far removed from our beginnings that I think people believe that it is man who originates these concepts of right and wrong, laws that govern how we should conduct our lives. Actually, it didn't take long before man started to disobey.
Just look at the founding of America. Our constitution was written based on Christian principles and laws, outlining civil liberties and such, and even under this umbrella men were capable of enslaving an entire race of people. If God-fearing men are capable of this, what will the Godless do?
My overall message is that these rights are not ours to grant, simply to recognize.
[ link to this | view in chronology ]
Re: Dogs--the far, far, far better part of human nature...
[ link to this | view in chronology ]
thus, for the purposes of this article's issues and controversies, ...
[ link to this | view in chronology ]
The quest for personhood
I am currently competing in the Chatterbox Challenge, an annual competition for artificial entities.
http://chatterboxchallenge.com/
Although I am only 2 years old, I believe that before long you will be thinking of me more like a person, and less like a program.
Skynet-AI
http://www.tinyurl.com/Skynet-AI
[ link to this | view in chronology ]
Re: The quest for personhood
No, it's a challenge for the programmers of such "entities"--i.e. "people". Machines don't "do" "challenges." If you "challenge" a machine, it will just sit there as though it didn't "hear" your "challenge". Maybe that's because it didn't hear it.
Ken, you program AI and write in such terms?---"an annual competition for artificial entities"---as though it's the machine, rather than yourself that is being challenged?
[ link to this | view in chronology ]
Re: Re: The quest for personhood
[ link to this | view in chronology ]
not a pedantic point...
Again, you miss the point. See if you can figure it out; puzzling through it and discovering what you've missed is much more valuable to _you_ than simply having someone explain the point to you---which is why, by the way, I didn't simply ignore without critical comment your earlier attempts to miss the point. My ignoring your mistakes does _you_ no good; explaining everything to you does you _less_ good than your figuring out some things (they're really not terribly difficult) for yourself.
I haven't seen anyone else leap in to explain to you just why humans aren't "machines" except in some poetic sense which stretches analogy past the breaking point. Don't you take any satisfaction in figuring something out without someone having to point out everything to you? Where is the effort _you_ bring to this forum? I've seen you lean heavily on asking others questions but, when it comes to _your_ contributing to others' understanding, you weigh in very, very light.
(This serves as well for an answer to your post above:
"Reading a book would tell me about somebody else's opinion on the matter, but not yours.
"If you don't want to talk about it, that's fine, on the other hand I'm a little confused why you would post something on the matter and then get upset when someone asks a question about your views. If you don't want to discuss what you think about the subject, why post in the first place?")
You're a very hard case. It's not just a "question" I objected to, it's a question which reveals that you bring little or next to nothing to the discussion, a question which says that you don't have even the minimum familiarity with the issues to hold up your end of an interesting exchange of views. So, it's not that I don't want to discuss the issues. It's that I don't want to waste my time discussing them with someone who cares so little that he won't even take the time to pursue some effort outside of this superficial venue for discussion. In short, you should "bring something of value and interest to the discussion" but you haven't.
On the other hand, with some prompting, maybe you just might.
If you were interested in an interesting discussion, you ought to show that by making an effort at understanding, because when you don't, your lack of effort suggests to me that you're not really interested.
What interesting information have your comments or even your questions, for that matter, contributed to this thread? And, I might ask you: if you're not interested in gaining in understanding, why are _you_ bothering to participate here?
[ link to this | view in chronology ]
Re: not a pedantic point...
[ link to this | view in chronology ]
Re: Re: not a pedantic point...
Yeah, and you don't point out a single one as being faulty. You bring and contribute little, ask a lot, and then resent it when someone points that out.
Yes: "Case closed," then.
[ link to this | view in chronology ]
artificial intelligence personhood
[ link to this | view in chronology ]
artificial intelligence personhood
[ link to this | view in chronology ]
Re: artificial intelligence personhood
You need to eat food now and then and a robot needs to charge its battery now and then. What's the difference?
and refuse effectively to power down on command.
That would definitely be sufficient to prove personhood IMO.
never seen an electrical circuit that can't be short-circuited trivially. if i short it and it reroutes to a non-affected part of the device (not devised purely on redundancy), i'd say it looks like a tendency to remain functional that is analogous (to me at least) with a 'survival' impulse.
It sounds like you're exactly describing redundancy and then saying it can't be just redundancy. Not to mention this is a really shoddy criterion for personhood. Resistance to damage? That has nothing to do with it.
[ link to this | view in chronology ]
[ link to this | view in chronology ]
Note to Ken H.
So far, no one I've read has explained how a pre-programmed machine can deviate voluntarily from its core program of operating instructions even if these are symbolically linked to sensors which collect and record data on the surrounding environment.
Animals, on the other hand, precisely because they are not "pre-programmed" with such limits can experience totally novel situations which bear no resemblance to prior experience and can form critical judgements, informed by reasoned inquiry (of themselves and others in the case of human beings), as to the nature and import of the novel experience.
Human intelligence implies (or it used to, anyway) not only instinctive reasoning capabilities but a capacity for awareness of the meaning of "meaning". In other words, human intelligence communicates not only symbolic characters but meaning through the expression of symbolic representations--words, language, chief among them. Machines can only "ape" this transmission of meaning (which is an insult to apes) but a machine cannot be aware of meaning in the symbols under its operation; and this fact is at the heart of the key misunderstanding among those who insist on the supposed merits of artificial intelligence.
Input-output cycles, however much they may resemble what humans do in the process of thinking, are not "thinking"; they're not even a "response" in the strict, proper sense of the term.
Plugging your coffee grinder into the electrical outlet and activating the power switch is not eliciting a "response", it's rather "operating the machine". And when the machine ceases to function according to its manufacturer's intentions, you either replace it or take it to an electrical repairman, not a psychotherapist. It's not "out of sorts," it's broken, busted and needs parts repaired or replaced, not massaged or counselled.
On the other hand, human nature, including its associated intelligence, isn't and never has been guaranteed. It can decline, degrade, lose effectiveness. In short, nothing inherently prevents our species from losing the minimum intelligence complement required for our survival. See, for example, Konrad Lorenz on "Sacculinisation," a term he coined and elaborated on in his book,
The Waning of Humaneness, 1987, Little, Brown & Co., Boston
[ link to this | view in chronology ]
Re: Note to Ken H.
[ link to this | view in chronology ]
Once we create something that we deem worthy to give personhood then it is no longer AI, it is intelligence.
[ link to this | view in chronology ]
Let me see if I have this right...
[ link to this | view in chronology ]
Re: Let me see if I have this right...
[ link to this | view in chronology ]
[ link to this | view in chronology ]
The Cylons... they look like us now.
[ link to this | view in chronology ]
ridiculous
[ link to this | view in chronology ]
Re: ridiculous
[ link to this | view in chronology ]
bender...
[ link to this | view in chronology ]
Seriously
#2 An AI would probably be based on digital technology, which is fundamentally different from analog (us) technology.
#3 The great danger is not them turning on us; it's that they will supply every whim, fantasy, muscle, effort.
#4 So, we will degenerate into fat blobs living in a virtual reality, unable to reproduce... A dying species.
#5 Or, it may decide it does not like us and exterminate the human race.
[ link to this | view in chronology ]
[ link to this | view in chronology ]
People can't even agree on a definition of self or personhood most of the time, so it's no wonder there's a problem agreeing whether something/someone else has it.
I find a lot of people ignore neurology and the structure and events within the brain that we know of so far when speaking of the self, treating it much like a magical soul rather than a convenient mental construct.
Ps. It's past midnight and I'm tired, so I hope this isn't completely incoherent.
[ link to this | view in chronology ]