When Will We Have To Grant Artificial Intelligence Personhood?
from the one-is-glad-to-be-of-service dept
James Boyle has a fascinating new paper up, which will act as something of an early warning over a legal issue that will undoubtedly become a much bigger one down the road: how we deal with the Constitutional question of "personhood" for artificial intelligence. He sets it up with two "science-fiction-like" examples, neither of which may really be that far-fetched. Part of the issue is that we, as a species, tend to be pretty bad at predicting rates of change in technology, especially when it's escalating quickly. And thus, it's hard to predict how some of these things will play out (well, without tending to get it really, really wrong). However, it is certainly not crazy to suggest that artificial intelligence will continue to improve, and it's quite likely that we'll have more "life-like" or "human-like" machines in the not-so-distant future. And, at some point, that's clearly going to raise some constitutional questions:

My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms -- computer-based intelligences, for example -- yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human -- such as the ability to use human language and to solve problems that, today, only humans can solve.
They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation that turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?

That link only takes you to the opening chapter of the paper, but from there you can download the full PDF, which is certainly thought-provoking. Of course, chances are that most folks will not really think through these issues -- at least not until they can no longer be avoided. And, of course, in those situations, it seems our historical precedent is to overreact (and overreact badly), without fully understanding what it is we're reacting to, or what the consequences (intended or unintended) will really be.
Filed Under: artificial intelligence, personhood, rights