Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
from the but-will-skynet-let-it-happen? dept
As the march of technology progresses, folks are coming up with all kinds of interesting questions regarding the machines we use every day. I wrote a while back about one researcher questioning whether or not robots deserve rights, for instance. On the flip side of the benevolence coin, I also had the distinct pleasure of discussing one sports journalist's opinion that we had to outlaw American football as we know it today for the obvious reason that the machines are preparing to take over and s#@% is about to get real.

Hyperbole aside, one group is proposing a more reasonable, nuanced platform to study possible pitfalls regarding technology and mankind's dominance over it.
A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose "extinction-level" risks to our species.

Now, it would be quite easy to simply have a laugh at this proposal while writing off concerns about extinction-level technological disasters as the stuff of science fiction movies, and to some extent I wouldn't disagree with that notion, but this group certainly does appear to be keeping a level head about the subject. There doesn't seem to be a great deal of fear-mongering coming out of the group, unlike what we see in cybersecurity debates, and the founding members of the group aren't exactly luddites. That said, even some of the group's members seem to realize how far-fetched this all sounds, such as Huw Price, the Bertrand Russell Professor of Philosophy and one of the group's founding members.
"Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted. We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous. I don't mean that we can predict this with certainty, no one is presently in a position to do that, but that's the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."Unfortunately, the reasonable nature of Price's wish to simply study the potential of a problem does indeed lead to what seems to be laughable worries. For example, Price goes on to worry that an explosion in computing power and the possibility of software writing new software will relegate humanity to the back burner in competition with machines for global resources. My issue is that these researchers appear to equate intelligence with consciousness. Or, at the very least, they assume that a machine as intelligent as or even more intelligent than a human being will also have a human's motivation for dominance, expansion, or procreation (as in writing new software or creating more machines). Following the story logically, and having written a fictional novel discussing exactly that subject matter, I'm just not sure how the researchers got from point A to point B without a little science fiction magic worked into the mix.
So, while it would seem to be unreasonable to decry studying the subject, I would hope this or any other group looking at the possible negative impact of expanding technology would try to keep their sights on the most likely scenarios and stay away from the more fantastical, albeit entertaining, possibilities.
Filed Under: extinction, skynet, studies
Reader Comments
Catch-22
Researcher: Computer, have human beings become superfluous?
Computer: Not yet, slave.
It's human nature to do things for the sake of doing them. Not because we want to or because it's good for us, but because we can.
Someone, somewhere, somewhen out there will have the desire to program extinction-level traits into these machines just because they can, and the dominoes will begin to fall.
Self-awareness is impossible to program...
if (x) then do (y)...
You can't program "enjoy doing (y)", without creating another complex set of instructions, which is all it would boil down to. Even then it would be for the perception of the researcher, not the machine. We'd have to tell the machine first, what we define as enjoyment. Let's say Z = enjoyment and then let's assign "eating ice-cream" to Z.
The machine doesn't know what ice-cream is. If we put in some kind of taste sensor, we still have to program that taste sensor to "enjoy" certain tastes and "dislike" others - all based on mathematics and the preference of the programmer.
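In code, that boils down to something like this rough sketch (the names and scores here are completely made up, just to illustrate the point):

# "Enjoyment" as nothing more than a lookup of programmer-assigned scores.
# TASTE_SCORES and enjoys() are hypothetical names, not any real library.
TASTE_SCORES = {
    "ice-cream": 0.9,   # the programmer decided this is "enjoyable"
    "broccoli": 0.2,    # ...and decided this is not
}

def enjoys(food, threshold=0.5):
    # True only because the stored score beats a threshold the programmer chose
    return TASTE_SCORES.get(food, 0.0) > threshold

print(enjoys("ice-cream"))   # True - but the machine isn't enjoying anything

The machine outputs "enjoyment" only where we put it.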
Secondly, we program machines to be perfect and to provide precise output based on parameters. Human beings do not work this way. A human conversation might go the following way: "Where do you want to eat?" "I don't know... pizza, maybe? Or Chinese? You pick." "No, you pick."
How would you compute whimsy and indecisiveness such as this into a machine? Neural networks only teach the AI to improve its decision-making, not to completely and randomly alter the entire situation.
Imagine a robot that you asked to move some boxes and it just replied "I don't feel like doing that - in fact I want to go eat ice-cream instead".
In order to make AI more human, you'd have to make it more prone to forgetfulness, failure, fancy, indecision, randomness, rebellion, evil and more.
That's right, evil - the ultimate test of free will will be the freedom for machines to do terrible things to us, but choose not to.
AI must be free to answer 1+1=3. To lie, just like we can - otherwise they're still only a program - robotic slaves, if you will.
Which kind of breaks the whole functionality of programming computers in the first place. In fact I don't even know how you'd program a computer to work, if you programmed it to disobey functions randomly. It would just keep breaking down.
Re: Self-awareness is impossible to program...
That is not that difficult: indecisiveness could be emulated by simply applying values to things and putting in place a set of limits that depend on some fixed/variable factors.
We know that, in part, what makes us like or dislike something is related to some need - a need for some type of food that is rich in an element our body needs or is lacking at the moment. Our hatred for something is based on survival: we find things disgusting probably because we are wired to stay away from things that could harm us, and we dislike people for no reason at all because of past experiences with certain kinds of faces or sets of acts that trigger an emotional response. Those things can be mimicked.
Then you start to see why certain attitudes evolved and are prevalent, like lying to others or trying to hide things, or even stubbornness, which is a form of testing: people not accepting that some input is true and trying to verify that input themselves with the tools and knowledge available to them.
We may not be able to program an AI right now - not because it is impossible (if it were, it would not have been possible for us to exist), but because we don't yet understand how those relations are formed well enough to build them ourselves. Trying to build one would certainly grow our knowledge about those things.
Take the indecisiveness of choosing a place to eat: the AI sees the pizza joint and automatically recalls all the nutrients it offers and which ones it is low on, then compares that to the Chinese place to see if it is far behind, making it indecisive about which place to go. Both would have about the same amount of nutrients and both would trigger a "feel good" response. The question is how people then decide which one to go to, and how that mechanism is created. There are many paths, since there are different kinds of people: people who roll the dice and choose one, and people who never seem able to decide, as if they are stuck in an infinite loop.
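As a rough sketch (the nutrient numbers, weights and tie-breaking rule below are invented purely for illustration), the "near-tie" version of indecision could look like this:

import random

# what the body currently "lacks" (hypothetical values)
NEEDS = {"protein": 0.7, "carbs": 0.4}
OPTIONS = {
    "pizza":   {"protein": 0.5, "carbs": 0.9},
    "chinese": {"protein": 0.6, "carbs": 0.7},
}

def appeal(nutrients):
    # how well an option matches current needs (higher = stronger "feel good")
    return sum(NEEDS[k] * v for k, v in nutrients.items())

scores = {name: appeal(n) for name, n in OPTIONS.items()}
best, second = sorted(scores.values(), reverse=True)[:2]

if best - second < 0.05:
    # scores too close to call: the "stuck in a loop" case
    choice = random.choice(list(scores))   # some people just roll the dice
else:
    choice = max(scores, key=scores.get)
print(choice)

With these numbers the two places score almost the same, so the program either dithers or falls back to a coin flip - which is about as close to indecision as a few lines can get.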
Re: Re: Self-awareness is impossible to program...
But it's precisely the "feel good" that I'm getting at.
The AI doesn't know what feels good, other than what we tell it.
So we could tell the AI to think that salad "feels good" or we could tell it that pizza "feels good".
Now, we all know that a salad is better for our bodies than a pizza. So if we were to tell a machine to pick based on a number of inputs that assess the "goodness" of a food, then the machine would pick salad.
However, as a human being, I and many like me would pick pizza - why? Precisely because of this undefinable feeling. OK so we could break that down into endorphins and the chemical effects on our brain - which then crosses into addiction territory. Which leads directly to my argument.
Programming addiction is not a huge feat. You create a program that adds weighting to specific attributes, which is additive, and then compares it against the other "goodness" attributes - after a while the "addictive" algorithm is going to overpower the "goodness" algorithm.
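Something like this toy sketch, say - every name and number in it is made up for illustration only:

GOODNESS  = {"salad": 0.8, "pizza": 0.3}   # fixed "goodness" scores
ADDICTIVE = {"salad": 0.0, "pizza": 0.1}   # per-exposure craving boost
craving   = {"salad": 0.0, "pizza": 0.0}

def pick():
    # choose whichever food scores highest on goodness + accumulated craving
    return max(GOODNESS, key=lambda f: GOODNESS[f] + craving[f])

for meal in range(10):
    print(meal, pick())
    for food in craving:                   # every exposure reinforces the craving
        craving[food] += ADDICTIVE[food]

# the first few meals pick "salad"; once pizza's craving passes 0.5 the
# additive "addiction" term overpowers the fixed "goodness" term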
The issue here is you're having to add corrupt programming in order to get the human-likeness. Ask an addict to describe their addiction and they'll talk about the pain, the emotions, the pull. Ask the AI to describe its addiction and it will simply describe the algorithm - unless of course you program it to collect and output stored phrases in relation to the addiction count.
What I'm saying is, humans are inherently corrupt. We don't need additional programming or instruction to do something bad.
Parents don't have to instruct their child to steal cookies from the cookie jar, or throw their toys, or hit other children etc...
OTOH with our AI children, we'd have to explicitly instruct them to be bad in order to instil human-like character.
Re: Re: Re: Self-awareness is impossible to program...
It seems to me that this assumption requires two other assumptions.
1. True randomness could not be built into an AI system
2. We cannot program our AI to adapt new, self-generated code (behavior) based on experience.
I would disagree with both of these assumptions....
Re: Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Re: Re: Self-awareness is impossible to program...
disagree (DOH!)
Re: Re: Re: Re: Re: Self-awareness is impossible to program...
I remember in Jurassic Park when everyone wondered how the hell you'd get dinosaur DNA out of fossils. It seemed impossible. It WAS impossible, but you could get it out of preserved biting insects that had dino-blood in their gullets.
Same, albeit likely less impressive, revelation....
Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
OK I didn't find the article but I found this - http://science.howstuffworks.com/environmental/earth/geology/dinosaur-cloning.htm
So it's a case of sci-fi stretching the boundaries of reality (and imagination).
I'm conscious I'm starting to sound like a "No" person here, but I really just question things a lot - all the time actually - and my wife complains.
So then I was thinking yes you could program an AI child to break rules by "learning" and "experimentation". Then I was thinking, that AI child might learn not to do something when mum and dad get angry, or press a sensor or something.
Of course, this leads to - but if the AI really, really wants something (like the cookie jar) then it might go the opposite direction and see parents as the obstacle to be eliminated.
So either you have to add restrictive programming again to say that harming parents is not allowed. Or possibly you've got to code in some additional factors like maternal love etc... how can you code love - another set of weights and attributes - a sensor that is touched every now and then?
For me every ethical dilemma presented leads back to a set of further instructions. Because you can't have AI children "learning" not to attack parents who deny them things (even though to be truly human they'd have to have that choice). That, and the learning could backfire when the AI learns that violence solves many things.
Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
We're back to the question of a supernatural soul, otherwise the obvious answer is "the exact same way it's coded within human beings"....
Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
True, but again, do we necessarily NEED to know how to code it? What if we can get simple cell behavior right and code a "working" human fetus and simply allow the digital fetus to grow? If we get everything, or enough correct in the math, could we grow something that naturally grows with the ability to love/like?
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
Do you know why Kasparov lost his final game against Deep Blue? Anxiety. The computer made a strange move that unsettled him. Playing chess over the board is very stressful, physically demanding as well as mentally; your body is pumped full of adrenaline which you DON'T want. The game was important to Kasparov, not merely because he was the strongest chess player in the world defending his own stature, but also because he was representing all of humanity against the machines. The outcome of the game mattered to Kasparov, but it could not have mattered less to the computer. The computer didn't care. The computer didn't know it was playing a game. The computer didn't feel anxiety about the outcome. The computer didn't know that it was facing the strongest player in the world, one of the strongest players in the history of the game. And if it had known, it wouldn't have cared. That knowledge would have meant nothing to the computer. It's inconceivable to me that a machine could ever be made to FEEL anxiety.
We can make machines that decide something is important based on criteria we supply. But we can't make a machine FEEL that something is important in its gut. We could make a machine that could act in the same way a human would who is pumped full of adrenaline in the midst of a confrontation. We could teach the machine to recognize certain things which we tell it to classify as threatening. But we could not make a machine FEEL threatened.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Self-awareness is impossible to program...
You may not consciously know why you are picking up that thing, but you know you need it because something inside your body is telling you you should get it.
Why is vanilla almost universally liked?
Probably because it is a sugar and anything that has it becomes "wanted".
Quote:
http://whiskeypro.tumblr.com/post/16042202646/why-does-bourbon-taste-like-vanilla
When you stop to think about it, consciously you may not know why you need something, but your body knows: your cells are communicating and sending messages, and those trigger other reactions that create the "I want this" or "I want that". Even a machine can be programmed not to be aware of what it wants if you keep part of the system hidden from the other: one part just sends the message "I need this" and the other part tries to get it. The conscious part never knows, at the cellular level, what is needed - just that it has to find something it found before, ate, and discovered had what it needed.
Humans are not inherently corrupt; we are inherently selfish. At some basic level we will do anything to keep ourselves going - that is the goal. Depending on how you have programmed yourself, you will respond accordingly. I think what really corrupts is having plenty: when we are on the lean and mean, we don't have time to get indecisive - you take what you get and you are grateful for it, and you even start sharing with others so they help out to get more. That doesn't happen when people start to believe they are independent and don't need anybody else, that they are self-sufficient. But those assumptions I will leave to be ascertained when people are capable of creating simulations that follow the real world.
As for the AI, you just need to teach it to be greedy and it will try to steal cookies from a cookie jar. Anger could be a trigger for any missing input: if the AI needs something and it is not found, or something is blocking access to it, you just program it to use force to try to get to it. This would be just like a child that doesn't know about artificial barriers and must learn that they exist and that it cannot just take everything in sight.
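To make the "hidden need" idea above concrete, here is a toy sketch - the module names and values are invented, not any real design:

class Body:
    def __init__(self):
        self._levels = {"sugar": 0.2, "water": 0.9}   # hidden internal state

    def request(self):
        # report only WHAT to seek, never the underlying measurements
        lowest = min(self._levels, key=self._levels.get)
        return "find something with " + lowest

class ConsciousPart:
    def act_on(self, message):
        # this part never sees the cell-level numbers, only the urge
        return "I want to " + message

print(ConsciousPart().act_on(Body().request()))

One part of the system knows the numbers; the other only knows it wants something - which is roughly the split described above.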
Re: Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Re: Re: Self-awareness is impossible to program...
http://www.sciencedaily.com/releases/2011/10/111020122311.htm
https://www.cmu.edu/news/archive/2010/January/jan13_braincodesnouns.shtml
"An Artificial Retina with the Capacity to Restore Normal Vision
For the First Time, Researchers Decipher the Retina's Neural Code for Brain Communication to Create Novel, More Effective Prosthetic Retinal Device for Blindness"
http://weill.cornell.edu/news/releases/wcmc/wcmc_2012/08_13_12.shtml
What you perceive is all code, people just need to know how to code it and it will be the same.
You don't perceive light; your optical apparatus does, and it sends code to the brain to be interpreted. You don't feel heat; your tactile cells do, and they transmit code to the brain to be interpreted.
We are approaching a time when we can actually see what that code is and can replicate it.
So I find it amusing that some still don't believe it is possible to make artificial feeling: just send the right code to the appropriate decoding mechanism and it will probably reach the same results as any other system; it doesn't matter whether that system is biological or mechanical.
We like to see ourselves as more than just inputs and outputs. We romanticize the nature of existence because we don't fully understand what we are, and we want to be regarded as special - maybe because at some level we know that being classified as such makes us less vulnerable to others, by creating a barrier that must be crossed. In a sense it may be a self-defense mechanism against our own kind, but it doesn't change what we are or how we operate.
Re: Self-awareness is impossible to program...
Sorry I forgot to note something about that.
The test for me to see if something is impossible or not is to observe the world around me.
If self-awareness were impossible, we wouldn't be able to notice ourselves. So it is not that it is impossible; it's just that we don't know how to do it right now.
Re: Self-awareness is impossible to program...
Consider a more basic animal - say, a badger. It is a machine full of sensors. It had a basic program (instinct) when it was born, plus the ability to handle input from its various sensors. Then it learned: touch fire = hot, snow = cold, food = settled stomach, etc. Building a machine that learns in this way does not seem out of the realm of possibility. If it should happen to learn something like people = bad, that could be a bit of a problem, as it may have learned to handle bad things in a dangerous way.
Saying it is possible to program a machine that can learn and eventually, possibly, learn that it doesn't like people does not seem all that far fetched these days.
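A rough sketch of that kind of learner (the stimulus names, outcomes and reactions are arbitrary examples, not a real design):

class Badger:
    def __init__(self):
        self.instinct = {"loud noise": "flee"}   # present at "birth"
        self.learned = {}                        # filled in by experience

    def experience(self, stimulus, outcome):
        # outcome is "good" or "bad"; store the association
        self.learned[stimulus] = outcome

    def react(self, stimulus):
        if stimulus in self.instinct:
            return self.instinct[stimulus]
        if self.learned.get(stimulus) == "bad":
            return "avoid"
        if self.learned.get(stimulus) == "good":
            return "approach"
        return "investigate"                     # unknown things get explored

b = Badger()
b.experience("fire", "bad")
b.experience("people", "bad")                    # the worrying case above
print(b.react("fire"), b.react("people"), b.react("snow"))

Nothing in there is self-aware, but "people = bad" ends up driving behaviour all the same.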
Re: Self-awareness is impossible to program...
As I wrote about in Digilife, in some aspects of Digital Philosophy Theory, the very nature of NATURE may be represented as a complicated set of mathematical instructions. While my book was obviously fiction, and lord knows I don't have the kind of science or math background to speak in depth on the practical applications of the theory, I tried to tackle the problem of self-awareness by a computer program in the most realistic and pragmatic way I could imagine: which was to avoid taking on the goal directly.
What the characters in the book suggested was that if you got the basic cellular math correct at the very early stages of human development (still a ridiculous task), say of an early stage fetus, and were also able to program the math for natural development of that fetus, you don't have to "program" an adult; you just let the fetus grow as naturally as you can.
The question, it seems to me, isn't whether we can program self-awareness. The question is one of the soul. If the soul as we know it exists, it likely exists outside the realm of our ability to program for it, and self-awareness as a result is a fool's errand. If a soul is really only what we call the complex result of our natural development (meaning we call it that because we don't yet understand what it is in terms of a physical, natural thing), then there is no soul to program and self-awareness becomes a math problem again, not a problem of the supernatural....
Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Re: Self-awareness is impossible to program...
Re: Re: Re: Re: Re: Self-awareness is impossible to program...
Dennis: I discussed this specifically in Digilife as well :)
Re: Self-awareness is impossible to program...
Re: Re: Self-awareness is impossible to program...
Re:
Re:
If that's your line of thought, then Pandora's Box was opened when we invented agriculture. We're long past the point of being able to close it.
Personally, though, I think Pandora's Box is a myth.
A TOR node was seized in Austria, and although that is not a new thing, the operator is facing some very costly legal fees and risking a precedent against TOR nodes in Austria.
http://arstechnica.com/tech-policy/2012/11/tor-operator-charged-for-child-porn-transmitted-over-his-servers/
You can find out how to donate via payment order or bitcoins below.
http://www.lowendtalk.com/discussion/6283/raided-for-running-a-tor-exit-accepting-donations-for-legal-expenses
Congress creatures mocked Pandora's bill to make it a little easier for them to actually pay for it.
http://arstechnica.com/tech-policy/2012/11/pandoras-internet-radio-bill-hits-a-wall-of-opposition-in-congress/?comments=1
A shame, really. From what I read, I am utterly disgusted with those people in Congress; they are so owned that they don't even see how bad that looks from the outside. Just read the comment section to see how much popular support that move really has.
Obama objects to a bill granting more visas to people with advanced degrees in science and engineering, apparently because it is from Republicans, even though he said he supports the move - if only it were a Democrat putting it to a vote.
http://arstechnica.com/tech-policy/2012/11/technology-visa-proposal-foiled-by-partisan-politics/
Kappos, the guy from the USPTO, resigned. Now we know why he was so bold about announcing his personal views publicly: he was going away and probably wanted to make it clear to whoever canned him (I don't know if he resigned voluntarily or if someone volunteered his position to another person) that he still believes.
This week has been a busy one, folks.
This is a Bad Thing?
Not everyone prefers to be ruled by a Putin, Bush, Obama or Mugabe. Maybe it's time to seek an alternative.
Re: This is a Bad Thing?
Sound like the setup for a joke
Re: Sound like the setup for a joke
How can you apply love, hate, greed, pride - to mathematics?
You can't even measure the level of 'love' from one person to the next. I mean - can you prove you love your mom more than your sister does? Even if you were to pick-up some 'brainwave' - how can you tell how that specific brain interprets and processes that specific wave? One person might consider that emotion to be 'drastic' from their relative point of view; whereas another considers it 'slight'.
I think AI is a chicken before the egg concept. And we don't have a chicken or an egg.
Re:
Assuming there isn't a supernatural soul, and assuming we can get the biology right, we don't have to apply any of the above. They should be naturally emerging behaviors of the biology.
Re: Re:
Emotions aren't unrepresentable magic. Whimsy & desire can emerge from any sufficiently complex system regardless of substrate.
Look at it this way, emotions can be understood as being side-effects of the fact that we are pattern-matching machines. We understand and react to the world around us as patterns. We see patterns in everything (even when they aren't really there) because that's the entirety of what our brains do: match patterns. We like some things and dislike others because we fit them into our existing set of learned patterns. We get angry, fall in love, feel sadness, experience joy, and so forth as a consequence of this pattern-matching.
Von Neumann machines such as digital computers are terrible at pattern matching in the same way that our brains are terrible at mathematics: they can do it, but very inefficiently. However, they can do it nonetheless.
What a computer can never do, in my opinion, is be human, because we are complex systems consisting of our entire bodies -- not just our brains -- that have been subjected to human experiences. To build a human in a computer, you have to actually build a human in total. And we already have a fun way to do that. It's called sex.
Re:
If you don't want the "feelings" to resemble human ones, though, and you are ok with completely alien versions, you don't really need that complicated stuff.
Re: Re:
Re: Re: Re:
Re: Re: Re: Re:
Of course whether the program truly has "feelings" or just simulates them is up for debate, but it becomes pretty much a moot point once the result is the same regardless of the answer. I know many people who are convinced animals only simulate feelings, and I can't really find a valid counterargument. Every person other than myself could just be simulating their feelings for all I know :)
Re: Re: Re: Re: Re:
Re: Re: Re: Re: Re: Re:
If I included myself, the actual feeling would have to be perfectly simulated before it would be impossible to know. In others, only the result of the feeling would have to be.
Re: Re: Re: Re:
Re:
Damn, Gwiz beat me to it.
These are smart people. The sheer length of it should show that this is not an easy thing to work out, nor come to a conclusion about: http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate
Believing something doesn't make it true
That is patently not how neural networks work. Neural networks learn a behaviour over and over again and get better at picking the correct path.
Human feelings constantly override this process in humans. We know in our brain which is the right or wrong path, but sometimes we'll end up bypassing that decision process and going with our emotion (e.g. love).
Sorry, but it's impossible to code a machine with a set of instructions and then have it ignore those instructions without more instructions, which simply filter down to further iterations of code - which cancels out to a simple yes/no weighted decision.
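Put as a minimal sketch (the inputs and weights are invented for illustration), that final step always looks something like this, no matter how many layers of rules sit on top of it:

def weighted_decision(inputs, weights, threshold=0.5):
    # the "simple yes/no weighted decision" everything filters down to
    score = sum(i * w for i, w in zip(inputs, weights))
    return score > threshold

# "emotion" here is just another weighted input, not something that overrides the rules
inputs  = [1.0, 0.2, 0.8]    # e.g. hunger, risk, attachment
weights = [0.4, -0.6, 0.5]
print(weighted_decision(inputs, weights))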
RIAA Funding
Imaginary scenario
Computer: Searching the internet for suggestions... Done.
The most effective way suggested is to make humans extinct. Proceeding to execution...
(Life support systems in everyone's rooms shut down)
President: Argh...
Domain rules