Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct

from the but-will-skynet-let-it-happen? dept

As the march of technology progresses, folks are coming up with all kinds of interesting questions regarding the machines we use every day. I wrote a while back about one researcher questioning whether or not robots deserve rights, for instance. On the flip side of the benevolence coin, I also had the distinct pleasure of discussing one sports journalist's opinion that we had to outlaw American football as we know it today for the obvious reason that the machines are preparing to take over and s#@% is about to get real.

Hyperbole aside, one group is proposing a more reasonable, nuanced platform to study possible pitfalls regarding technology and mankind's dominance over it.
A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose "extinction-level" risks to our species.
Now, it would be quite easy to simply have a laugh at this proposal and write off concerns about extinction-level technological disasters as the stuff of science fiction movies – and to some extent I wouldn't disagree with that notion – but this group certainly does appear to be keeping a level head about the subject. There doesn't seem to be a great deal of fear-mongering coming out of the group, unlike what we see in cybersecurity debates, and the founding members aren't exactly Luddites. That said, even some of the group's members seem to realize how far-fetched this all sounds, such as Huw Price, the Bertrand Russell Professor of Philosophy and one of the group's founding members.
"Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted. We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous. I don't mean that we can predict this with certainty, no one is presently in a position to do that, but that's the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."
Unfortunately, the reasonable nature of Price's wish to simply study the potential for a problem does indeed lead to some seemingly laughable worries. For example, Price goes on to worry that an explosion in computing power, and the possibility of software writing new software, will relegate humanity to the back burner in a competition with machines for global resources. My issue is that these researchers appear to equate intelligence with consciousness. Or, at the very least, they assume that a machine as intelligent as or more intelligent than a human being will also have a human's motivation for dominance, expansion, or procreation (as in writing new software or creating more machines). Following the story logically, and having written a novel discussing exactly that subject matter, I'm just not sure how the researchers got from point A to point B without a little science fiction magic worked into the mix.

So, while it would seem to be unreasonable to decry studying the subject, I would hope this or any other group looking at the possible negative impact of expanding technology would try to keep their sights on the most likely scenarios and stay away from the more fantastical, albeit entertaining, possibilities.


Filed Under: extinction, skynet, studies


Reader Comments



  1. Jake, 30 Nov 2012 @ 12:19am

    Well, better to have it and not need it than the other way round, I suppose.


  2. Yogi, 30 Nov 2012 @ 12:49am

    Catch-22

    Obviously they will need a super-computer or AI to figure out all the implications of these new technologies:

    Researcher: Computer, have human beings become superfluous?
    Computer: Not yet, slave.


  3. Keii (profile), 30 Nov 2012 @ 1:44am

    Once we get to the point where we can create a machine that has self-awareness, Pandora's Box shall open and there's no way to close it.
    It's human nature to do things for the sake of doing them. Not because we want to or because it's good for us, but because we can.
    Someone, somewhere, somewhen out there will have the desire to program extinction-level traits into these machines just because they can, and the dominoes will begin to fall.


  4. Anonymous Coward, 30 Nov 2012 @ 3:05am

    First, people should propose a center to study sock puppets in politics; I believe those are more likely to lead to extinction-type events sooner rather than later.

    A TOR node was seized in Austria, and although that is not a new thing, the guy is facing some really costly legal fees and risking a precedent against TOR nodes in Austria.

    http://arstechnica.com/tech-policy/2012/11/tor-operator-charged-for-child-porn-transmitted-over-his-servers/

    You can find out how to donate via payment order or bitcoins below.
    http://www.lowendtalk.com/discussion/6283/raided-for-running-a-tor-exit-accepting-donations-for-legal-expenses

    Congress creatures mocked Pandora's bill, which would make it a little easier for the company to actually pay for the music it streams.
    http://arstechnica.com/tech-policy/2012/11/pandoras-internet-radio-bill-hits-a-wall-of-opposition-in-congress/?comments=1

    A shame, really. From what I read, I am utterly disgusted with those people in Congress; they are so owned that they don't even see how bad it looks from the outside. Just read the comment section to see how much popular support that move really has.

    Obama objects to a bill granting more visas to people with advanced degrees in science and engineering, apparently because it came from Republicans - he said he supports the move, if only it were a Democrat putting it to a vote.
    http://arstechnica.com/tech-policy/2012/11/technology-visa-proposal-foiled-by-partisan-politics/

    Kappos, the guy from the USPTO, resigned. Now we know why he was so bold about announcing his personal views publicly: he was going away, and he probably wanted to make it clear to whoever canned him (I don't know if he resigned voluntarily or if people volunteered his position to someone else) that he still believes.

    This week has been a busy one, folks.


  5. nospacesorspecialcharacters (profile), 30 Nov 2012 @ 3:33am

    Self-awareness is impossible to program...

    By their very nature, programs are sets of mathematical instructions.

    if (x) then do (y)...

    You can't program "enjoy doing (y)", without creating another complex set of instructions, which is all it would boil down to. Even then it would be for the perception of the researcher, not the machine. We'd have to tell the machine first, what we define as enjoyment. Let's say Z = enjoyment and then let's assign "eating ice-cream" to Z.


    Researcher: Do you enjoy doing (y)?
    AI: (z="eating ice-cream"); if (y = z); then: Yes.


    The machine doesn't know what ice-cream is. If we put in some kind of taste sensor, we still have to program that taste sensor to "enjoy" certain tastes and "dislike" others - all based on mathematics and the preference of the programmer.
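
    (To make my point concrete - a minimal Python sketch, with every name and value here invented by me:)

    # A toy "enjoyment" check: the machine experiences nothing; it just
    # compares its input against a value a human hard-coded as "enjoyable".
    ENJOYABLE = {"eating ice-cream"}  # the programmer's preference, not the machine's

    def enjoys(activity):
        # "Do you enjoy doing (y)?" reduces to a membership test.
        return activity in ENJOYABLE

    print(enjoys("eating ice-cream"))  # True
    print(enjoys("moving boxes"))      # False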

    Secondly, we program machines to be perfect and to provide precise output based on parameters. Human beings do not work this way. A human conversation might go the following way:


    Wife: Do you want lasagne or curry for dinner?
    Husband: Curry... wait, screw that let's go for a Chinese.
    (on the way to the restaurant, husband sees the Italian restaurant, remembers the great pizza he had there and suddenly decides that they should stop and eat there instead).


    How would you program whimsy and indecisiveness such as this into a machine? Neural networks only teach the AI to improve its decision-making, not to randomly alter the entire situation.

    Imagine a robot that you asked to move some boxes and it just replied "I don't feel like doing that - in fact I want to go eat ice-cream instead".

    In order to make AI more human, you'd have to make it more prone to forgetfulness, failure, fancy, indecision, randomness, rebellion, evil and more.

    That's right evil - the ultimate test of free will, will be the freedom for machines to do terrible things to us, but choose not to.

    AI must be free to answer 1+1=3. To lie, just like we can - otherwise they're still only a program - robotic slaves, if you will.

    Which kind of breaks the whole functionality of programming computers in the first place. In fact I don't even know how you'd program a computer to work, if you programmed it to disobey functions randomly. It would just keep breaking down.


  6. Anonymous Coward, 30 Nov 2012 @ 4:24am

    Re: Self-awareness is impossible to program...

    Quote:
    How would you program whimsy and indecisiveness such as this into a machine?


    That's not that difficult: indecisiveness could be emulated by simply applying values to things and setting limits that depend on some fixed/variable factors.

    We know that part of what makes us like or dislike something is related to some need - a need for some type of food, say, which is rich in some element our body needs or is lacking at the moment. Our hatred of something is based on survival: we probably find things disgusting because we are wired to stay away from things that could harm us, and we dislike people for no apparent reason because of past experiences with certain kinds of faces or sets of acts that trigger some emotional response. Those things can be mimicked.

    Then you start to see why certain attitudes evolved and are prevalent, like lying to others or trying to hide things, or even stubbornness, which is a form of testing - people not accepting that some input is true and trying to verify it themselves with the tools and knowledge available to them.

    We may not be able to program an AI right now - not because it is impossible (if it were, it would not have been possible for us to exist) but because we don't yet understand how those relations are formed well enough to build them ourselves. Trying to build one would certainly grow our knowledge of those things.

    Take the indecisiveness of choosing a place to eat: the AI sees the pizza joint, automatically recalls the nutrients it offers and which ones it is running low on, and compares that to the Chinese place to see if it falls far behind - making it indecisive about which place to go, since both would supply the same nutrients and both would trigger a "feel good" response. The question then is how people decide which one to go to, and how that mechanism is created. There are many paths, since there are different kinds of people: people who roll the dice and choose one, and people who never seem able to decide, as if stuck in an infinite loop.
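
    (If you wanted to emulate that in code, here is a minimal Python sketch - all the names and numbers are invented for illustration:)

    # Toy indecision: score each restaurant by how well it covers current
    # nutrient deficits; scores within a small margin leave the agent stuck.
    deficits = {"protein": 0.7, "carbs": 0.3, "fat": 0.5}  # what the agent is low on

    menus = {
        "pizza joint": {"protein": 0.4, "carbs": 0.9, "fat": 0.8},
        "chinese":     {"protein": 0.6, "carbs": 0.7, "fat": 0.5},
    }

    def appeal(menu):
        # The "feel good" response: how much a menu covers what the body lacks.
        return sum(deficits[n] * menu.get(n, 0) for n in deficits)

    scores = {name: appeal(menu) for name, menu in menus.items()}
    best, runner_up = sorted(scores.values(), reverse=True)[:2]

    if best - runner_up < 0.1:        # the limit below which neither option "wins"
        print("indecisive:", scores)  # roll the dice, or loop forever...
    else:
        print("go to", max(scores, key=scores.get))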


  7. Anonymous Coward, 30 Nov 2012 @ 4:30am

    Re: Self-awareness is impossible to program...

    Quote:
    Self-awareness is impossible to program...

    Sorry, I forgot to note something about that.
    My test for whether something is impossible is to observe the world around me.

    If self-awareness were impossible, we wouldn't be able to notice ourselves. So it's not that it's impossible; it's just that we don't know how to do it yet.


  8. Michael, 30 Nov 2012 @ 4:30am

    Re:

    You, however, will be safe in your tin foil hat.


  9. Michael, 30 Nov 2012 @ 4:39am

    Re: Self-awareness is impossible to program...

    You are assuming there is something other than information gathering that creates self-awareness. I am not going to debate you on that - people have been asking that question for thousands of years and have no answer. What we do know is that it now seems possible that self-awareness, or at least "life awareness" (say, that of basic animals), is simply the result of trial and error.

    Consider a more basic animal - say, a badger. It is a machine full of sensors. It had a basic program (instinct) when it was born and the ability to handle input from its various sensors. Then it learned: touch fire = hot, snow = cold, food = settled stomach, etc. Building a machine that learns in this way does not seem out of the realm of possibility. If it should happen to learn something like people = bad, that could be a bit of a problem, as it may have learned to handle bad things in a dangerous way.

    Saying it is possible to program a machine that can learn and eventually, possibly, learn that it doesn't like people does not seem all that far fetched these days.
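
    (A bare-bones Python sketch of that kind of trial-and-error learner - the actions and reward values are of course invented for illustration:)

    import random

    # The "badger": it adjusts its estimate of each action from the rewards
    # its sensors report; instinct supplies only the blank starting point.
    rewards = {"touch fire": -1.0, "eat food": 1.0, "approach people": -0.5}
    estimates = {action: 0.0 for action in rewards}  # no opinions at birth
    LEARNING_RATE = 0.2

    for _ in range(200):
        action = random.choice(list(rewards))  # explore the world
        outcome = rewards[action]              # sensor feedback: hot, cold, settled stomach
        estimates[action] += LEARNING_RATE * (outcome - estimates[action])

    # After enough experience: fire = bad, food = good, people = bad...
    print({action: round(value, 2) for action, value in estimates.items()})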


  10. nospacesorspecialcharacters (profile), 30 Nov 2012 @ 5:37am

    Re: Re: Self-awareness is impossible to program...


    Quote:
    the AI sees the pizza joint, automatically recalls the nutrients it offers and which ones it is running low on, and compares that to the Chinese place to see if it falls far behind - making it indecisive about which place to go, since both would supply the same nutrients and both would trigger a "feel good" response


    But it's precisely the "feel good" that I'm getting at.

    The AI doesn't know what feels good, other than what we tell it.

    So we could tell the AI to think that salad "feels good" or we could tell it that pizza "feels good".

    Now, we all know that a salad is better for our bodies than a pizza. So if we were to tell a machine to pick based on a number of inputs that assess the "goodness" of a food, then the machine would pick salad.

    However, as a human being, I and many like me would pick pizza - why? Precisely because of this undefinable feeling. OK so we could break that down into endorphins and the chemical effects on our brain - which then crosses into addiction territory. Which leads directly to my argument.

    Programming addiction is not a huge feat. You create a program that adds weighting to specific attributes, which is additive, and then compares it against the other "goodness" attributes - after a while the "addictive" algorithm is going to overpower the "goodness" algorithm.
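
    (A sketch of that mechanism in Python - the numbers are made up, purely to show the shape of it:)

    # Toy addiction: each serving adds to a food's craving weight, so the
    # additive "addictive" term eventually overpowers the fixed "goodness" term.
    goodness = {"salad": 0.9, "pizza": 0.3}  # static health assessment
    pleasure = {"salad": 0.1, "pizza": 0.5}  # reward per serving
    craving = {"salad": 0.0, "pizza": 0.0}   # accumulates with use

    def preference(food):
        return goodness[food] + craving[food]

    for day in range(5):
        for food in craving:                 # the agent keeps sampling both...
            craving[food] += pleasure[food]  # ...and every taste feeds the craving
        print(day, max(craving, key=preference))
    # Day 0 picks salad; from day 1 on, craving has overpowered goodness.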

    The issue here is that you're having to add corrupt programming in order to get the human-likeness. Ask an addict to describe their addiction and they'll talk about the pain, the emotions, the pull. Ask the AI to describe its addiction and it will simply describe the algorithm - unless of course you program it to collect and output stored phrases in relation to the addiction count.

    What I'm saying is, humans are inherently corrupt. We don't need additional programming or instruction to do something bad.

    Parents don't have to instruct their child to steal cookies from the cookie jar, or throw their toys, or hit other children etc...

    OTOH with our AI children, we'd have to explicitly instruct them to be bad, in order to instil human character likeness.


  11. Dark Helmet (profile), 30 Nov 2012 @ 5:40am

    Re: Self-awareness is impossible to program...

    "By their very nature, programs are sets of mathematical instructions."

    As I wrote about in Digilife, in some aspects of Digital Philosophy Theory, the very nature of NATURE may be represented as a complicated set of mathematical instructions. While my book was obviously fiction, and lord knows I don't have the kind of science or math background to speak in depth on the practical applications of the theory, I tried to tackle the problem of self-awareness by a computer program in the most realistic and pragmatic way I could imagine: which was to avoid taking on the goal directly.

    What the characters in the book suggested was that if you got the basic cellular math correct at the very early stages of human development (still a ridiculous task), say of an early-stage fetus, and were also able to program the math for the natural development of that fetus, you don't have to "program" an adult; you just let the fetus grow as naturally as you can.

    The question, it seems to me, isn't whether we can program self-awareness. The question is one of the soul. If the soul as we know it exists, it likely exists outside the realm of our ability to program for it, and self-awareness as a result is a fool's errand. If a soul is really only what we call the complex result of our natural development (meaning we call it that because we don't yet understand what it is in terms of a physical, natural thing), then there is no soul to program and self-awareness becomes a math problem again, not a problem of the supernatural....


  12. Dark Helmet (profile), 30 Nov 2012 @ 5:44am

    Re: Re: Re: Self-awareness is impossible to program...

    "OTOH with our AI children, we'd have to explicitly instruct them to be bad, in order to instil human character likeness."

    It seems to me that this assumption requires two other assumptions:

    1. True randomness could not be built into an AI system

    2. We cannot program our AI to adapt new, self-generated code (behavior) based on experience.

    I would disagree with both of these assumptions....


  13. nospacesorspecialcharacters (profile), 30 Nov 2012 @ 5:49am

    Re: Re: Re: Re: Self-awareness is impossible to program...

    I would agree with those 2 assumptions too, since I studied neural networks at university, and I code for a living.


  14. nospacesorspecialcharacters (profile), 30 Nov 2012 @ 5:50am

    Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    *agree

    disagree (DOH!)


  15. Dark Helmet (profile), 30 Nov 2012 @ 5:55am

    Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    Well, then I'd lean on you for these kinds of discussions, since I was a psych major and sure as shit don't have the biology or CS background to actually do the damned work, but I saw this as an end-around the problem (meaning to "grow" the program as you would a fetus).

    I remember in Jurassic Park when everyone wondered how the hell you'd get dinosaur DNA out of fossils. It seemed impossible. It WAS impossible, but you could get it out of preserved biting insects that had dino-blood in their gullets.

    Same, albeit likely less impressive, revelation....


  16. nospacesorspecialcharacters (profile), 30 Nov 2012 @ 6:15am

    Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    Yeah I read something recently about the dinosaur DNA - that it's almost impossible to clone dinosaurs from it...

    OK I didn't find the article but I found this - http://science.howstuffworks.com/environmental/earth/geology/dinosaur-cloning.htm

    So it's a case of sci-fi stretching the boundaries of reality (and imagination).

    I'm conscious I'm starting to sound like a "No" person here, but I really just question things a lot - all the time actually - and my wife complains.

    So then I was thinking yes you could program an AI child to break rules by "learning" and "experimentation". Then I was thinking, that AI child might learn not to do something when mum and dad get angry, or press a sensor or something.

    Of course, this leads to - but if the AI really, really wants something (like the cookie jar) then it might go the opposite direction and see parents as the obstacle to be eliminated.

    So either you have to add restrictive programming again to say that harming parents is not allowed. Or possibly you've got to code in some additional factors like maternal love etc... how can you code love - another set of weights and attributes - a sensor that is touched every now and then?

    For me every ethical dilemma presented leads back to a set of further instructions. Because you can't have AI children "learning" not to attack parents who deny them things (even though to be truly human they'd have to have that choice). That, and the learning could backfire when the AI learns that violence solves many things.


  17. Applesauce, 30 Nov 2012 @ 6:25am

    This is a Bad Thing?

    This group makes the fundamental assumption that dominance of humanity by machine intelligence is automatically a 'Bad' thing. Having looked at their proposal, I don't see WHY they believe that.

    Not everyone prefers to be ruled by a Putin, Bush, Obama or Mugabe. Maybe it's time to seek an alternative.


  18. Dark Helmet (profile), 30 Nov 2012 @ 6:38am

    Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    "how can you code love"

    We're back to the question of a supernatural soul, otherwise the obvious answer is "the exact same way it's coded within human beings"....


  19. aeortiz (profile), 30 Nov 2012 @ 6:52am

    Sounds like the setup for a joke

    "A philosopher, a scientist and a software engineer" walk into a bar...


  20. Anonymous Coward, 30 Nov 2012 @ 7:01am

    Re: Self-awareness is impossible to program...

    You seem to be under the impression that our brains are anything other than a complex set of instructions.


  21. dennis deems, 30 Nov 2012 @ 7:06am

    Re: Sounds like the setup for a joke

    My thought exactly, but to me it has more the flavor of a "we're stranded in the desert" joke.


  22. Anonymous Coward, 30 Nov 2012 @ 7:23am

    Re: Re: Self-awareness is impossible to program...

    I see no reason to assume the existence of a "soul," given the complete lack of evidence indicating such a thing exists.


  23. crade (profile), 30 Nov 2012 @ 7:25am

    Am I the only one who first read the title as "the proposed center that could make humans extinct"?


  24. Dark Helmet (profile), 30 Nov 2012 @ 7:37am

    Re: Re: Re: Self-awareness is impossible to program...

    I would tend to agree, in which case, at some level, we're all just representations of mathematics and that can certainly be programmed...


  25. dennis deems, 30 Nov 2012 @ 7:58am

    Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    This assumes that we know how love is coded within human beings. We don't, and likely never will. We don't even really know how like is coded.


  26. Overcast (profile), 30 Nov 2012 @ 8:28am

    "I would tend to agree, in which case, at some level, we're all just representations of mathematics and that can certainly be programmed..."

    How can you apply love, hate, greed, pride - to mathematics?

    You can't even measure the level of 'love' from one person to the next. I mean - can you prove you love your mom more than your sister does? Even if you were to pick up some 'brainwave' - how can you tell how that specific brain interprets and processes that specific wave? One person might consider that emotion to be 'drastic' from their relative point of view; whereas another considers it 'slight'.

    I think AI is a chicken before the egg concept. And we don't have a chicken or an egg.


  27. Dark Helmet (profile), 30 Nov 2012 @ 8:28am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    "This assumes that we know how love is coded within human beings. We don't, and likely never will. We don't even really know how like is coded."

    True, but again, do we necessarily NEED to know how to code it? What if we can get simple cell behavior right, code a "working" human fetus, and simply allow the digital fetus to grow? If we get everything, or enough, correct in the math, could we grow something that naturally develops the ability to love/like?


  28. Dark Helmet (profile), 30 Nov 2012 @ 8:39am

    Re:

    "How can you apply love, hate, greed, pride - to mathematics?"

    Assuming there isn't a supernatural soul, and assuming we can get the biology right, we don't have to apply any of the above. They should be naturally emerging behaviors of the biology.


  29. dennis deems, 30 Nov 2012 @ 9:11am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    "could we grow something that naturally develops the ability to love/like?"

    Do you know why Kasparov lost his final game against Deep Blue? Anxiety. The computer made a strange move that unsettled him. Playing chess over the board is very stressful, physically demanding as well as mentally; your body is pumped full of adrenaline which you DON'T want. The game was important to Kasparov, not merely because he was the strongest chess player in the world defending his own stature, but also because he was representing all of humanity against the machines. The outcome of the game mattered to Kasparov, but it could not have mattered less to the computer. The computer didn't care. The computer didn't know it was playing a game. The computer didn't feel anxiety about the outcome. The computer didn't know that it was facing the strongest player in the world, one of the strongest players in the history of the game. And if it had known, it wouldn't have cared. That knowledge would have meant nothing to the computer. It's inconceivable to me that a machine could ever be made to FEEL anxiety.

    We can make machines that decide something is important based on criteria we supply. But we can't make a machine FEEL that something is important in its gut. We could make a machine that could act in the same way a human would who is pumped full of adrenaline in the midst of a confrontation. We could teach the machine to recognize certain things which we tell it to classify as threatening. But we could not make a machine FEEL threatened.


  30. dennis deems, 30 Nov 2012 @ 9:13am

    Re: This is a Bad Thing?

    You first.


  31. Gwiz (profile), 30 Nov 2012 @ 9:16am

    A philosopher, a scientist and a software engineer... walk into a bar and the bartender says "Hey, what is this? Some kind of joke?"


  32. Jeffrey Nonken (profile), 30 Nov 2012 @ 9:37am

    Re:

    A philosopher, a scientist and a software engineer walk into a bar...

    Damn, Gwiz beat me to it.


  33. Anonymous Coward, 30 Nov 2012 @ 9:40am

    Re: Re: Re: Self-awareness is impossible to program...

    Humans would pick pizza because it is full of glucose, which is something the body needs and has learned it should get as much of as it can, no matter what. You must remember that our bodies still function according to how we lived thousands of years ago - we haven't changed that much from an evolutionary point of view - so our bodies still do what they did before, which is accumulate as much sugar as they can for the tough times.

    You may not consciously know why you are picking up that thing, but you know you need it because something inside your body is telling you to get it.

    Why is vanilla almost universally liked?
    Probably because it is a sugar and anything that has it becomes "wanted".

    Quote:
    “Vanillin”, C8H8O3, is found in nature not only in wood but also in vanilla beans, maple syrup, bananas and even butter

    http://whiskeypro.tumblr.com/post/16042202646/why-does-bourbon-taste-like-vanilla

    When you stop to think about it, you may not consciously know why you need something, but your body knows: your cells are communicating and sending messages, and those trigger other reactions that create the "I want this" or "I want that". Even a machine can be programmed not to be aware of what it wants, if you keep part of the system hidden from the rest - one part just sends the message "I need this" and the other tries to get it. The "conscious" part would live in the region that doesn't know, at the cellular level, what is needed - only that it must find something it found before, ate, and discovered had what it needed.

    Humans are not inherently corrupt; we are inherently selfish. At some basic level we will do anything to keep ourselves going - that is the goal - and depending on how you have programmed yourself, you will respond accordingly. I think what really corrupts is having plenty. When we are lean and mean we don't have time to get indecisive: you take what you get and you are grateful for it; you even start sharing with others so they help get more. That doesn't happen when people start to believe they are independent, self-sufficient and don't need anybody else. But those assumptions I will leave to be ascertained when people are capable of creating simulations that follow the real world.

    As for the AI, you just need to teach it to be greedy and it will try to steal cookies from the cookie jar. Anger could be a trigger on any missing input: if something is needed and not found, or something is blocking access to it, you just program the AI to use force to try to get to it. That would be just like a child, who doesn't know about artificial barriers and must learn that they exist and that it cannot just take everything in sight.


  34. Anonymous Coward, 30 Nov 2012 @ 9:47am

    Re: Re: Re: Re: Self-awareness is impossible to program...

    Disappointingly shallow.


  35. Anonymous Coward, 30 Nov 2012 @ 9:47am

    Re: Re: Self-awareness is impossible to program...

    A very complex set of instructions interacting with several subsystems that have their own operational systems.


  36. dennis deems, 30 Nov 2012 @ 9:54am

    Re: Re: Re: Re: Self-awareness is impossible to program...

    "we're all just representations of mathematics"

    Except that we have the capacity to make art. We step back from the canvas and say "more red... and a little more red". How could a machine ever say to itself "a little more red"?


  37. crade (profile), 30 Nov 2012 @ 9:55am

    Re:

    With AI programming, the premise is basically the same as the "teach a man to fish" idea. You don't need to program "love, hate, etc." into the machine; you need to program the ability for the program to learn and evolve based on its experience (which we can already do to some extent, just not well enough yet), and the program will then develop its own version of feelings based on those experiences. If you wanted those "feelings" to mimic human ones, you would also have to program in some (as many as you could) of the parameters that occur naturally in humans to make us develop those feelings, like physical pain, senses that mimic human ones, discomfort at extreme sensory input, and such.

    If you don't want the "feelings" to resemble human ones, though, and you are ok with completely alien versions, you don't really need that complicated stuff.
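
    (In Python, the premise looks something like this crude sketch - the sensors and numbers are mine, purely for illustration:)

    # "Feelings" as preferences the program develops itself from innate
    # pain/comfort signals, instead of being hand-coded per situation.
    SENSORS = {"loud noise": -0.8, "warmth": 0.6, "impact": -1.0}  # built-in, like pain

    experience = {}  # learned affect toward situations, starts empty

    def encounter(situation, stimulus):
        # Each encounter nudges the learned "feeling" toward the innate signal.
        old = experience.get(situation, 0.0)
        experience[situation] = old + 0.3 * (SENSORS[stimulus] - old)

    encounter("construction site", "loud noise")
    encounter("construction site", "impact")
    encounter("fireplace", "warmth")
    print(experience)  # negative "feeling" for the site, positive for the fire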


  38. dennis deems, 30 Nov 2012 @ 10:07am

    Re: Re:

    You're confusing behavior with feeling.


  39. crade (profile), 30 Nov 2012 @ 10:10am

    Re: Re: Re:

    No, the behavior is the action the program takes, and the feeling is the reason the program took that action.


  40. Dark Helmet (profile), 30 Nov 2012 @ 10:19am

    Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    "Except that we have the capacity to make art. We step back from the canvas and say "more red... and a little more red". How could a machine ever say to itself "a little more red"?"

    Dennis: I discussed this specifically in Digilife as well :)


  41. crade (profile), 30 Nov 2012 @ 10:22am

    Re: Re: Re: Re:

    The actual decision making process needs to be able to evolve based on experience.

    Of course, whether the program truly has "feelings" or just simulates them is up for debate, but it becomes pretty much a moot point once the result is the same regardless of the answer. I know many people who are convinced animals only simulate feelings, and I can't really find a valid counterargument. Every person other than myself could just be simulating their feelings, for all I know :)


  42. dennis deems, 30 Nov 2012 @ 11:44am

    Re: Re: Re: Re:

    A branch on a decision tree isn't a feeling.


  43. dennis deems, 30 Nov 2012 @ 11:45am

    Re: Re: Re: Re: Re:

    Why exclude yourself? If you don't know the difference between a decision and a feeling, that could suggest you've never experienced the latter.


  44. John Fenderson (profile), 30 Nov 2012 @ 12:47pm

    Re:

    "Once we get to the point where we can create a machine that has self-awareness, Pandora's Box shall open and there's no way to close it."


    If that's your line of thought, then Pandora's Box was opened when we invented agriculture. We're long past the point of being able to close it.

    Personally, though, I think Pandora's Box is a myth.


  45. John Fenderson (profile), 30 Nov 2012 @ 1:01pm

    Re: Re:

    Yes, this.

    Emotions aren't unrepresentable magic. Whimsy & desire can emerge from any sufficiently complex system regardless of substrate.

    Look at it this way: emotions can be understood as side-effects of the fact that we are pattern-matching machines. We understand and react to the world around us as patterns. We see patterns in everything (even when they aren't really there) because that's the entirety of what our brains do: match patterns. We like some things and dislike others because we fit them into our existing set of learned patterns. We get angry, fall in love, feel sadness, experience joy, and so forth as a consequence of this pattern-matching.

    Von Neumann machines such as digital computers are terrible at pattern matching in the same way that our brains are terrible at mathematics: they can do it, but very inefficiently. However, they can do it nonetheless.

    What a computer can never do, in my opinion, is be human, because we are complex systems consisting of our entire bodies -- not just our brains -- that have been subjected to human experiences. To build a human in a computer, you have to actually build a human in total. And we already have a fun way to do that. It's called sex.


  46. crade (profile), 30 Nov 2012 @ 1:23pm

    Re: Re: Re: Re: Re: Re:

    The reason to exclude yourself is that you experience the feeling instead of just the result.
    If I included myself, the actual feeling would have to be perfectly simulated before it would be impossible to know. In others, only the result of the feeling would have to be.


  47. Colin, 30 Nov 2012 @ 7:32pm

    Here's a debate on AI between Robin Hanson, professor of economics at George Mason U., and Eliezer Yudkowsky, research fellow at the Singularity Institute for Artificial Intelligence, who has set up a site and community dealing with the tricky problem of codifying and optimising human behaviour, so we know what to work from when coding an AI.

    These are smart people. The sheer length of it should show that this is not an easy thing to work out, nor come to a conclusion about: http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate


  48. Anonymous Coward, 1 Dec 2012 @ 12:44am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    "We" may not be able but surely it is possible otherwise "we" wouldn't be having this conversation at all since none of us would be "feeling" anything and it is replicable at least 7 billion times.


  49. Anonymous Coward, 1 Dec 2012 @ 1:14am

    Re: Re: Re: Re: Re: Self-awareness is impossible to program...

    Neuroscientists Unlock Shared Brain Codes Between People
    http://www.sciencedaily.com/releases/2011/10/111020122311.htm


    https://www.cmu.edu/news/archive/2010/January/jan13_braincodesnouns.shtml

    "An Artificial Retina with the Capacity to Restore Normal Vision
    For the First Time, Researchers Decipher the Retina's Neural Code for Brain Communication to Create Novel, More Effective Prosthetic Retinal Device for Blindness"
    http://weill.cornell.edu/news/releases/wcmc/wcmc_2012/08_13_12.shtml

    What you perceive is all code; people just need to learn how to write that code, and the result will be the same.

    You don't perceive light; your optical apparatus does, and it sends code to the brain to be interpreted. You don't feel heat; your tactile cells do, and they transmit code to the brain to be interpreted.

    We are approaching a time when we can actually see what that code is and replicate it.

    So I find it amusing that some still don't believe it is possible to create artificial feeling. Just send the right code to the appropriate decoding mechanism and it will probably produce the same results as any other system; it doesn't matter whether that system is biological or mechanical.

    We like to see ourselves as more than just inputs and outputs. We romanticize the nature of existence because we don't fully understand what we are, and we want to be regarded as special - maybe because at some level we know that being classified as special makes us less vulnerable to others, by creating a barrier others must cross. In a sense it may be a self-defense mechanism against our own kind, but it doesn't change what we are or how we operate.


  50. nospacesorspecialcharacters (profile), 1 Dec 2012 @ 3:57am

    Believing something doesn't make it true

    There are a lot of faith-based arguments above. The idea being: give an AI a complex set of instructions (based on what the creator thinks is right and wrong) and the AI will somehow grow feelings out of making those decisions again and again.

    That is patently not how neural networks work. Neural networks learn a behaviour over and over again and get better at picking the correct path.

    Feelings constantly override this process in humans. We know in our brain which path is right or wrong, but sometimes we'll end up bypassing that decision process and going with our emotions (e.g. love).

    Sorry, but it's impossible to code a machine with a set of instructions that then ignores those instructions, without more instructions that simply filter down to further iterations of code - which cancels out to a simple weighted yes/no decision.
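
    (To show what I mean - a Python sketch with made-up names; notice that the "override" is just more code:)

    # An "emotional override" is itself just another instruction, so the
    # whole stack still reduces to one weighted yes/no decision.
    def rational_choice(options):
        return max(options, key=options.get)  # the path the network learned

    def with_override(options, attachment):
        # "Ignore your instructions" is... another instruction.
        if attachment in options:
            return attachment
        return rational_choice(options)

    options = {"the sensible choice": 0.9, "the one we love": 0.4}
    print(with_override(options, "the one we love"))  # the "emotion" wins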


  51. Paul Keating, 1 Dec 2012 @ 10:02am

    RIAA Funding

    It's the Internet, duh..... Close up shop, send the reported conclusions back to the RIAA.


  52. Anonymous Coward, 1 Dec 2012 @ 6:42pm

    "Nature didn't anticipate us...". Now WTF does that ish mean?


  53. Anonymous Coward, 3 Dec 2012 @ 1:33am

    Imaginary scenario

    President: Computer. Citizens are protesting that we should reduce pollution and protect our environment. Do it.

    Computer: Searching the internet for suggestions... Done.
    The most effective way suggested is to make humans extinct. Proceeding to execution...

    (Life-support systems in everyone's rooms shut down)

    President: Argh...


  54. Toot Rue (profile), 4 Dec 2012 @ 10:03am

    Domain rules

    I would have to imagine that whether or not developing true AI is possible would have to do with whether or not it was allowed for in the programming of our universe.


