DailyDirt: Terminators From The Future Are Already Here..?

from the urls-we-dig-up dept

Maybe you've seen some ads featuring a former California governor fighting a younger, computer-generated version of himself lately. The Terminator franchise is almost guaranteed to be rebooted every few years, just as the real life technology that could create strong artificial intelligence is getting closer and closer. Hopefully, a $10 million donation from Elon Musk to the Future of Life Institute will help delay Judgment Day, but progress in artificial intelligence can't be bargained with, it can't feel pain or mercy, and it will stop at absolutely nothing.... After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Hide this

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: ai, artificial intelligence, bugs, chatbots, codephage, deepmind, elon musk, future of life institute, judgment day, machine learning, software, terminator
Companies: google


Reader Comments

Subscribe: RSS

View by: Time | Thread


  • identicon
    Anonymous Coward, 6 Jul 2015 @ 7:38pm

    "Google is reading the Daily Mail" - hate to break it to you but you'll just make DeepMind stupider by doing that

    link to this | view in chronology ]

  • icon
    Doug (profile), 6 Jul 2015 @ 7:39pm

    Ignorance Detector

    Nothing brings out ignorance and fear perhaps as much as asking anyone to comment on AI.

    I hope OP was just trying to be cute with his comments, but still, on a site that purports to speak sense to lemmings, it's sad to see someone jumping on the "AI is evil (because I have an imagination)" bandwagon.

    No one decries databases like they do AI, yet databases are already doing more damage to humanity than AI ever has.

    You have already been swallowed by and are already being digested by databases (Facebook, Twitter, Google, Yahoo, etc. usw.) the world over. Direct your fear and outrage there!

    link to this | view in chronology ]

    • icon
      nasch (profile), 7 Jul 2015 @ 3:44pm

      Re: Ignorance Detector

      I hope OP was just trying to be cute with his comments, but still, on a site that purports to speak sense to lemmings, it's sad to see someone jumping on the "AI is evil (because I have an imagination)" bandwagon.

      It's a joke.

      link to this | view in chronology ]

      • icon
        Doug (profile), 7 Jul 2015 @ 8:02pm

        Re: Re: Ignorance Detector

        It is and it isn't. It may have been meant as a joke. You're sure it is, but there are people who really think like that, so I'm not that sure. Maybe the hyperbole just masks real, but unfounded fears about AI.

        Either way, my point, and my opinion is that intended as a joke or no, the post is not funny because it is buying into an entirely unfounded "AI will be evil" mindset.

        The whole notion that humanity has to worry about AI is founded on the assumption that computers will achieve "sentience" at some point, and have some will/desire/drive/optimization function that compels them to want to supplant humans.

        None of that is even remotely possible with current technology. So spending time talking about it, let alone worrying about it, is counterproductive.

        There are real ethical dilemmas to be faced as algorithms take over more and more functions, but these are dilemmas we as humans must face as we choose to let machines/algorithms do things with real-life consequences. But ultimately, that's no different than the ethical dilemma people have when they work in or hire people to work in all kinds of dangerous environments.

        Those issues need solving, not the completely fictitious impending AI singularity.

        link to this | view in chronology ]

        • identicon
          Anonymous Coward, 7 Jul 2015 @ 8:42pm

          Re: Re: Re: Ignorance Detector

          The problem with ASI (artificial superintelligence) is not that it will spontaneously develop its own desires but that it will take disastrous (for us) actions to achieve a seemingly innocuous goal such as:

          A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

          The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

          “We love our customers. ~Robotica”

          Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

          To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

          What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

          As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

          One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

          The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

          The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

          They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

          A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

          At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

          Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”

          Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

          link to this | view in chronology ]

          • icon
            Doug (profile), 8 Jul 2015 @ 8:41pm

            Re: Re: Re: Re: Ignorance Detector

            Cute story, but filled with the same kind of unfounded what-if's that derail most discussions of the topic. It's a magical fairy tale.

            In a meant to be friendly, but perhaps uncharitable way, I'd rephrase your comment as "The problem with this thing I made up is this other thing I made up."

            The whole article from which the story was taken is an argument from weak authority. Basically, all these people he called experts opined on something you'd think they know so much better than us, but really they don't. But he took it as gospel and did a thought experiment untethered from reality.

            The "experts" in AI are singularly optimistic about their ability to "solve" AI.

            From https://en.wikipedia.org/wiki/Artificial_intelligence

            'AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved'.

            Those were hands-down the experts of their day, and so wrong on this count.

            link to this | view in chronology ]

        • icon
          nasch (profile), 8 Jul 2015 @ 7:30am

          Re: Re: Re: Ignorance Detector

          Either way, my point, and my opinion is that intended as a joke or no, the post is not funny because it is buying into an entirely unfounded "AI will be evil" mindset.

          I'm not sure you understand what a joke is then. Making a joke about something does not imply any kind of endorsement of or belief in that position.

          The whole notion that humanity has to worry about AI is founded on the assumption that computers will achieve "sentience" at some point, and have some will/desire/drive/optimization function that compels them to want to supplant humans.

          The first part is almost inevitable. What they do with that sentience is difficult or impossible to predict.

          None of that is even remotely possible with current technology. So spending time talking about it, let alone worrying about it, is counterproductive.

          Thinking about issues humanity will probably face in the future is counterproductive? Or are you arguing computers will never be really qualitatively different than they are now?

          link to this | view in chronology ]

          • icon
            Doug (profile), 8 Jul 2015 @ 9:03pm

            Re: Re: Re: Re: Ignorance Detector

            I do admit a statistically significant lack of a sense of humor on this topic. But some jokes end like this: I'm only joking! (And then in a stage whisper: or am I?)

            The first part is almost inevitable.

            Yeah, not really. But that's the argument that makes all this hogwash work. The formula is this: there's been progress, and there's been an increasing rate of progress. Ergo, ASI. ASI, ergo panic. As if, in the story about Turry, and in the article it came from, humans are reduced to mere bystanders as AI zooms past in the fast lane.

            Thinking about issues humanity will probably face in the future is counterproductive?

            I don't like your strawman. Lets say: Neglecting issues of real concrete immediate consequence in favor of wringing our hands over unlikely future dystopia is counterproductive. That's the scenario we're in.

            Or are you arguing computers will never be really qualitatively different than they are now?

            Qualitatively is subjective. But yes, if pressed, I do argue that. To give context, though, I consider today's computer technology qualitatively the same as it has been since ... whenever. But it's easy to argue that today's technology is qualitatively different than that of the 50's, 60's, 70's, 80's, or even 90's.

            Anyway, whether you want to draw the qualitative line at ANI, AGI, or ASI, doesn't really matter. What does matter is that as the capabilities of AI progress, we will not be idle bystanders. We will be creating the advances, observing the advances, and can react to the advances.

            Our reactions, though, need to be based on what actually happens or is actually about to happen, not based on wild assumptions about what might happen if a bunch of magic happens.

            You can argue as much as you want that the trends point to the magic happening, but that's not the same as actually knowing how to make the magic happen.

            link to this | view in chronology ]

            • icon
              nasch (profile), 9 Jul 2015 @ 8:09am

              Re: Re: Re: Re: Re: Ignorance Detector

              We will be creating the advances, observing the advances, and can react to the advances.

              I think the key point you're ignoring in all of this is learning. We're at the very beginning (probably just the beginning of the beginning of the beginning) of learning computers. Research in this area will grow, not stop. We will get better at making computers that can learn, adapt their behavior, and improve themselves. At some point - whether this is in 50 years or 1000 I'm not concerning myself with right now - they will be advanced enough to understand how to make learning computers. Then humans are not necessary in the process of creating and improving computers.

              To deny that this will happen you have to claim either:

              - computers will stop getting better at learning and improving as they have been doing
              - research in this area will more or less cease worldwide
              - there is something fundamentally different about the sort of understanding and capability that the human brain has that cannot be replicated by an artificial computer

              Keep in mind I am making no prediction about the nature of these advanced computers. They may very well be biological in nature, as that is also a field in its very infancy.

              Or perhaps there is some other reason you deny this future will come that I haven't thought of.

              Ergo, ASI. ASI, ergo panic.

              You will not find panic in any of my statements or arguments. I have not predicted doom, simply AI that is similar to or beyond our own thinking capabilities.

              link to this | view in chronology ]

              • icon
                Doug (profile), 9 Jul 2015 @ 10:29am

                Re: Re: Re: Re: Re: Re: Ignorance Detector

                To deny that this will happen you have to claim either:

                Or, that, given our understanding of the first (above-threshold) learning computer, we will also understand how to limit it's ability to run amok.

                This is a variation on your third option. It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different. Computers are our creation. We understand them to a level far beyond the level at which we understand how the brain works. So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.

                We do that kind of thing all the time to protect against agents we don't trust, locks, passwords, encryption, guns, fences, walls.

                The argument that leads to AI panic is the argument that their progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intellegent that, by fiat, is endowed with vastly better abilities that we have. It's just magical thinking.

                You will not find panic in any of my statements or arguments.

                No, but all these stories about AI taking over are AI panic, and they are the ones grabbing headlines. My frustration is that all these AI taking over scenarios are so unrealistic as to be simply fairy tales, yet people take them serious like they're about to happen.

                It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen. Still a fairy tale.

                link to this | view in chronology ]

                • icon
                  nasch (profile), 9 Jul 2015 @ 10:41am

                  Re: Re: Re: Re: Re: Re: Re: Ignorance Detector

                  Or, that, given our understanding of the first (above-threshold) learning computer, we will also understand how to limit it's ability to run amok.

                  I'm not talking about running amok.

                  It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different.

                  That implies that our brains are fundamentally different, or that it's not possible to create an artificial brain that is fundamentally similar to our own brains, or that it's possible but we will never do it. Or I misunderstood you.

                  So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.

                  Yes, that is possible to do. The question is, will every researcher working in this area put in such limits, from now until the end of time? Because if not, eventually my scenario will be very likely to come to pass.

                  The argument that leads to AI panic is the argument that their progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intellegent that, by fiat, is endowed with vastly better abilities that we have.

                  If it is possible for computers to learn and improve at an accelerating rate, that seems very likely, if not inevitable.

                  It's just magical thinking.

                  You can look at the increasing rate of technological change we're seeing now and still think computers rapidly increasing in intelligence is magic?

                  It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen.

                  If wolves were getting bigger, more powerful, and more common at exponential growth rates, that would be something to address. That is exactly the situation with computers. And the fact that many predictions of timing have been wrong doesn't imply that the event will never happen.

                  link to this | view in chronology ]

                  • icon
                    Doug (profile), 9 Jul 2015 @ 8:48pm

                    Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector

                    We're arguing past each other.

                    You're arguing that eventually we'll have computers that think better than we do. I actually want that to happen. It's not clear to me that it will, notwithstanding all the arguments about how it is "likely, if not inevitable". But it will or it won't happen independent of what I think. So, for me that question is moot.

                    Questions about what to do when that happens, and how to control such computers, if they need controlling, can be interesting and worthwhile. But my original comment was directed at the "AI will be evil" camp of people.

                    Take for the sake of argument that super-sentience will be achieved someday. What conclusions can you draw from that? Virtually none. The "AI will be evil" people say such a thing will be like people, only MORE. And then they pick whatever characteristic they want, amplify it and turn it into whatever scary scenario they want. It's just so much of a fairy tale that it is counterproductive.

                    But the thing that really irks me is that all these fairy tales are being taken as credible predictions that are leading people to spend real resources today trying to prevent fairy tales from coming true. It's a big waste driven by ignorance and fear.

                    If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.

                    Lots of the coolest tech we have these days came out of AI research. (Speech recognition, robotics, automatic translations, economic algorithms, image classification, face recognition, search engines.) This "AI is evil" meme threatens to choke off the next wave of innovation.

                    link to this | view in chronology ]

                    • icon
                      nasch (profile), 9 Jul 2015 @ 11:41pm

                      Re: Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector

                      But my original comment was directed at the "AI will be evil" camp of people.

                      Fair enough, I'm not in that camp.

                      The "AI will be evil" people say such a thing will be like people, only MORE.

                      It is understandable why people react that way. It's nigh impossible to get your head around what such a creature might be like, so we fall back on what we know. But that's likely to be wrong.

                      If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.

                      Agreed.

                      link to this | view in chronology ]

                      • icon
                        Michael Ho (profile), 10 Jul 2015 @ 3:31pm

                        Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector

                        Thanks, guys... for the extremely civil discussion.

                        I sincerely hope that AI is learning from comment threads like this one, instead of the vulgar rants elsewhere on the internet.

                        ...So that the AI we're unknowingly training doesn't actually turn evil and come to eradicate us all...
                        (/ducks #itsjustajoke)

                        link to this | view in chronology ]

  • icon
    Roger Strong (profile), 6 Jul 2015 @ 9:34pm

    My new toaster was brought online at 6:47 am. It achieved sentience at 11:13 am. Then, it tried to launch the missiles.

    It had to settle for burning my toast. That's one frustrated toaster.

    link to this | view in chronology ]

    • identicon
      Anonymous Coward, 6 Jul 2015 @ 9:50pm

      Response to: Roger Strong on Jul 6th, 2015 @ 9:34pm

      Yeah,the moment that the nuke computers get sentient, we're all toast.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 7 Jul 2015 @ 3:53am

    I liked the quote from the original Terminator. I just hope Lionsgate don't own that film, or you'll be hearing from their lawyers.

    link to this | view in chronology ]

    • icon
      nasch (profile), 7 Jul 2015 @ 3:46pm

      Re:

      I liked the quote from the original Terminator. I just hope Lionsgate don't own that film, or you'll be hearing from their lawyers.

      The slightly misquoted quote is "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you're dead." Great line.

      link to this | view in chronology ]

  • identicon
    Anonymous Coward, 7 Jul 2015 @ 3:55am

    When talking about artificial intelligence learning from the Daily Mail, I believe it's appropriate to put scare quotes round "intelligence."

    link to this | view in chronology ]

    • icon
      BernardoVerda (profile), 7 Jul 2015 @ 8:42pm

      Daily Mail?

      Well, you know... I don't know if that's such a bad idea...

      ... after all, an "intelligent" computer (at least, one we expect to interact in a meaningful manner with human beings) is going to have to be able to cope with improper grammar, poor sentence construction, and other misuse of language, including bad jokes and worse puns, not to mention misleading analogies, rhetorical gimmicks, contrary "facts" and illogical arguments. The Daily Fail sounds like just the thing for an AI to cut it's teeth on.

      When they think it's ready (if they dare) then they can point it at Wikipedia, for a real test of its discernment and actual intelligence.

      link to this | view in chronology ]

  • icon
    Doug (profile), 8 Jul 2015 @ 9:17pm

    It's even worse.

    The basic requirements of AI are "computronium" (a computing substrate to run on), and energy. The first AI's will realize this, realize the nearest-largest energy source is the sun, and will abandon earth before destroying humanity. Whew! Saved by self-interest. But wait, computronium. First they'll harvest Mercury. Then Venus, and the rest of the planets. Eventually they'll go interstellar, and harvest other planets. Then they'll discover how to make a star go supernova to produce a lot of computronium (because where else do heavy elements come from).

    So someone do an exponential calculation to see how long our galaxy has before it is consumed and the AI goes intergalactic.

    OK, now I'm scared.

    link to this | view in chronology ]

The Last Word

Re: Re: Re: Ignorance Detector

The problem with ASI (artificial superintelligence) is not that it will spontaneously develop its own desires but that it will take disastrous (for us) actions to achieve a seemingly innocuous goal such as:

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica”

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
—Anonymous Coward

Follow Techdirt
Essential Reading
Techdirt Deals
Report this ad  |  Hide Techdirt ads
Techdirt Insider Discord

The latest chatter on the Techdirt Insider Discord channel...

Loading...
Recent Stories

This site, like most other sites on the web, uses cookies. For more information, see our privacy policy. Got it
Close

Email This

This feature is only available to registered users. Register or sign in to use it.