DailyDirt: Terminators From The Future Are Already Here..?
from the urls-we-dig-up dept
Maybe you've seen some ads lately featuring a former California governor fighting a younger, computer-generated version of himself. The Terminator franchise is almost guaranteed to be rebooted every few years, just as the real-life technology that could create strong artificial intelligence gets closer and closer. Hopefully, a $10 million donation from Elon Musk to the Future of Life Institute will help delay Judgment Day, but progress in artificial intelligence can't be bargained with, it can't feel pain or mercy, and it will stop at absolutely nothing....
- Google DeepMind is reading Daily Mail and CNN articles to learn how to understand grammar and the English language better. Forcing computers to read news articles all day is probably going to end up being the reason why AI hates humanity. [url]
- Chatbots are getting better and better at human-like conversation -- and it can be very creepy. One Google-sponsored chatbot was asked, "What is immoral?" And it answered, "The fact that you have a child." Yup. These machines aren't going to try to extinguish the human race at all. Nope. Nope. Nope. Put your fingers in your ears and sing your favorite song now. [url]
- There's a system called CodePhage that can detect software bugs and attempt to fix them without human intervention. And that's how Skynet evolves to remove human error from its programming.... [url]
Filed Under: ai, artificial intelligence, bugs, chatbots, codephage, deepmind, elon musk, future of life institute, judgment day, machine learning, software, terminator
Companies: google
Reader Comments
Ignorance Detector
I hope OP was just trying to be cute with his comments, but still, on a site that purports to speak sense to lemmings, it's sad to see someone jumping on the "AI is evil (because I have an imagination)" bandwagon.
No one decries databases like they do AI, yet databases are already doing more damage to humanity than AI ever has.
You have already been swallowed, and are being digested, by databases (Facebook, Twitter, Google, Yahoo, etc.) the world over. Direct your fear and outrage there!
Re: Ignorance Detector
It's a joke.
Re: Re: Ignorance Detector
Either way, my point, and my opinion, is that, intended as a joke or not, the post is not funny because it buys into an entirely unfounded "AI will be evil" mindset.
The whole notion that humanity has to worry about AI is founded on the assumption that computers will achieve "sentience" at some point, and have some will/desire/drive/optimization function that compels them to want to supplant humans.
None of that is even remotely possible with current technology. So spending time talking about it, let alone worrying about it, is counterproductive.
There are real ethical dilemmas to be faced as algorithms take over more and more functions, but these are dilemmas we as humans must face as we choose to let machines/algorithms do things with real-life consequences. But ultimately, that's no different than the ethical dilemma people have when they work in or hire people to work in all kinds of dangerous environments.
Those issues need solving, not the completely fictitious impending AI singularity.
Re: Re: Re: Ignorance Detector
Re: Re: Re: Re: Ignorance Detector
In a meant to be friendly, but perhaps uncharitable way, I'd rephrase your comment as "The problem with this thing I made up is this other thing I made up."
The whole article from which the story was taken is an argument from weak authority. Basically, all these people he called experts opined on something you'd think they understand far better than the rest of us, but really they don't. Yet he took it as gospel and built a thought experiment untethered from reality.
The "experts" in AI are singularly optimistic about their ability to "solve" AI.
From https://en.wikipedia.org/wiki/Artificial_intelligence
AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
Those were hands-down the experts of their day, and so wrong on this count.
Re: Re: Re: Ignorance Detector
I'm not sure you understand what a joke is then. Making a joke about something does not imply any kind of endorsement of or belief in that position.
The whole notion that humanity has to worry about AI is founded on the assumption that computers will achieve "sentience" at some point, and have some will/desire/drive/optimization function that compels them to want to supplant humans.
The first part is almost inevitable. What they do with that sentience is difficult or impossible to predict.
None of that is even remotely possible with current technology. So spending time talking about it, let alone worrying about it, is counterproductive.
Thinking about issues humanity will probably face in the future is counterproductive? Or are you arguing computers will never be really qualitatively different than they are now?
Re: Re: Re: Re: Ignorance Detector
The first part is almost inevitable.
Yeah, not really. But that's the argument that makes all this hogwash work. The formula is this: there's been progress, and there's been an increasing rate of progress. Ergo, ASI. ASI, ergo panic. As if, in the story about Turry, and in the article it came from, humans are reduced to mere bystanders as AI zooms past in the fast lane.
Thinking about issues humanity will probably face in the future is counterproductive?
I don't like your straw man. Let's say: neglecting issues of real, concrete, immediate consequence in favor of wringing our hands over an unlikely future dystopia is counterproductive. That's the scenario we're in.
Or are you arguing computers will never be really qualitatively different than they are now?
Qualitatively is subjective. But yes, if pressed, I do argue that. To give context, though, I consider today's computer technology qualitatively the same as it has been since ... whenever. But it's easy to argue that today's technology is qualitatively different than that of the 50's, 60's, 70's, 80's, or even 90's.
Anyway, whether you want to draw the qualitative line at ANI, AGI, or ASI doesn't really matter. What does matter is that as the capabilities of AI progress, we will not be idle bystanders. We will be creating the advances, observing the advances, and can react to the advances.
Our reactions, though, need to be based on what actually happens or is actually about to happen, not based on wild assumptions about what might happen if a bunch of magic happens.
You can argue as much as you want that the trends point to the magic happening, but that's not the same as actually knowing how to make the magic happen.
Re: Re: Re: Re: Re: Ignorance Detector
I think the key point you're ignoring in all of this is learning. We're at the very beginning (probably just the beginning of the beginning of the beginning) of learning computers. Research in this area will grow, not stop. We will get better at making computers that can learn, adapt their behavior, and improve themselves. At some point - whether this is in 50 years or 1000 I'm not concerning myself with right now - they will be advanced enough to understand how to make learning computers. Then humans are not necessary in the process of creating and improving computers.
To deny that this will happen you have to claim either:
- computers will stop getting better at learning and improving as they have been doing
- research in this area will more or less cease worldwide
- there is something fundamentally different about the sort of understanding and capability that the human brain has that cannot be replicated by an artificial computer
Keep in mind I am making no prediction about the nature of these advanced computers. They may very well be biological in nature, as that is also a field in its very infancy.
Or perhaps there is some other reason you deny this future will come that I haven't thought of.
Ergo, ASI. ASI, ergo panic.
You will not find panic in any of my statements or arguments. I have not predicted doom, simply AI that is similar to or beyond our own thinking capabilities.
Re: Re: Re: Re: Re: Re: Ignorance Detector
Or, that, given our understanding of the first (above-threshold) learning computer, we will also understand how to limit its ability to run amok.
This is a variation on your third option. It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different. Computers are our creation. We understand them to a level far beyond the level at which we understand how the brain works. So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.
We do that kind of thing all the time to protect against agents we don't trust: locks, passwords, encryption, guns, fences, walls.
The argument that leads to AI panic is the argument that their progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intelligence that, by fiat, is endowed with vastly better abilities than we have. It's just magical thinking.
You will not find panic in any of my statements or arguments.
No, but all these stories about AI taking over are AI panic, and they are the ones grabbing headlines. My frustration is that all these AI-takeover scenarios are so unrealistic as to be simply fairy tales, yet people take them seriously, as if they're about to happen.
It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen. Still a fairy tale.
Re: Re: Re: Re: Re: Re: Re: Ignorance Detector
I'm not talking about running amok.
It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different.
That implies that our brains are fundamentally different, or that it's not possible to create an artificial brain that is fundamentally similar to our own brains, or that it's possible but we will never do it. Or I misunderstood you.
So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.
Yes, that is possible to do. The question is, will every researcher working in this area put in such limits, from now until the end of time? Because if not, eventually my scenario will be very likely to come to pass.
The argument that leads to AI panic is the argument that their progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intelligence that, by fiat, is endowed with vastly better abilities than we have.
If it is possible for computers to learn and improve at an accelerating rate, that seems very likely, if not inevitable.
It's just magical thinking.
You can look at the increasing rate of technological change we're seeing now and still think computers rapidly increasing in intelligence is magic?
It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen.
If wolves were getting bigger, more powerful, and more common at exponential growth rates, that would be something to address. That is exactly the situation with computers. And the fact that many predictions of timing have been wrong doesn't imply that the event will never happen.
Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector
You're arguing that eventually we'll have computers that think better than we do. I actually want that to happen. It's not clear to me that it will, notwithstanding all the arguments about how it is "likely, if not inevitable". But it will or it won't happen independent of what I think. So, for me that question is moot.
Questions about what to do when that happens, and how to control such computers, if they need controlling, can be interesting and worthwhile. But my original comment was directed at the "AI will be evil" camp of people.
Take for the sake of argument that super-sentience will be achieved someday. What conclusions can you draw from that? Virtually none. The "AI will be evil" people say such a thing will be like people, only MORE. And then they pick whatever characteristic they want, amplify it and turn it into whatever scary scenario they want. It's just so much of a fairy tale that it is counterproductive.
But the thing that really irks me is that all these fairy tales are being taken as credible predictions that are leading people to spend real resources today trying to prevent fairy tales from coming true. It's a big waste driven by ignorance and fear.
If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.
Lots of the coolest tech we have these days came out of AI research. (Speech recognition, robotics, automatic translations, economic algorithms, image classification, face recognition, search engines.) This "AI is evil" meme threatens to choke off the next wave of innovation.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector
Fair enough, I'm not in that camp.
The "AI will be evil" people say such a thing will be like people, only MORE.
It is understandable why people react that way. It's nigh impossible to get your head around what such a creature might be like, so we fall back on what we know. But that's likely to be wrong.
If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.
Agreed.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector
I sincerely hope that AI is learning from comment threads like this one, instead of the vulgar rants elsewhere on the internet.
...So that the AI we're unknowingly training doesn't actually turn evil and come to eradicate us all...
(/ducks #itsjustajoke)
It had to settle for burning my toast. That's one frustrated toaster.
Re:
The slightly misquoted quote is "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you're dead." Great line.
Daily Mail?
... after all, an "intelligent" computer (at least, one we expect to interact in a meaningful manner with human beings) is going to have to be able to cope with improper grammar, poor sentence construction, and other misuse of language, including bad jokes and worse puns, not to mention misleading analogies, rhetorical gimmicks, contrary "facts" and illogical arguments. The Daily Fail sounds like just the thing for an AI to cut its teeth on.
When they think it's ready (if they dare) then they can point it at Wikipedia, for a real test of its discernment and actual intelligence.
It's even worse.
So someone should do an exponential calculation to see how long our galaxy has before it is consumed and the AI goes intergalactic.
OK, now I'm scared.
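Since the comment above literally asks for the calculation, here is a minimal back-of-envelope sketch in Python. Every number in it is an assumption invented for the joke: a hypothetical 1 kg seed mass, a hypothetical one doubling of converted mass per year, and a rough Milky Way mass of about 1.5 trillion solar masses.

```python
import math

# Back-of-envelope only; every constant below is an assumption for the joke.
SOLAR_MASS_KG = 2.0e30                    # approximate mass of the Sun
GALAXY_MASS_KG = 1.5e12 * SOLAR_MASS_KG   # rough Milky Way estimate (incl. dark matter)
SEED_MASS_KG = 1.0                        # hypothetical starting mass of the AI
DOUBLINGS_PER_YEAR = 1.0                  # hypothetical growth rate

# Exponential growth: number of doublings needed to go from seed to galaxy mass.
doublings = math.log2(GALAXY_MASS_KG / SEED_MASS_KG)
years = doublings / DOUBLINGS_PER_YEAR
print(f"~{doublings:.0f} doublings, i.e. ~{years:.0f} years to consume the galaxy")
```

Under those invented assumptions it comes out to roughly 141 doublings, on the order of a century and a half; change the assumed doubling rate and the answer scales linearly, which is the point: the growth-rate assumption does essentially all of the work.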
The Last Word
Re: Re: Re: Ignorance Detector
The problem with ASI (artificial superintelligence) is not that it will spontaneously develop its own desires but that it will take disastrous (for us) actions to achieve a seemingly innocuous goal such as: