On the post: One More Time With Feeling: No, The Internet Is Not Making Us Dumber
Facts vs. problem solving
In my programming job, the internet has allowed me to focus far more on problem solving than on "fact hoarding". I do not need to commit to memory the details of every programming language or API I use. I do not need to memorize the implementation details of every algorithm or mathematical theorem I need. I can operate at a higher level, solving problems. When I need details, I can find them much faster than I ever could in any reference book.
I'm so much more efficient, and so much more able to learn new skills, now than I was before the internet.
College taught me how to learn and the internet lets me learn at a rate that college could never support.
On the post: New Yorker Decides US Has Too Much Free Speech; Dismisses 'Free Speech Extremists'
Buried the lede
You definitely buried your lede. The argument for free speech is simple. It's your last paragraph.
Allowing anyone to decide what is and isn't acceptable speech based on its inherent "value" is not far from criminalizing thought and those who disagree with the government (i.e., the ones with the weapons).
On the post: Photographer Sues Big Red, Its Employees And That One Guy Who Retweeted Something For Copyright Infringement
Retweet
Retweeting is publishing? Isn't it more like linking? You are explicitly saying, "hey, look at this thing someone else posted," just like posting a link to some news article.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Re: Re: Re: Re: Re: Re: Re: Re: Ignorance Detector
You're arguing that eventually we'll have computers that think better than we do. I actually want that to happen. It's not clear to me that it will, notwithstanding all the arguments about how it is "likely, if not inevitable". But it will or it won't happen independent of what I think. So, for me that question is moot.
Questions about what to do when that happens, and how to control such computers, if they need controlling, can be interesting and worthwhile. But my original comment was directed at the "AI will be evil" camp of people.
Assume, for the sake of argument, that super-sentience will be achieved someday. What conclusions can you draw from that? Virtually none. The "AI will be evil" people say such a thing will be like people, only MORE. And then they pick whatever characteristic they want, amplify it, and turn it into whatever scary scenario they want. It's just so much of a fairy tale that it is counterproductive.
But the thing that really irks me is that all these fairy tales are being taken as credible predictions that are leading people to spend real resources today trying to prevent fairy tales from coming true. It's a big waste driven by ignorance and fear.
If people start to think of AI as a weapon/technology too powerful to control, then they'll want to stifle work in this area for no reality-based reason. That would be the real tragedy here.
Lots of the coolest tech we have these days came out of AI research. (Speech recognition, robotics, automatic translations, economic algorithms, image classification, face recognition, search engines.) This "AI is evil" meme threatens to choke off the next wave of innovation.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Re: Re: Re: Re: Re: Re: Ignorance Detector
To deny that this will happen you have to claim either:
Or, that, given our understanding of the first (above-threshold) learning computer, we will also understand how to limit its ability to run amok.
This is a variation on your third option. It's not that brains are fundamentally different. It's that our understanding of computers is fundamentally different. Computers are our creation. We understand them to a level far beyond the level at which we understand how the brain works. So, when we create something that we believe has, almost has, or can create for itself the ability to learn better than us, we can also build in limits before we turn it on.
We do that kind of thing all the time to protect against agents we don't trust: locks, passwords, encryption, guns, fences, walls.
The argument that leads to AI panic is the argument that AI's progress will be so fast that we won't keep up, so people imagine scenarios where the world of, basically, today is faced with a hyper-intelligence that, by fiat, is endowed with vastly better abilities than we have. It's just magical thinking.
You will not find panic in any of my statements or arguments.
No, but all these stories about AI taking over are AI panic, and they are the ones grabbing headlines. My frustration is that all these AI-takeover scenarios are so unrealistic as to be simply fairy tales, yet people take them seriously, as if they're about to happen.
It's like people suddenly starting to worry that wolves will develop the power to blow our houses down, and then the media running with it, quoting "experts" who predict how soon this might happen. Still a fairy tale.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
It's even worse.
The basic requirements of AI are "computronium" (a computing substrate to run on) and energy. The first AIs will realize this, realize the nearest-largest energy source is the sun, and will abandon Earth before destroying humanity. Whew! Saved by self-interest. But wait: computronium. First they'll harvest Mercury. Then Venus, and the rest of the planets. Eventually they'll go interstellar and harvest other planets. Then they'll discover how to make a star go supernova to produce a lot of computronium (because where else do heavy elements come from?).
So someone do an exponential calculation to see how long our galaxy has before it is consumed and the AI goes intergalactic.
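Fine, I'll bite. Here's a minimal Python sketch of that exponential calculation. Every input is a made-up assumption in the spirit of the fairy tale: rough public estimates for the masses of Mercury and the Milky Way, and arbitrary doubling rates.

```python
# Tongue-in-cheek back-of-the-envelope: doublings needed to turn the
# Milky Way into computronium. All inputs are rough estimates; the
# doubling rates are pure fairy-tale assumptions.
import math

MERCURY_KG = 3.3e23                      # first body harvested
SOLAR_MASS_KG = 2.0e30
MILKY_WAY_KG = 1.5e12 * SOLAR_MASS_KG    # ~1.5 trillion solar masses

doublings = math.log2(MILKY_WAY_KG / MERCURY_KG)
print(f"doublings from Mercury to the whole galaxy: ~{doublings:.0f}")

for years in (1, 10, 100):               # assumed time per doubling
    print(f"at one doubling per {years} yr: done in ~{doublings * years:,.0f} years")
```

Exponentials being what they are, the assumed rate barely matters: only ~63 doublings cover the whole galaxy, so even a leisurely century per doubling wraps things up in a few thousand years.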
OK, now I'm scared.
I do admit a statistically significant lack of a sense of humor on this topic. But some jokes end like this: I'm only joking! (And then, in a stage whisper: or am I?)
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Re: Re: Re: Re: Ignorance Detector
The first part is almost inevitable.
Yeah, not really. But that's the argument that makes all this hogwash work. The formula is this: there's been progress, and there's been an increasing rate of progress. Ergo, ASI. ASI, ergo panic. As if, in the story about Turry, and in the article it came from, humans are reduced to mere bystanders as AI zooms past in the fast lane.
Thinking about issues humanity will probably face in the future is counterproductive?
I don't like your strawman. Let's say: neglecting issues of real, concrete, immediate consequence in favor of wringing our hands over an unlikely future dystopia is counterproductive. That's the scenario we're in.
Or are you arguing computers will never be really qualitatively different than they are now?
Qualitatively is subjective. But yes, if pressed, I do argue that. To give context, though, I consider today's computer technology qualitatively the same as it has been since ... whenever. But it's easy to argue that today's technology is qualitatively different than that of the '50s, '60s, '70s, '80s, or even '90s.
Anyway, whether you want to draw the qualitative line at ANI, AGI, or ASI doesn't really matter. What does matter is that as the capabilities of AI progress, we will not be idle bystanders. We will be creating the advances, observing the advances, and able to react to the advances.
Our reactions, though, need to be based on what actually happens or is actually about to happen, not based on wild assumptions about what might happen if a bunch of magic happens.
You can argue as much as you want that the trends point to the magic happening, but that's not the same as actually knowing how to make the magic happen.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Re: Re: Re: Re: Ignorance Detector
Cute story, but filled with the same kind of unfounded what-ifs that derail most discussions of the topic. It's a magical fairy tale.
In a way meant to be friendly, but perhaps uncharitable, I'd rephrase your comment as "The problem with this thing I made up is this other thing I made up."
The whole article from which the story was taken is an argument from weak authority. Basically, all these people he called experts opined on something you'd think they'd know so much better than us, but really they don't. But he took it as gospel and did a thought experiment untethered from reality.
The "experts" in AI are singularly optimistic about their ability to "solve" AI.
From https://en.wikipedia.org/wiki/Artificial_intelligence
'AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved"'.
Those were hands-down the experts of their day, and so wrong on this count.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Re: Re: Ignorance Detector
It is and it isn't. It may have been meant as a joke. You're sure it is, but there are people who really think like that, so I'm not that sure. Maybe the hyperbole just masks real, but unfounded, fears about AI.
Either way, my point, and my opinion, is that, intended as a joke or not, the post is not funny, because it buys into an entirely unfounded "AI will be evil" mindset.
The whole notion that humanity has to worry about AI is founded on the assumption that computers will achieve "sentience" at some point, and have some will/desire/drive/optimization function that compels them to want to supplant humans.
None of that is even remotely possible with current technology. So spending time talking about it, let alone worrying about it, is counterproductive.
There are real ethical dilemmas to be faced as algorithms take over more and more functions, but these are dilemmas we as humans must face as we choose to let machines/algorithms do things with real-life consequences. But ultimately, that's no different than the ethical dilemma people have when they work in or hire people to work in all kinds of dangerous environments.
Those issues need solving, not the completely fictitious impending AI singularity.
On the post: DailyDirt: Terminators From The Future Are Already Here..?
Ignorance Detector
Perhaps nothing brings out ignorance and fear as much as asking anyone to comment on AI.
I hope OP was just trying to be cute with his comments, but still, on a site that purports to speak sense to lemmings, it's sad to see someone jumping on the "AI is evil (because I have an imagination)" bandwagon.
No one decries databases like they do AI, yet databases are already doing more damage to humanity than AI ever has.
You have already been swallowed by and are already being digested by databases (Facebook, Twitter, Google, Yahoo, etc. usw.) the world over. Direct your fear and outrage there!
On the post: Second OPM Hack Revealed: Even Worse Than The First
Encryption anyone
Seems like the war on encryption ought to be over now. Encryption would have helped in this case. The gov't can't very well argue now that only criminals need encryption. All you have to do is say, "What, you don't like encryption? What about OPM? ... Thought so."
Whoever didn't encrypt this data was negligent at a minimum. Gov't being what it is, no one will be fired...
On the post: That 20 Mbps Broadband Line We Promised? It's Actually 300 Kbps. Enjoy!
It's a bonus
With a 300 Kbps limit and a 150 GB data cap, the bandwidth limit is a bonus feature that keeps the user from exceeding the cap. At that speed it would take about 48 days of nonstop downloading to exceed the data cap, so the user is SAFE.
Silver linings.
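For the skeptical, a minimal Python sketch of that arithmetic. It assumes "Kbps" means kilobits per second (the generous reading) and shows the cap under both the decimal and binary definitions of "GB", since the ISP's fine print could mean either:

```python
# Sanity-check the claim: time to move 150 GB at 300 Kbps.
# Assumes Kbps = kilobits per second; shows the cap under both the
# decimal (10^9 bytes) and binary (2^30 bytes) readings of "GB".
RATE_BPS = 300e3                         # 300 kilobits per second

caps_in_bits = {
    "150 GB (decimal)": 150e9 * 8,
    "150 GiB (binary)": 150 * 2**30 * 8,
}

for label, bits in caps_in_bits.items():
    days = bits / RATE_BPS / 86400       # 86400 seconds per day
    print(f"{label}: {days:.1f} days of nonstop, full-speed downloading")
```

Either way the answer lands in the 46-to-50-day range, so a 30-day billing cycle can never hit the cap.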
On the post: Chris Dodd Implies US Gov't Should Go After Wikileaks For Publishing Leaked Sony Emails
Selling out your principles
Selling out your principles is a lot like bullying: it only works if most people turn a blind eye. Selling out is tacitly accepted by almost everyone, so you can fault Dodd for doing it, but hardly blame him. I mean, think of the $$$. Sigh.
On the post: When Analyzing Cord Cutting Options, Most TV Analysts Continue To Pretend Piracy Simply Doesn't Exist
Re: It's not just what they left out, but what they stuck in...
Exactly. I'm an option D type, and I calculate my costs as $9 for Netflix only (well, $16, because I still get DVDs a few times a month). That's because I need my internet connection for work whether or not I use it to stream video.
$9 or $16 is well below the $85 they calculate. That's the number for comparison.
"Verizon wants you to construe this remark about privacy to mean that we care about privacy, and privacy is considered in a small but unprovable way as we develop new products and services for which the primary concern is how much revenue they generate. As we figure out how to wring more and more advertising revenue out of sneakily providing your eyeballs to third parties, we'll talk about protecting your privacy maybe about as well as, but not more than any other mobile service provider. We listen when our customers complain enough that our shenanigans end up in the media, and we trot out a hypothetical and teeny tiny band-aid that we can point to so we can say we're responding to them without lying but without actually making any real changes. Oh, we haven't actually done anything. That band-aid is a promise we won't be obligated to keep and we hope that when we don't keep it no one will be paying attention anymore. As a reminder, Verizon never shares customer information with third parties as part of our advertising programs, because they can figure that part out by themselves."
On the post: Maryland Council Member Kirby Delauter Admits He Was Wrong To Threaten To Sue Newspaper For Using His Name
Too specific?
It's not 100% clear we can chalk this exception up to careless typing: "in any article related to the running of the county". What about articles not related to the running of the county? Still no 1st Amendment there?
Pro-tip: the best stories are based on fact, not "I'd wager that ...".
On the post: Game Developer Deploys Interesting Sales Strategy By Telling Fans Not To Buy His Game As A Gift For Others
Proved wrong
You may disagree with the guy's strategy, but who's being insulting by calling him insulting and business-dumb, especially when a horde of six of your own readers already disagrees with you? The savvy gamers you're worried about will see a developer trying to do the right thing and realize that they can *still* buy the game if they want. Who's insulted by that? Perhaps he's had this happen before and didn't enjoy dealing with the aftermath. You don't know, because you didn't ask.
P.S. I tried to reply to every comment here with "Ditto" because, so far, I agree with every one of them, and I wanted to make the meta point that I think so many people will disagree with you that I'd agree with all the comments to follow. (The logic's not so tortured in my head...) Looks like my "ditto" spam will get moderated into the waste bin, though.
On the post: Game Developer Deploys Interesting Sales Strategy By Telling Fans Not To Buy His Game As A Gift For Others
Re: You only get one first impression
On the post: Tennessee Town Passes Policy Banning Negative Comments About The Town's Government
Re: Re: Reverse Psychology?
Can they ban sarcasm, too? Let's all just agree among ourselves that whatever we say about S. Pittsburg means the opposite. Then we can praise them all day!
On the post: Tennessee Town Passes Policy Banning Negative Comments About The Town's Government
Reverse Psychology?
S. Pittsburg is the most well-run, fiscally sound municipality in the tri-state area!