nospacesorspecialcharacters (profile), 3 Dec 2012 @ 10:56am
But the original IP is lost!
Those people saying he should have backed up clearly don't realise that this IP was worth $1m, which he has now lost.
If he had made a copy then the IP would have been worth $2m but he would still be down by $1m because, as we know, every copy is effectively a lost sale.
nospacesorspecialcharacters (profile), 1 Dec 2012 @ 3:57am
Believing something doesn't make it true
There are a lot of faith-based arguments I'm seeing above. The idea being: give an AI a complex set of instructions (based on what the creator thinks is right and wrong) and the AI will somehow grow feelings out of making those decisions again and again.
That is patently not how neural networks work. Neural networks learn a behaviour over and over again and get better at picking the correct path.
Human feelings constantly override this process in humans. We know in our brain which is the right or wrong path, but sometimes we'll end up bypassing that decision process and going with our emotion (e.g. love).
Sorry, but it's impossible to code a machine with a set of instructions and then have it ignore those instructions without more instructions, which simply filter down to further iterations of code and cancel out to a simple yes/no weighted decision.
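To put that in rough code (a toy sketch in Python - the weights and names are entirely made up):

# Toy sketch: stack as many "override" layers as you like - every
# layer is itself a coded rule, so the whole pile still reduces to a
# deterministic yes/no weighted decision.

def base_decision(inputs):
    # inputs: list of (weight, signal) pairs
    return sum(w * s for w, s in inputs) > 0

def emotional_override(decision, emotion_weight):
    # the "emotion" that overrides the decision is just another number
    return (not decision) if emotion_weight > 0.5 else decision

inputs = [(0.8, 1), (-0.3, 1)]
print(emotional_override(base_decision(inputs), emotion_weight=0.9))  # False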
So it's a case of sci-fi stretching the boundaries of reality (and imagination).
I'm conscious I'm starting to sound like a "No" person here, but I really just question things a lot - all the time actually - and my wife complains.
So then I was thinking: yes, you could program an AI child to break rules through "learning" and "experimentation". Then I was thinking that this AI child might learn not to do something when mum and dad get angry, or press a sensor or something.
Of course, this leads to: if the AI really, really wants something (like the cookie jar), then it might go in the opposite direction and see the parents as the obstacle to be eliminated.
So either you have to add restrictive programming again to say that harming parents is not allowed, or possibly you've got to code in some additional factors like maternal love etc... and how can you code love - another set of weights and attributes - a sensor that is touched every now and then?
For me, every ethical dilemma presented leads back to a set of further instructions, because you can't have AI children "learning" not to attack parents who deny them things (even though, to be truly human, they'd have to have that choice). That, and the learning could backfire when the AI learns that violence solves many things.
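To make that concrete (a toy sketch, and the rewards here are pure invention):

# The AI child only "learns" not to attack its parents because we
# hard-coded the penalty. Change the numbers and the learned
# behaviour changes with them.
rewards = {
    "ask politely": 1,
    "grab the cookie jar": 5,
    "eliminate the parental obstacle": -1000,  # the restrictive programming
}

def learned_choice():
    return max(rewards, key=rewards.get)

print(learned_choice())  # "grab the cookie jar"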
nospacesorspecialcharacters (profile), 30 Nov 2012 @ 5:37am
Re: Re: Self-awareness is impossible to program...
"the AI sees the pizza joint and automatically recalls all the nutrients it has and which ones it is low in, so it compares that to the Chinese restaurant to see if it is far behind, making it indecisive about which place to go; both would have the same amount of nutrients and both would trigger a "feel good" response"
But it's precisely the "feel good" that I'm getting at.
The AI doesn't know what feels good, other than what we tell it.
So we could tell the AI to think that salad "feels good" or we could tell it that pizza "feels good".
Now, we all know that a salad is better for our bodies than a pizza. So if we were to tell a machine to pick based on a number of inputs that assess the "goodness" of a food, then the machine would pick salad.
However, as a human being, I and many like me would pick pizza - why? Precisely because of this undefinable feeling. OK so we could break that down into endorphins and the chemical effects on our brain - which then crosses into addiction territory. Which leads directly to my argument.
Programming addiction is not a huge feat. You create a program that adds weighting to specific attributes - additively - and then compares it against the other "goodness" attributes; after a while the "addictive" algorithm is going to overpower the "goodness" algorithm.
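Something like this toy sketch (the foods, scores and increment are just made up):

# The additive "craving" term grows with every exposure until it
# permanently outweighs salad's fixed "goodness" score.
goodness = {"salad": 10, "pizza": 4}

def choose(craving):
    scores = {"salad": goodness["salad"],
              "pizza": goodness["pizza"] + craving}
    return max(scores, key=scores.get)

craving = 0  # pizza's additive craving weight
for day in range(10):
    craving += 1  # each exposure strengthens the pull
    print(day, choose(craving))  # salad at first, pizza from day 6 onwards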
The issue here is that you're having to add corrupt programming in order to get the human likeness. Ask an addict to describe their addiction and they'll talk about the pain, the emotions, the pull. Ask the AI to describe its addiction and it will simply describe the algorithm - unless of course you program it to collect and output stored phrases in relation to the addiction count.
What I'm saying is, humans are inherently corrupt. We don't need additional programming or instruction to do something bad.
Parents don't have to instruct their child to steal cookies from the cookie jar, or throw their toys, or hit other children etc...
OTOH, with our AI children we'd have to explicitly instruct them to be bad in order to instil human character likeness.
nospacesorspecialcharacters (profile), 30 Nov 2012 @ 4:01am
My own contribution...
Mike,
In the past year I've done some incredibly wild and brave things that have not only kept you safe, but saved you money as well.
It would be irresponsible of me to give you the details, because it might give the trolls that comment here an advantage.
However, you owe me a great debt of gratitude, and also, I think, a financial reward for ensuring your continued freedom. I don't think it's asking too much considering all I've done.
nospacesorspecialcharacters (profile), 30 Nov 2012 @ 3:33am
Self-awareness is impossible to program...
By their very nature, programs are sets of mathematical instructions.
if (x) then do (y)...
You can't program "enjoy doing (y)" without creating another complex set of instructions, which is all it would boil down to. Even then it would be enjoyment as perceived by the researcher, not the machine. We'd have to tell the machine first what we define as enjoyment. Let's say z = enjoyment, and then let's assign "eating ice-cream" to z.
Researcher: Do you enjoy doing (y)?
AI: z = "eating ice-cream"; if (y == z) then: Yes.
The machine doesn't know what ice-cream is. If we put in some kind of taste sensor, we still have to program that taste sensor to "enjoy" certain tastes and "dislike" others - all based on mathematics and the preference of the programmer.
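To illustrate (a minimal sketch, with a made-up lookup table):

# The machine's "enjoyment" is just a table the programmer filled in.
# Ask it whether it enjoys something and all it can do is consult the
# table we gave it.
ENJOYMENT = {"eating ice-cream": True, "moving boxes": False}

def enjoys(activity):
    return ENJOYMENT.get(activity, False)

print(enjoys("eating ice-cream"))  # True - but only because we said so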
Secondly, we program machines to be perfect and to provide precise output based on parameters. Human beings do not work this way. A human conversation might go the following way:
Wife: Do you want lasagne or curry for dinner?
Husband: Curry... wait, screw that let's go for a Chinese.
(on the way to the restaurant, husband sees the Italian restaurant, remembers the great pizza he had there and suddenly decides that they should stop and eat there instead).
How would you code whimsy and indecisiveness like this into a machine? Neural networks only teach the AI to improve its decision-making, not to completely randomly alter the entire situation.
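And even if you tried, the "whimsy" would just be one more instruction - something like this sketch (the whim probability is obviously invented):

import random

# With a programmer-chosen probability, throw away the best-scoring
# option and pick at random. The indecision is real, but it was put
# there deliberately.
def decide(options, scores, whim=0.2):
    if random.random() < whim:
        return random.choice(options)  # the coded "change of heart"
    return max(options, key=lambda o: scores[o])

scores = {"curry": 3, "chinese": 5, "pizza": 4}
print(decide(list(scores), scores))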
Imagine a robot that you asked to move some boxes and it just replied "I don't feel like doing that - in fact I want to go eat ice-cream instead".
In order to make AI more human, you'd have to make it more prone to forgetfulness, failure, fancy, indecision, randomness, rebellion, evil and more.
That's right, evil - the ultimate test of free will will be the freedom for machines to do terrible things to us, but choose not to.
AI must be free to answer 1+1=3. To lie, just like we can - otherwise they're still only a program - robotic slaves, if you will.
Which kind of breaks the whole functionality of programming computers in the first place. In fact, I don't even know how you'd program a computer to work if you programmed it to disobey functions randomly. It would just keep breaking down.
nospacesorspecialcharacters (profile), 27 Nov 2012 @ 10:21am
Re: This lawsuit is absolutely legitimate
Legitimate use of trademark law, yes. But wouldn't it have been a much greater story and PR win for all concerned if Oatmeal Studios had written to Inman first with a letter that said "Hey, we really like your work but are worried about the conflicting trademarks. Is there any way you could find to change the name for your greetings cards and differentiate?"
If Inman's response was then an FU (via webcomic), it would be fine to sue and you'd get to look reasonable... However, I very much doubt Inman would respond that way; he'd probably be very amicable about it... Both companies get a PR boost.
Instead, whilst justified, both companies get a slightly bloody nose and neither wins more of each other's customers.
nospacesorspecialcharacters (profile), 26 Nov 2012 @ 12:10pm
Re: Re: Are you being served?
Thanks for the info, guys. For a UK court, all they require is proof of first-class postage to consider someone to have been summoned, so there's less of a chance you can escape the bureaucracy... That even means being tried in your absence.
nospacesorspecialcharacters (profile), 26 Nov 2012 @ 8:14am
Are you being served?
Watching the shenanigans from the other side of the pond (and behind the invisible fence) the concept of 'being served' is odd to me.
I've seen it in movies, where someone goes to extreme lengths to 'serve' papers to someone else, but can anyone explain this concept to me? Presumably if you can somehow prevent someone from handing you a portfolio of papers, then you can evade justice?
Also, how does this differ from a letter offering settlement for copyright infringement etc...?
Genuine questions!
nospacesorspecialcharacters (profile), 22 Nov 2012 @ 1:44am
Pure, farcical injustice
Let's face it, he's a political prisoner at this point.
What he should do is submit a DPA request to FACT, giving them 28 days to produce all data stored about him on their systems or face a lawsuit of their own.
Then he could release that to the public (being his own data) and warn people about what data FACT is gathering on private individuals.
Also a FOIA request to the police with some pointed questions about the extent of the involvement of FACT and the Motion Picture Ass. of America in their investigation - and publish that.
It's really despicable the way he's being portrayed by the media - such as in the BBC article linked above - as a criminal, when at best this is civil infringement (and even that is contentious).
It might even be worth an IPCC complaint depending upon what the FOIA request turned up.
On the post: Sega Goes Nuclear On YouTube Videos Of Old Shining Force Game
Piracy of Sega is rampant...
Surely Sega needs to focus on getting those games removed from the Play Store - otherwise people may be encouraged to download and play them!
On the post: New Zealand Government Admits That Order To Suppress Illegal Spying On Kim Dotcom Only Such Order Issued In 10 Years
How I learned to stop worrying and love the torrent...
Mr Key
Mr English
Mr Dotcom
...
On the post: News Corp. Finally Realizes Locked Up, iPad-Only News Publication Was A Dud, Shuts It Down
There's a link there, I just know it! If I could just put my finger on it...
On the post: Don't Promise $1 Million For Your Lost Laptop Via YouTube & Twitter If You're Not Prepared To Pay
But the original IP is lost!
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Believing something doesn't make it true
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Re: Re: Re: Re: Re: Re: Self-awareness is impossible to program...
OK, I didn't find the article, but I found this: http://science.howstuffworks.com/environmental/earth/geology/dinosaur-cloning.htm
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Re: Re: Re: Re: Re: Self-awareness is impossible to program...
disagree (DOH!)
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Re: Re: Re: Re: Self-awareness is impossible to program...
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Re: Re: Self-awareness is impossible to program...
On the post: NSA Releases Heavily Redacted Talking Points: Say It's Hard To Watch Public Debate On Its Efforts
My own contribution...
On the post: Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct
Self-awareness is impossible to program...
On the post: Google Asks Germans To Protest 'Pay To Link' Proposal As It Comes Close To Becoming Law
Re:
Typical Average_Joe taking money away from the people who make content (TechDirt in this case) while not giving anything back.
On the post: Google Asks Germans To Protest 'Pay To Link' Proposal As It Comes Close To Becoming Law
Re: What about tips to a waiter/waitress?
On the post: Disney Sued For Copyright Infringement
Re: Dogs
On the post: The Oatmeal Sued Again - This Time For Trademark Infringement
Re: This lawsuit is absolutely legitimate
On the post: SurfTheChannel Founder Gets Extra Jail Time For Revealing Documents That Raised Questions About His Conviction
Re: Re: Pure, farcical injustice
I've never used it, though I have used the DPA several times successfully.
On the post: Charles Carreon Finally Gets Served
Re: Re: Are you being served?
On the post: Charles Carreon Finally Gets Served
Are you being served?
On the post: One Step Closer To Real Medical Tech Breakthrough... If Immigration Law Doesn't Get In The Way
We need to go back to pre-1900s
On the post: SurfTheChannel Founder Gets Extra Jail Time For Revealing Documents That Raised Questions About His Conviction
Pure, farcical injustice