On the post: EARN ITs Big Knowledge 1st Amendment Problem
Re: Re: There's more than one kind of knowledge
I'm not saying the "platform owners" should say that. I'm saying EVERYBODY should say it.
The problem with asking them to come up with suggestions is that they WILL. And they will claim that their suggestions are workable when they're actually not. And they'll claim that their suggestions don't force disabling security measures when they actually do. And they'll claim that their suggestions don't put people at risk when they actually do.
They will never come up with any suggestions that don't have those problems, because that is not possible. However, every time you manage to argue away one suggestion, they'll reword things a bit, come up with a slightly modified one, and claim this one is the fix. They can do this forever.
... and their message to people who are not closely engaged with the issue will be that they've tried and tried to be reasonable and address the sane people's concerns, but the sane people are unreasonable and hate compromise and won't accept anything at all.
It is incredibly bad strategy to adopt any message that suggests there could be an acceptable way to do what those people want, because there is not.
On the post: EARN ITs Big Knowledge 1st Amendment Problem
There's more than one kind of knowledge
This article and the linked thread seem to be centered on the kind of "knowledge" where you know "this piece of content over here in this file is child porn".
But there's another kind of knowledge, of the form "I run a social site with 100,000,000 users. It is a practical certainty that there's child porn going through my system." It's not just "I'm ignoring a real possibility". It's "I'm sure there's some here; I just don't know exactly where it is". Especially after the first few unrelated cases in which you find some.
That kind of thing really isn't captured by the normal lay idea of "recklessness". And if it falls within some legal definition of recklessness, then it's still at least an extremely strong form, way out near the boundary with actual knowledge... which is probably a boundary that can move given the right kind of bad-law-making case.
I think that the "EARN-IT" people are hoping to be able to go after the second kind of knowledge, and I'm afraid that Smith may not be protection enough.
A bookseller in 1958 who happened to have one "obscene" book could reasonably argue that they didn't know what was in it and also didn't know, or even have any reason to believe, that there was anything like that in their stock at all.
A large social site in 2022 knows there's some child porn in the mix somewhere. I suspect that the proponents are hoping that they can use that as enough scienter to get around Smith completely.
It's true that it's still just as impractical for a site to find every single bit of child porn as it would be for a bookseller to find every "obscene" book... but they can still push for the idea that the First Amendment allows them to require a site to do "everything reasonably possible". Not just because it's supposedly a "best practice". Not just because not doing it would risk not finding child porn. Because the site has actual knowledge that there's a problem on their particular system.
That means they can still try to demand scanning, whether via state law or via some other path. Scanning, of course, means no effective encryption. They will try to get scanning mandates in through the back door even if they're not in the bill, and given the subject matter I'd be really worried that they'd win in court.
The right answer, of course, is "Yeah, I'm sure there's some child porn on every major site. Tough". But nobody seems to have the guts to say that.
On the post: Eleventh Circuit Smacks Georgia Sheriff Around For Posting 'Don't Trick Or Treat Here' Signs In Sex Offenders' Yards
Re: More than one way to look at data
Of course they're unnecessary.
Typical interaction on Halloween: kids you don't know, from blocks away, knock on your door and you throw some candy into their bags. The most interaction might be "Who are you? Good job on the costume!". There are dozens of other kids parading by, they often travel in groups, and most of the time these days even the older kids have their parents with them.
Suppose you were the biggest child molester that ever child molested. How exactly would you turn that situation into a molestin'?
On the post: Police Union Sues Kentucky City's Mayor, Claiming New No-Knock Warrant Ban Violates Its Bargaining Agreement
You know what would make things safer?
Not raiding people for possessing or dealing in random substances, that's what. No raid, nobody gets shot. Just repeal the fucking drug laws already.
On the post: Basecamp Bans Politics, An Act That Itself Is Political
Who (the fuck) are these people and why is everybody talking about them all of a sudden?
From all the stuff that's been plastered all over everything I read, I have gleaned the information that they're about a 60-person company in Chicago, and that they had something to do with inflicting Ruby on Rails on the world.
Somehow I'm having trouble caring about them or anything they do...
On the post: UK Child Welfare Agency's Anti-Encryption 'Research' Ignored Everything It Didn't Want To Hear
Why? Lying works in politics.
"If the UK government wants support for its anti-encryption efforts, it needs to do better than basically lying to people."
First you lie to yourself, and convince yourself that some single thing is The Most Important Thing. Then you come up with a bunch of Things to Do, and obviously they Must Be Done if they even might have any effect at all on The Most Important Thing. Even if none of them might have any effect, you still have to do them because Something Must Be Done.
And it doesn't matter how much damage you do elsewhere, because no other issue is The Most Important Thing.
Then you lie to everybody else. You exaggerate, you make wild accusations, whatever. If you want to ban mayonnaise, you say that mayonnaise is radioactive. Which you can justify because after all you're dealing with The Most Important Thing here.
And, by the way, anybody who says anything that contradicts your lies, or even doesn't promote your view, is scum. It is Not OK to say that mayonnaise is not in fact radioactive. After all, true or not, the idea that mayonnaise is radioactive might actually convince somebody to ban it, and that's The Most Important Thing.
For these people, protecting children from any exposure to sexuality, especially in relation to adults, is The Most Important Thing. If those same children end up impoverished, oppressed, or dead, well, sorry, that's just not as Important.
On the post: When It Comes To Qualified Immunity, Where Your Rights Were Violated Matters More Than The Fact Your Rights Were Violated
So....
While this QI bullshit in the US is clearly based on egregious judicial activism by the Supremes (and after that a lot of apparently intentional inactivism), let's not forget that Congress could eliminate it at any moment, has had over 50 years to do it, and hasn't done so.
And I'm not a lawyer, but I suspect that individual states could do at least something about it with respect to those officers who operate under their own authority. They haven't done it either.
It seems like there's plenty of blame to go around for this.
Basically everybody in any authority in government is terrified that the world will burn down if cops have to follow rules. Or they think their constituents are. So the dereliction of duty is pretty universal.
On the post: It's Long Past Time To Encrypt The Entire DNS
Sorry, no.
There are two issues here: integrity and confidentiality (aka privacy). These systems are not the answer for either one.
Integrity is best solved end-to-end using DNSSEC. It's absolutely stupid to try to do it using hop-by-hop cryptography; you're trusting every hop not to tamper with the data.
... and just encrypting DNS traffic doesn't solve confidentiality either. It doesn't even improve confidentiality in the large.
The adversary model is incoherent. If your ISP is spying on your DNS traffic and you encrypt that traffic away from it, the ISP can just switch to watching where your actual data go. Yes, that may be slightly more costly for the ISP; if it weren't, it probably would have watched the data flows in the first place. But it doesn't follow that the added cost is enough to make it stop. In fact, it probably is not.
All the proposals encourage centralization, which means that when (not if) some resolver that a lot of people are trusting goes bad, the impact is huge. Instead of a relatively large number of relatively survivable events, you create a few massive catastrophes.
What this is fundamentally trying to be is an anonymity system (I guess a PIR system). Anonymity systems are HARD. Much, much harder than point-to-point cryptography. There are a million correlation and fault-induction attacks, and in the case of DNS there are a million players in the protocol as well. There's been absolutely zero analysis of how easy or hard these methods may be to de-anonymize using readily observable data. They seem to be being designed by people who don't even understand the basics, and think they're helping when they charge ahead blindly.
... not to mention that it's just psychotic to tunnel a nice simple cacheable protocol like DNS over a horrific tower of hacks like HTTP.
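To make the end-to-end point concrete, here's a minimal sketch in Python (assuming the dnspython library and a made-up zone name) of what it looks like when the client checks the DNSSEC signatures itself instead of trusting the transport:

```python
# A minimal sketch (assuming the Python "dnspython" library and the
# hypothetical zone name below): fetch a zone's DNSKEY RRset and its
# RRSIG records through any resolver, then check the signatures locally.
# The integrity check is end-to-end; no hop in the middle is trusted.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdataclass
import dns.rdatatype

zone = dns.name.from_text("example.com.")   # hypothetical zone
resolver = "8.8.8.8"                        # any resolver will do

req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
resp = dns.query.tcp(req, resolver, timeout=5)  # TCP avoids UDP truncation

dnskey = resp.find_rrset(resp.answer, zone,
                         dns.rdataclass.IN, dns.rdatatype.DNSKEY)
rrsig = resp.find_rrset(resp.answer, zone,
                        dns.rdataclass.IN, dns.rdatatype.RRSIG,
                        dns.rdatatype.DNSKEY)

# Raises dns.dnssec.ValidationFailure if anything along the way tampered
# with the records. (This only checks the RRset against the zone's own
# key; a real validator would also chain that key up to the root trust
# anchor.)
dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
print("DNSKEY RRset verified end-to-end")
```

Note that nothing in that check depends on which resolver or which path the packets took, which is exactly what "end-to-end" buys you.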
On the post: The EARN IT Act Creates A New Moderator's Dilemma
Re:
... oh, and even if you weren't a company with any significant infrastructure, they could also come after you for providing software for P2P or other decentralized solutions. "Protocols, not platforms" only works if somebody's allowed to provide the software to speak the protocol...
On the post: The EARN IT Act Creates A New Moderator's Dilemma
Hmm. It actually may be worse than that, because it appears to apply beyond what you'd think of as "platforms".
The recklessness and "best practices" requirements are applied to all providers of "interactive computer services". The definition of "interactive computer service" is imported by reference from 230. That definition is:
The term "interactive computer service" means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
The part about "system... that enables computer access" sweeps in all ISPs and telecommunication carriers, as well as operators of things like Tor nodes. And "access software provider" brings in all software tools and many non-software tools, including open source projects.
Under 230, those broad definitions are innocuous, because they're only used to provide a safe harbor. An ISP or software provider is immunized if it doesn't actually know about or facilitate specific content. No ISP and almost no software provider has any actual knowledge of what passes through or uses its service, let alone edits the content or facilitates its creation, so they get the full safe harbor, with minimal or no actual cost to them. And anyway nobody has been after them on 230-ish issues, so including them doesn't hurt.
Under EARN-IT, those same definitions would be used to impose liability, so now those parties actually get burdens from being inside the definition. That's worse than a repeal of 230. It doesn't just remove a safe harbor; it opens an avenue for positive attack.
This commission could decide that it's a "best practice" for ISPs to block all traffic they can't decrypt. Or it could decide that it's a "best practice" not to provide any non-back-doored encryption software to the public, period.
Or, since those might generate too much political backlash at the start, it could start boiling the frog on the slippery slope by, say, deciding that it's a "best practice" not to facilitate meaningfully anonymous communication, effectively outlawing Tor, I2P, and many standard VPN practices.
Then it could start slowly expanding the scope of that, possibly even managing to creep into banning all non-back-doored encryption, without ever making any sudden jump that might cause a sharp public reaction.
Back on the platform side, over time the rules could easily slide from the expected (and unacceptable) "best practice" of not building any strong encryption into your own product, to the even worse "best practice" of trying to identify and refuse to carry anything that might be encrypted. Start by applying it to messaging, then audio/video conferencing, then file storage... and then you have precedents giving you another avenue to push it all the way to ISPs.
On the post: Cambridge, Massachusetts Passes Ban On Facial Recognition Tech Use By Government Agencies
"Ban", eh?
That's a pretty lame excuse for a ban.
There is no reason that private surveillance camera users should be allowed to have the kind of automated, mass face recognition they're talking about "banning", any more than government users. They're at least as likely to abuse it and even less accountable.
Nobody should be trying to connect names or any other information to any person who just enters a place where a camera happens to be pointed. Nor should anybody be using the video/images from surveillance to build any kind of face database or any other kind of database.
Only in the US would people miss the obvious fact that the impact is the same no matter who runs the system.
On the post: Lindsey Graham's Sneak Attack On Section 230 And Encryption: A Backdoor To A Backdoor?
I'm having trouble buying the idea that anybody at all thinks the phrase "child porn" carries any implication, or even suggestion, of legality. It's the most famously illegal thing that exists on the Internet.
As for moderation, I will bet that almost all references to "child porn" on the Internet are in text that condemns it and/or discusses what to do to stop it. And if the pedos are in fact openly using the phrase "child porn" all over the place, what happens when they start calling it "CSAM"?
On the post: Lindsey Graham's Sneak Attack On Section 230 And Encryption: A Backdoor To A Backdoor?
"CSAM"?
What's the actual difference between "CSAM" and child porn, and why is it important to make the distinction? Seems like another random pointless acronym being thrown around and another random pointless terminology change.
On the post: London Police Move Forward With Full-Time Deployment Of Facial Recognition Tech That Can't Accurately Recognize Faces
Re: Re: creative makeup
I happened to be playing with the AWS Rekognition demo the other day, and I fed it a bunch of makeup jobs from the CV dazzle site, as well as various other images with "countermeasures" from around the Web.
Given a nice clear picture, it found every single face and every single feature on every face. It also did a good job of identifying age, sex and mood, right through some pretty extreme makeup. Try it out. It's available to the public.
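For reference, the demo is a thin front end over the DetectFaces API. A minimal sketch (assuming the boto3 SDK, configured AWS credentials, and a hypothetical local test image) of the same face/feature/age/sex/mood analysis:

```python
# A minimal sketch (assuming boto3, configured AWS credentials, and a
# hypothetical local test image) of the DetectFaces call behind the demo.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("dazzle_makeup_test.jpg", "rb") as f:   # hypothetical image file
    image_bytes = f.read()

resp = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in resp["FaceDetails"]:
    age = face["AgeRange"]
    # Emotions come back scored; report the highest-confidence one as "mood".
    mood = max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]
    print(f"age {age['Low']}-{age['High']}, "
          f"sex {face['Gender']['Value']}, mood {mood}")
```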
The problem with the countermeasures is that you never know whether the other guy has out-evolved you.
By the way, the good thing about Rekognition was that it seems to be crap at actually identifying faces out of a large collection.
They have a celebrity recognition demo, and it did very poorly on pictures of lots of people who are in the headlines... including people who ARE in the database. It spotted Marilyn Monroe in one of her really iconic shots, but not in another perfectly clear shot that it presumably hadn't been trained on. Same thing for Einstein. Turning to the headlines, it misidentified Alexandria Ocasio-Cortez and Greta Thunberg as random minor celebrities I'd never heard of. In turn it identified random minor celebrities, like members of current boy bands, as different random minor celebrities. It does well on heads of state, and both new and very old pictures of Elizabeth II worked. It may also be OK on Really Big Stars of Today (TM). But that's about it.
So I assume it won't really identify a random picture as belonging to somebody in a collection unless said collection has a lot of good, similar pictures of that same person.
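That experiment is easy to reproduce against the underlying RecognizeCelebrities API. A minimal, self-contained sketch (same assumptions: boto3, AWS credentials, and a hypothetical test image):

```python
# A minimal sketch (assuming boto3, AWS credentials, and a hypothetical
# test image) of the RecognizeCelebrities call behind the celebrity demo.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("headline_photo.jpg", "rb") as f:       # hypothetical image file
    resp = client.recognize_celebrities(Image={"Bytes": f.read()})

# Matches come back with a name and a confidence score...
for celeb in resp["CelebrityFaces"]:
    print(f"match: {celeb['Name']} ({celeb['MatchConfidence']:.0f}%)")

# ... plus the faces it detected but couldn't put a name to.
print(f"{len(resp['UnrecognizedFaces'])} unrecognized faces")
```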
On the post: Time Magazine Explains Why Section 230 Is So Vital To Protecting Free Speech
Re: Re: Re: Re: I have great hopes for the repeal of 230...
In a peer to peer system, you bring your own, and you pay for it because you want to participate. Yeah, somebody has to sell it to you, but the equipment and software are general purpose, you can't tell what any individual is using them for, and anybody can make them.
If necessary, that can be extended to the entire communication infrastructure, but in fact we're not talking about the IP layer of fiber and routers here. We're talking about application layer overlays that can clearly be done peer to peer. Facebook and Google are not infrastructure.
On the post: Time Magazine Explains Why Section 230 Is So Vital To Protecting Free Speech
Re: Re: Re: Re: I have great hopes for the repeal of 230...
What I'm saying is that trying to make a profit will prevent them from properly providing the service. It has nothing to do with what they "should" or "should not" do. It's simply not possible to make a buck providing an unattackable service.
On the post: Spectrum Customers Stuck With Thousands In Home Security Gear They Can't Use
Re: Re: others
"... ever bought an expensive cell phone that was locked in to a single carrier?"
No. That is, not unless I was absolutely sure I could unlock it without the carrier's help or permission. I've never been wrong about that.
Neither should anybody else.
"or an expensive android phone where software updates ceased after 1-2 years?"
No, because I've never bought one I couldn't load a custom ROM on.
I have been fucked in 3 to 5 years because of proprietary binary blobs, though. That shit should be illegal.
In fact, it should be illegal to distribute any software without source code. That includes firmware and other software bundled with hardware. It should also be illegal to distribute hardware without full register descriptions, and all other information necessary to write a driver supporting all of its features. And if you have any other "internal" documentation, go ahead and throw that in too.
No exceptions, and fuck your "trade secrets".
And if locking something down so that it will only load signed software is legal at all, there need to be some extremely heavy, legally binding regulations on the conditions under which it is allowed. That definitely has to include the ability to update software that's gone out of support. In most cases, it should probably also include the ability for the owner of any hardware to take total control of all the software that runs on it.
People shouldn't be tolerating this kind of abuse any longer. Not only are we suffering from wasteful obsolescence, and not only are enormous resources constantly wasted by intentionally crippled functionality and intentionally hindered interoperability, but there are massive unfixable security problems in all the shit software and abandonware that's being shoveled out.
Meanwhile, we should be poisoning the market for this crap by mocking anybody who opts in without being absolutely forced. In the specific case of home control, there were perfectly good open alternatives that these idiots could have used instead.
On the post: California Governor Signs Bill Banning Facial Recognition Tech Use By State's Law Enforcement Agencies
Good first step. Now ban all use of it by everybody. There's nothing magically different about state surveillance.
On the post: Phew: EU Court Of Justice Says Right To Be Forgotten Is Not A Global Censorship Tool (Just An EU One)
Re: Re: Re: Re: Re: Re: Re: Re:
I need to correct that slightly. That news site just rejected the name, and I got a message saying "your screen name has been rejected; choose a new one" or something nonspecific like that. I only inferred that they wanted something that looked like a "real name".
On the post: Phew: EU Court Of Justice Says Right To Be Forgotten Is Not A Global Censorship Tool (Just An EU One)
Re: Re: Re: Re: Re: Re: Re:
I use the name only for commenting on places like this. I have a couple of aliases, although I don't use more than one on the same site. You won't find any of them on my birth certificate. Isn't that technically what a sock puppet is?
I use the name to make it clear to the reader that I'm not associating the comments with my "real world" identity.
Amusingly enough, one news site decided it didn't like the name because it looked obviously fake, and made me choose one that looked like a "real name". The one I chose wasn't, of course, my actual "real name". I can't imagine what they think they're accomplishing with that nonsense.
By the way, although I take strong stances and try to shake up assumptions, I do not write comments that I don't believe, nor do I write comments just to upset people.
I really don't understand what pissed people off about that one, since I would think pretty much everybody would agree with it if they thought for 15 seconds. But maybe it touched some taboo or another. My first guess would be the part about the US Constitution being poorly written.