On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re: Re: Re: Re: Not so funny now is it?
That is one step to stopping the speech.
Not helping to spread it yourself (ie, not hosting it on your website) is another step.
Not helping to support those making the speech (ie, not buying products from them) is another step.
You need all of these. Handing over millions to someone while telling them that they're factually incorrect isn't really going to do much to stop them. Telling them personally that you think they're wrong while helping spread their harmful ideas to millions more certainly isn't helping anything either.
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re: Re: Re: Re: That feeling of ambivalence
So are you just posting irrelevant statements, or did Cloudflare become a government at some point and I missed it?
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re:
"'Deplatforming' a user from a site like Twitter is one thing; booting the entire platform from a service like Cloudflare is an entirely different ballgame. That leads into the broader question being asked here: At what point does a decision from a company like Cloudflare to boot a site like 8chan from the company’s service become legitimate censorship?"
You have that exactly backwards IMO.
There's a LOT of people I never talked to again once I dropped Facebook. That's the only platform they use, and if I'm not on that platform, I have no way to communicate with the people who are. It's a closed ecosystem.
But booting someone off Cloudflare? No big deal. Basically the entire goal of Cloudflare is that the end user can't tell whether you're using it or not; the website just works. Build a website, then go Cloudflare, then get booted from Cloudflare, self-host for a little while, then migrate to another cloud provider...to your users, your site might get a bit faster or a bit slower through those transitions, but it will still work, it will still be accessible, they can still read your speech without doing anything different. If you do it well, they won't notice a single thing changing. A website getting booted from a CDN like Cloudflare is far less damaging than getting booted off a quasi-public platform like Twitter or Facebook. You get booted from Facebook or Twitter...your page is gone, your user connections are gone, your historical data is gone, everything gets purged and you aren't even allowed to start over.
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re:
"The tricky part, of course, is defining what is "harm." There can be negative actions that do not bring about harm. There can be positive actions that do bring about harm. Harm can include mental harm, such as fear or loathing (and not just in Las Vegas). And actions that could be harmful in the presence of one person could be perfectly benign in the presence of another. Not to mention that I might think someone else is being harmed even though they do not feel that way. Where lies the boundary between actual harm, and merely disagreement?"
That is indeed the difficulty...and I would add one other consideration, as in addition to people being harmed even though they don't feel that way, there are plenty of people who will argue that they are being harmed even when they are not.
Although...I suppose it could also be argued that the mere fact that you are considering taking some action in response implies that you are being harmed in some way. Is being annoyed "harm"? Certainly not physically, but mentally? How do we draw that line?
I think a better question would be: "Is it harming me more than it is helping them?" But even that is tricky, and I think it ought to be weighted so that it is closer to "Would it harm a reasonable person more than it is helping them?". If you work night shift, you're the one outside of the average and it is more reasonable for you to invest in earplugs than to ask all of your neighbors to not mow their lawns during the day, regardless of how unnecessary that lawn mowing might ultimately be. Unless they're running that mower for hours every day.
But then there must be a strong component of "what is typical in this society"...but unfortunately then you get into race/class/ethnicity issues, as what is normal for you may not be normal for the family next door...and what is no big deal to you might be a significant harm to them.
There is no rule which can be applied, there is no algorithm which can determine the solution...what is required is a good dose of compassion and empathy. You need to understand both sides of the issue, not only your own.
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re:
...at which point Cloudflare would go bankrupt and someone else would replace them. You realize that Cloudflare isn't a government, right? A lot of people in this thread talking about free speech rights don't seem to get that part. They aren't locking people up at gunpoint, they're just refusing to speak things that they don't want to speak. They aren't taking money at gunpoint to keep themselves in business either; if you don't want to support them, then don't.
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re: Re: That feeling of ambivalence
So, are you going to head down to your nearest Nazi meeting and help them hand out flyers? Are you going to stand on the street corner with a megaphone screaming their message for them? Because you seem to be saying that Cloudflare should do exactly that.
There's a difference between actively censoring a message and just not helping to spread it further.
On the post: Cloudflare Removes Warrant Canary: Thoughtful Post Says It Can No Longer Say It Hasn't Removed A Site Due To Political Pressure
Re: Re: Not so funny now is it?
No, the reason it exists is because there is no single entity that can be trusted for all eternity with power over what ideas people may express. Every time that power has existed, it has been abused.
Your suggestion empowers Nazis by giving them one more reason to defend their speech, while disempowering the rest of us by giving us one more reason not to fight back. Stopping that speech IS the correct response, it's just not a response that we can trust to be taken responsibly by the government.
On the post: Whistleblower Accidentally Demonstrates How Much Of The TSA's Security Efforts Is Pure Theater
Re:
Very true, and I think it's also worth pointing out that piercing that illusion is not a binary condition. Someone who seriously intends to hijack a plane is probably researching those scanners and screening policies more than the average citizen, and a large terrorist organization probably wouldn't mind if the TSA still caught 90% of their attempts -- they can just send twenty people. You think this whistleblower changes anything after they've been seeing people sneaking knives and hacksaws and loaded firearms and thermite and igniters through these kinds of screenings for YEARS already? By the time "everyone knows" it's useless, the few people who matter likely suspected it was sufficiently useless for quite a long time.
The reason we haven't seen another successful hijacking recently is because nobody with any notable level of resources has tried. Or if they have, their plans did not reach the point of actually getting to an airport.
On the post: Whistleblower Accidentally Demonstrates How Much Of The TSA's Security Efforts Is Pure Theater
Re: Re: Listen up sheep
It doesn't matter how they FEEL about it, it WILL slow them down. Then they have to either pay for more guards (and they never like paying for things) or slowly dismantle one part of the system to keep another part running (as described in this article) -- and that may include dismantling the scanners if they decide that the opt-outs are the least important thing to be dealing with.
On the post: Having Learned Absolutely Nothing From The Failures Of FOSTA, Senators Graham & Blumenthal Prep FOSTA 2.0
Re: 'Never let a good tragedy/victimization go to waste'
The other thing to keep in mind is that these "best practices" are being written and required by the same government who has proven multiple times that it is unable to keep this kind of filth off THEIR OWN networks. So either they can't obey their own best practices, or those best practices don't actually do anything to help.
On the post: Insanity (AKA Copyright Statutory Damages) Rules: Cox Hit With $1 Billion (With A B) Jury Verdict For Failing To Magically Stop Piracy
"Come back with a warrant?"
So...I've got some friends on Cox, who get disconnected every couple months, and they call up customer service and give some story about "Piracy? What's that? Secure Wifi? I don't know what any of that means!"...and then they get reconnected. Meanwhile, there's people who download CONSTANTLY on Verizon and have never heard a word about it. So I assume Verizon's policy is essentially "Come back when you've got a court order"?
It says here that a large part of the problem is that Cox did not obey their own policy for dealing with repeat infringers. Sounds like they might not have lost if their policy was "That's not our problem". They chose to become an enforcement agency, and they got sued for failing to do that job well enough. That's what you get when you try to help a group of crooks like the RIAA...
On the post: Insanity (AKA Copyright Statutory Damages) Rules: Cox Hit With $1 Billion (With A B) Jury Verdict For Failing To Magically Stop Piracy
Re:
I figured they wanted a jury verdict because nearly every living American despises the cable companies and would be eager to exploit this opportunity for revenge. I'm sure the judge hates Cox just as much, but he's got a bigger obligation to remain "professional".
Granted, most of us despise the RIAA too, but they have less name recognition.
On the post: San Francisco Amends Facial Recognition Ban After Realizing City Employees Could No Longer Use Smartphones
I fail to see the problem
"the SF legislature has already amended its ban to allow city use of smartphones with biometric security features"
"Municipal agencies are once again allowed to procure devices that utilize facial recognition tech as long as they're "critically necessary" and there are no other alternatives."
Considering that there are MANY ways to lock and unlock a phone, and facial recognition is certainly not the most secure option...did they add the "critically necessary and no other alternatives" exception and then add a SECOND exception for the "smart""phone" "security" garbage?
Typical of both government and corporations these days though...ram through a half-baked measure with insufficient analysis, then ram through some further amendments once you realize how thoroughly you've screwed yourself. Take a principled stance, until it gets marginally inconvenient, and then those principles get thrown right out the window....
On the post: Tennessee Deputy Who Baptised An Arrestee And Strip Searched A Minor Now Dealing With 44 Criminal Charges And Five Lawsuits
Re: Re: How many Investigations before you are benched?
And the desk, being part of the police department, surely takes priority over any of those other plebs!
On the post: Trump Administration Demands An End To Strong Encryption While Being Exhibit A For Why We Need It
Re: Re:
...compared to America, where I pay a few hundred a month, in addition to what my employer pays, for private health insurance that I've never once been able to use? I call a doctor and say I need a flu test or I need to get an infection looked at, and they tell me "next available appointment is in eight months." Utterly useless. So I pay a few thousand a year for insurance purely so I don't go bankrupt if I'm in some catastrophic accident, and then I pay a few hundred again out of pocket any time I have any actual healthcare needs because that's the only way to get something treated without waiting in line for a year...instead of seeing an actual doctor I'm ordering blood tests online and getting meds prescribed by some guy in a call center in Georgia. A six figure income still won't get you halfway decent healthcare in this country...
On the post: The Cable Industry Makes $28 Billion Annually In Bullshit Fees
Re: Re: Re:
Unfortunately, the telecom companies have already had great success in banning that pre-emptively...
On the post: Not The First Rodeo: Lil Nas X And Cardi B Hit With Blurred Lines Style Copyright Complaint Over Rodeo
Re: Re: Re:
No, nobody can explain it because Techdirt has no policy governing when or why comments get hidden. The mob didn't like it, and that's the only thing that matters here.
On the post: Appeals Court Denies Qualified Immunity For Transit Cop Who Arrested A Journalist For Taking Pictures Of EMS Personnel
Re: Re: Re: Re: so...
Much like cops don't want people to find out through social media when they shoot someone, and politicians don't want people to find out through social media if they're caught taking a bribe, and college students don't want (certain) people to find out through social media if they're drunk and stupid at a party, and criminals don't want anyone posting the surveillance footage of them. And none of those are illegal either. That is not a sufficient basis to prohibit something.
"But if it was posted on social media - and this goes for anything, that removes the person who is photographed from any decision on who/what/where that picture is disseminated to. Another person viewing the picture doesn't have the facts - just what the person posting it wants to portray (good or bad)"
That would seem to cover literally any photo that isn't a selfie or a completely human-free landscape. And even selfies if there's anyone visible in the background, or if you're doing a selfie with someone else. If that's the line you use to determine what is legal, you'd be turning nearly everyone with a camera into a criminal.
On the post: The DOJ Is Conflating The Content Moderation Debate With The Encryption Debate: Don't Let Them
Re: Re: Out of sight, out of their minds
There are different kinds of moderation, and plenty of sites DO use mid-stream moderation that requires access. Facebook, for example.
I like the idea posted by AC below where they suggest moderation vs filtering, although those words already have different uses so they probably aren't the best choice. I'd call it something like "policy moderation" vs "user moderation".

Policy moderation is like Facebook: you set a bunch of rules about what is and is not allowed, you let users file reports of specific content, but then you have hired moderators who review that content and determine whether it is actually in violation. Some sites also use immediate policy moderation, where your post is reviewed by a human for compliance before it is ever visible. Some sites use a mix, with automated filters that determine whether a comment should be held for human review.

But all of those options require administrators at the company to be able to review the posted content. So either the company needs to be able to decrypt everything, or at the very least they need to insert code that will take the decrypted message from the user and pass it back to the company unencrypted. Either way they're getting unencrypted access. And obviously you can't count on any automatic filtering on the client end -- for example, if automatic filtering can flag a comment as requiring human review, the client can easily prevent that code from running on their end. You can use that to prevent things from being viewed, but not from being posted and distributed.
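As a rough illustration of that "policy moderation" flow (all names here are hypothetical, not any real site's API), the key point is that the server has to see the plaintext to run the filter at all:

```python
# Sketch of server-side "policy moderation": an automated filter decides
# whether a post is published immediately or held for a human moderator.
# Hypothetical names throughout; the point is that none of this works
# unless the server can read the unencrypted content.

BANNED_TERMS = {"spamword", "slur"}

review_queue = []   # posts held for human review
published = []      # posts visible to other users

def automated_filter(text: str) -> bool:
    """Return True if the post should be held for human review."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

def submit_post(text: str) -> str:
    # The server can only run this check because it has the plaintext.
    if automated_filter(text):
        review_queue.append(text)
        return "held for review"
    published.append(text)
    return "published"

print(submit_post("hello world"))        # prints "published"
print(submit_post("buy spamword now"))   # prints "held for review"
```

If the content were end-to-end encrypted, `automated_filter` would have nothing to inspect, which is exactly the tension described above.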
For "user moderation", you just count downvotes and hide anything with enough downvotes. That could be done without direct access by the company to the decrypted content. But it doesn't let you set any kind of consistent rules, and it can often get abused, especially in larger communities. Things will get flagged because people just don't like the opinion expressed or the person expressing it...and there's not much you can do to prevent that.
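A minimal sketch of that "user moderation" mechanism (threshold and IDs are made up for illustration): the server only ever sees opaque post IDs and vote counts, never the content, so it works even if the posts themselves are encrypted:

```python
# Sketch of "user moderation": hide any post whose downvote count
# crosses a threshold. The server tracks only post IDs and vote tallies,
# so it never needs access to the (possibly encrypted) content.

from collections import Counter

HIDE_THRESHOLD = 5          # hypothetical cutoff
downvotes = Counter()       # post_id -> number of downvotes

def downvote(post_id: str) -> None:
    downvotes[post_id] += 1

def is_hidden(post_id: str) -> bool:
    # Counter returns 0 for unseen IDs, so unvoted posts stay visible.
    return downvotes[post_id] >= HIDE_THRESHOLD

for _ in range(5):
    downvote("post-123")

print(is_hidden("post-123"))  # True: reached the threshold
print(is_hidden("post-456"))  # False: no downvotes recorded
```

Which also illustrates the abuse problem: the tally has no idea *why* people downvoted, so a coordinated mob hides content just as effectively as genuine rule-breaking does.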
On the post: Former FCC Boss Wheeler Says New Court Ruling Won't Stop Net Neutrality
Re: Re: ISPs' solution
Telecom is already working -- fairly successfully -- to prevent that as well. Some states already have laws on the books prohibiting public broadband. Just search "municipal broadband" right here on Techdirt and you'll find plenty of stories about it.