On the post: North Dakota's New Anti-230 Bill Would Let Nazis Sue You For Reporting Their Content To Twitter
You say that

"getting rid of all content moderation will enable far right voices to yell over everyone unchallenged and flood every outlet which allows user interaction with hate speech."

We'll see if their drivel can keep its head above water once every social media site gets deluged with free Ray-Bans, premium blue pills, and requests for money from Nigerian princes!
On the post: Techdirt Podcast Episode 270: Regulating The Internet Won't Fix A Broken Government
Heather's comparison of the online harms framework to health and safety law makes a lot of sense. Health and safety has long been a vector for culture warring here in the UK.

Back when I was at university, there were various local controversies over licensing Houses in Multiple Occupation (HMOs). HMO laws were brought in in response to a fatal fire in 2000, in which two students were killed. The rules created a licensing regime designed to set minimum standards for shared houses (fire alarms, safety checks on gas appliances, etc.).

Licences were issued by local councils, and were supposed to be granted based on one question: "does this house meet the legal standards for an HMO?"

Instead, virtually every HMO licensing application in my university town was bombarded with complaints: students playing loud music, students vomiting in gardens, students besmirching the community by their presence, etc. Licences were then refused for completely illegitimate reasons, completely undermining the policy objectives of the HMO law!

I'm seeing the same dynamic play out when discussing online harms. No interest in coherent, workable policy objectives, or in weighing the overwhelming benefits of the free internet against measurable harms. Instead it's just a parade of horribles about teenage suicide, homophobic and racist bigotry, and paedophiles using WhatsApp. While having nothing to say about the vast, vast majority of law-abiding internet users, and how these restrictions would affect them!
On the post: Content Moderation Case Study: Twitter Removes Account Of Human Rights Activist (2018)
Ignore the trolls
I for one appreciate these case studies. They're a great resource to demonstrate what Mike's been saying for ages about the impossibility of perfect content moderation.
I hope you're going to cover the recent hoo-ha around GameStop shares, including Robinhood (and other platforms) halting trades, as well as Google suspending reviews over so-called "review bombing".
Moderation, with a side of possible financial market manipulation.
On the post: Content Moderation Case Study: Dealing With Demands From Foreign Governments (January 2016)
Another consideration
... Does your service have critical infrastructure, or staff, located in the country concerned? Hopefully this has been risk-assessed in advance, given the trend of censorious regimes holding both of these hostage as part of takedown disputes.
Ultimately, you can't moderate to the lowest common denominator. And the Razak regime was particularly corrupt, thuggish and censorious.
A service that cares at all about free expression of ideas is never going to sit well with a government that believes in stealing from its own people and censoring all its critics!
On the post: Sheryl Sandberg Makes Disingenuous Push To Argue That Only Facebook Has The Power To Stop Bad People Online
Re:
I'm increasingly convinced that Sandberg and Zuckerberg's relationship is akin to the relationship between Homelander and Madelyn Stillwell in the Amazon Prime series 'The Boys'.
(look it up. Also, NSFW)
On the post: Everything Pundits Are Getting Wrong About This Current Moment In Content Moderation
Re:
If the app can run in a web browser, which Parler can, it hasn't been locked off Android or iOS in any meaningful way.

Heck, most browsers even let you place bookmarks on your phone's home screen. So you can even pretend it's a real app.
On the post: For All The Hype, Trump's Favorite 'News' Channel (OAN) Faces Shrinking Footprint
I'll look forward to the inevitable "Big Cable is censoring conservative viewpoints" talking points.
Perhaps Fox and Friends will run a feature?
On the post: Twitter Suspended Cory Doctorow For Putting Trolls On A List Called 'Colossal Assholes'
Sunlight may be the best disinfectant
...but if you focus that sunlight through a magnifying glass, it ends up burning anything under its glare. To the point that everything looks like a rules violation.

What probably happened is that triggered assholes decided they didn't much care for being called colossal assholes. They encouraged their credulous asshole followers (either explicitly, or by implication) to bombard Twitter with reports.

Twitter, seeing the scale of the reporting activity, starts looking overly closely at Cory's tweets. They decide that the post technically violates the TOS in relation to "harassment", "trolling", or whatever other subjective buzzword appears in said TOS. The fact that a large volume of reports is coming in at the same time tends to back up this interpretation. Even though 99.9% of genuine users acting in good faith wouldn't view the post that way.
Thereby, the mendacious assholes weaponize the TOS and achieve their objective of getting a prominent critic banned!
A similar thing happened last year to a British anti-racism activist I follow. He participated in a publicity stunt against prominent racist and football thug Tommy Robinson. The response was a mass flagging campaign against all his social media accounts (along with trolling, death threats, and similar). His old tweets were trawled for anything that even vaguely violated Twitter's TOS. None of those tweets would, on its own, have been worth reporting by most of his actual followers. But when they were all reported in such concentration, it led to a permanent Twitter ban for the individual concerned!

The people calling for more severe social media moderation of assholes often fail to understand that moderation ends up being utilized by said assholes as a tool of their assholery!
On the post: No, Filing A Defamation Lawsuit Is Never The Only Way You Can Clear Your Name
By way of example
Take the Covington Catholic High School controversy. The family followed two strategies in response to the cropped video of the "MAGA hat boy".

First strategy: they posted the full video of the incident. They told their side of the story online and on TV (admittedly not something most ordinary people get to do). And while some commentators changed tack to whine about "white privilege" and various tangential complaints (like rich kids affording a PR firm), the original accusations were pretty thoroughly debunked.

Second strategy: they sued the Washington Post for defamation. Their complaint was dismissed straight away. The end.
Now tell me. Which of those strategies was more effective?
On the post: UK Forum Hands Out Public Records Request-Dodging Guidance To Over 100 Government Agencies
Re: Quick Question
Contingencies to deal with the absolute shit-show of a no-deal Brexit. Which will happen if the Govt, Parliament, and the rest of the EU can't agree a deal before ... *checks notes* ... 54 DAYS from now. (I need to stock up on canned food.)

Among other things, we will have no legal framework under which to import or export goods to the rest of the EU. No legal framework to allow travel into or out of the rest of the EU. No legal framework to prevent Ireland from descending back into bloody violence when they try to impose a hard border. Etc., etc.

But it's all good! Among the contingency planning efforts revealed so far, the government gave £13 million to a company called Seaborne Freight to set up a ferry service. A company which incorporated less than a year ago. And owns no ships. Or any assets of any description. And whose website's terms of service were stolen from a pizza delivery company....

It'll all be fine, I'm sure!
On the post: Foreign Stream-Ripping Site Wins Against Music Labels Based On Jurisdiction
A post-Machinima landscape.
The rights industry would like us to believe that stream ripping is a tool for pirates. "Why would you need to download a stream?" they claim. "After all, you can watch the streams, on demand, any time you want."

And for a while, we might have agreed. "Well, I guess I don't really need a local copy. My hard drive has limited space after all. And it's basically all there on demand." It was a covenant of convenience.
Then this week, Machinima happened.
Nearly a decade of influential, meaningful, trailblazing video content. All of it memory-holed on the whim of a corporation that had no hand in its creation.

So well done, corporate America. Well done for transforming stream ripping from the bogeyman you claim it is into a moral imperative for preserving internet culture!
On the post: UK Court: Guy Who Didn't Write Defamatory Tweet Needs To Pay $50,000 In Damages Because The Guy Who Did Doesn't Have Any Money
Re: Re: Re: How to get rich quick in the UK in four easy steps:
*In this context, "something bad" should be read as "factually alleges that you're a paedophile, and a member of an organized child grooming gang".

Which is *clearly* defamatory under either US or E&W standards!
On the post: Google Says Our Article On The Difficulty Of Good Content Moderation Is... Dangerous
Re: Re: Remember:
On the post: Facebook Tells Cops Its 'Real Name' Policy Applies To Law Enforcement Too
Re: Doesn't the CFAA come into play?
Potentially it could be. Especially now that Facebook has served legal notice that this kind of bullshit violates their ToS. Some court cases seem to have held that accessing a system in violation of the terms of service constitutes "unauthorized access" for the purposes of the Act.
I wouldn't be surprised if there's some kind of law enforcement exception though.
On the post: UK MP Thinks Secret Online Groups Are The Root Of All Evil Online, Promises To Regulate 'Large Online Groups'
"...It would make those who run large online forums accountable for the material they publish"

Which is basically already the law!

Intermediary protections in the UK are much weaker than in the USA. So while platforms benefit from some (limited) notice-and-takedown protections, if they're notified of defamatory or illegal content they become liable if it isn't taken down promptly.

What would this bill achieve, other than restating existing UK law, while being unenforceable against any services operating abroad?
On the post: Sensing Blood In The Water, All Major Labels Sue Cox For 'Ignoring' Their DMCA Notices
It's worth pointing out:
The "thirteen strike" policy was a rule on paper only, as pointed out by the article.
But in addition to that, the appeals court pointed out that Cox wasn't processing *any* of the notices that Rightscorp sent to it. Cox apparently took umbrage at the wording of the notices (which included "settlement agreements"). Rightscorp continued to send them unaltered, so Cox started filing all of their notices straight into the recycling bin!

I might have issues with a lawyer. But if I get a valid legal notice from said lawyer and decide to ignore it because they're an asshole, or because they don't wash, I shouldn't be surprised if the court takes a dim view.
On the post: UK Judge Says Accurate Journalism Is An Invasion Of Privacy In Cliff Richard Case
Re: Useful information
"this ruling allows judges an immense amount of leeway in evaluating any aspect of a story to determine if part of the story has an adequate level of utility."

That's not new. Judges in the UK are required to conduct that exercise when balancing different Convention rights. In this case, the right to privacy (under Article 8 of the European Convention on Human Rights) versus the right to freedom of expression (under Article 10).

In contrast to the US, where content-based analysis of speech by judges is nearly anathema under the First Amendment.
On the post: UK Cops Threaten Facebook Users With Arrest After They Mock Department's Tiny Drug Bust
The law is actually even thinner than you say
Section 1(1) of the Malicious Communications Act 1988 does indeed make it an offence to send communications where the intention is to cause distress or anxiety.

But paragraphs (a) and (b) of that same subsection specify that the nature of the message has to be either indecent, grossly offensive, a threat, or knowingly false information. Only if one of those is triggered does the 'distress and anxiety' part kick in.

Grossly offensive or indecent? Perhaps these police live in some puritanical age of history, but the idea of drug use being offensive (grossly or otherwise) is fanciful. A threat? Please! False information? I don't know what planet they're on if they think a couple of roll-ups is a substantial drug bust; regardless, it's clearly subjective opinion.

The real problem is the general attitude of UK police to free speech. Sadly, there's little that can be done. While there's a potential human rights claim under Article 10 (via the Human Rights Act), that law gives a wide (too wide) margin of appreciation, especially when it comes to law enforcement. One place where the USA is miles ahead of us :(
http://www.legislation.gov.uk/ukpga/1988/27/section/1
On the post: UK Police Use Zipcode Profiles, Garden Size And First Names For AI-Based Custody Decision System
The decision making tool inevitably becomes the decision-maker
This happened before with the roll-out of medical assessments for recipients of Employment and Support Allowance (a UK welfare benefit for people unable to work due to health needs). A medical assessment was introduced to determine whether a claimant was fit for work.

The validity and accuracy of these assessments have been subject to intense criticism over the years, for reasons I won't go into now. But in theory, the assessment was only supposed to be one piece of evidence. The final decision rested with the Department for Work and Pensions (DWP).

In reality, the decision-makers at the DWP are not empowered to deviate from the outcome of the assessment, irrespective of any independent medical evidence submitted by the claimant. So the assessment, while intended as an aid to the decision-maker, became the de facto decision!
The idea of this happening in criminal justice scares me no end!