UK Now Calling Its 'Online Harms Bill' The 'Online Safety Bill' But A Simple Name Change Won't Fix Its Myriad Problems
from the this-is-not-how-this-should-work dept
We've talked a bit about the UK's long-running process to basically blame internet companies for all of society's ills. What was originally called the "online harms" bill has now officially morphed into the Online Safety Bill, which was recently released in draft form.
Despite the UK government insisting that it spent the past few years talking to various stakeholders, the final bill is a disaster for the open internet. Heather Burns, from the UK's Open Rights Group (loosely the UK's version of the EFF), has a short thread about the bill, explaining why it's so problematic. Here's the key bit:
You're going to read a lot today about the government's plans for the Online Safety Bill on #onlineharms, a regulatory process which has eaten up much of the past two years of my professional work. I suppose if I had a hot take to offer after two years, it's this:
- If you see the bill being presented as being about "social media" "tech giants" "big tech" etc, that's bullshit. It impacts *all services of all sizes, based in the UK or not. Even yours.* Bonus: take a drink every time a journo or MP says the law is about reining in Facebook.
- If you see the Bill being presented as being about children's safety, that's bullshit. It's about government compelling private companies to police the legal speech and behaviour of everyone who says or does anything online. Children are being exploited here as the excuse.
- So as you read the Bill, consider how altruistic any government initiative must be if it requires two layers of A/B tested messaging disinformation.
A week earlier, Burns, who has been deeply engaged in this process in the UK, wrote up a long blog post explaining all the problems with the fundamental approach embraced in the bill: it basically outsources all of the roles of the government to internet companies, and then threatens to punish them if they get it wrong. Here's just one important bit:
The first and most immediate impact of the imposition of senior management liability will be a chilling effect on free speech. This is always a consequence of content moderation laws which are overly prescriptive and rigid, or conversely, overly vague and sweeping.
When everything falls into a legally ambiguous middle ground, but the law says that legally ambiguous content must be dealt with, then service providers find themselves backed into a corner. What they do in response is take down vast swathes of user-generated content, the majority of which is perfectly legal and perhaps subjectively harmful, rather than run the risk of getting it wrong.
This phenomenon, known as “collateral censorship” – with your content being the collateral – has an immediate effect on the right to freedom of expression.
Now add the risk of management liability to the mix, and the notion that tech sector workers might face personal sanctions and criminal charges for getting it wrong, and you create an environment where collateral censorship, and the systematic takedowns of any content which might cause someone to feel subjectively offended, becomes a tool for personal as well as professional survival.
In response to this chilling effect, anyone who is creating any kind of public-facing content whatsoever – be that a social media update, a video, or a blog post – will feel the need to self-censor their personal opinions, and their legal speech, rather than face the risk of their content being taken down by a senior manager who does not want to get arrested for violating a “duty of care”.
The general summary from tons of experts is that this bill is a dumpster fire of epic proportions. Big Brother Watch notes that it would introduce "state-backed censorship and monitoring on a scale *never seen before* in a liberal democracy." The scariest part is that it will require companies to remove lawful speech. The bill refers to this as "lawful but still harmful" content (which some have taken to calling "lawful but awful" speech). But as noted above, that creates tremendous incentives for excessive censorship and suppression of all sorts of speech, as companies try to avoid landing on the wrong side of the line.
Indeed, this is the very model used by the Great Firewall of China. For years, rather than instructing internet companies exactly what to block, the Chinese government would often just send them vague messages about what kinds of content it was "concerned" about, along with threats that if the companies didn't magically block all of that content, they (and their executives) would face liability. The end result is predictably significant over-blocking. If you only get punished for under-blocking, the natural result is going to be over-blocking.
Among the many other problems with this, the UK's approach will only lead the Chinese government to insist that this shows the Great Firewall approach is the only proper way to regulate the internet. It has certainly made that argument before.
It really is quite incredible how closely the bill mimics the Great Firewall approach, but with the UK regulator Ofcom stepping in for the role of the Chinese government:
In an extraordinary undermining of basic democratic norms, Ofcom will have the power to issue guidance that direct social media companies on how to moderate their platforms.
If companies fail to comply there will be huge fines and even criminal sanction for senior managers. pic.twitter.com/wuPOZrDghr
— Matthew Lesh (@matthewlesh) May 12, 2021
There are a few attempts in the draft bill to include language that looks supportive of free speech, but most of these are pure fig leaves -- the kind of thing the government can point to in order to say "see, we support free speech, no censorship here, no siree" but which fail to take into account how these provisions will work in practice.
Specifically, there's a section saying that websites (and executives) that will now face liability if they leave up too much "lawful but harmful" content must make sure not to take down "democratically important" content. What does that mean? And who decides? Dunno. There's also a weird carveout for "journalists," but again, that's problematic when you realize that merely the act of defining who is and who is not a journalist is itself a big free speech issue. And the bill does note that "citizen journalists will have the same protections as professional journalists." Does... that mean every UK citizen has to declare themselves a "citizen journalist" now? How does that even work?
The whole thing is not just a complete disaster, it's a complete disaster that tons of smart people have been warning the UK government about for the past two years without getting anywhere at all. I'm sure we'll have a lot more to say about it in the near future, but for now it really looks like the UK approach to "online harms"... er... "online safety" is to replicate the Chinese Great Firewall. And that's quite stunning.
Filed Under: censorship, china, free speech, intermediary liability, legal but harmful, online harms, online safety, uk