from the fake-news dept
When we last checked in with UK Parliament Member Damian Collins, he was creating fake news at a hearing he set up to scold Facebook for enabling fake news. If you don't recall, Collins held a very theatrical hearing, in which his big reveal was that Facebook had actually become aware of Russians hacking its API with billions of questionable requests back in 2014, years before anyone suspected the Russians were up to anything. Except, as became clear hours later, Collins had completely misrepresented what actually happened. It wasn't Russians. It was Pinterest. And it wasn't billions of requests for data. It was millions. And it wasn't abusive or hacking. It was something going a little haywire on Pinterest's end. But, to Collins, it was a smoking gun.
It appears that little incident has not deterred Collins from his war on Facebook, in which he's using moral panic and fearmongering over "fake news" to try to censor content he doesn't like. Recently, Collins' committee -- the Digital, Culture, Media and Sport Committee -- published its big report on fake news, in which it calls for new regulatory powers to "oversee" what content goes on sites like Facebook. Along with the report, Collins put out quite the bombastic comment about all of this. Here's just a snippet:
“We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
There are all sorts of sketchy and dangerous ideas in this report, but I want to focus on the one that is the scariest: a plan to regulate the amorphous concept of "harmful content."
The Report repeats a recommendation from the Interim Report for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites, and the report calls for a compulsory Code of Ethics defining what constitutes harmful content. An independent regulator should be responsible for monitoring tech companies, backed by statutory powers to launch legal action against companies in breach of the code.
Companies failing obligations on harmful or illegal content would face hefty fines. MPs conclude: “Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites.”
It's notable that this says "harmful or illegal content," as it's an explicit recognition that what it considers "harmful" content might actually be perfectly legal. Except, apparently, under this plan it would become illegal for an internet company to host it. This is going to lead to serious problems.
Already, free speech experts are quite reasonably worried about this report. Index on Censorship posted a note warning that such a regime will almost certainly be abused.
Despite a number of reports, including the government’s Internet Safety Strategy green paper, that have examined the issue over the past year, none have yet been able to come up with a definition of harmful content that goes beyond definitions of speech and expression that are already illegal. DCMS recognises this in its report when it quotes the Secretary of State Jeremy Wright discussing “the difficulties surrounding the definition.” Despite acknowledging this, the report’s authors nevertheless expect “technical experts” to be able to set out “what constitutes harmful content” that will be overseen by an independent regulator.
International experience shows that in practice it is extremely difficult to define harmful content in such a way that would target only “bad speech”. Last year, for example, activists in Vietnam wrote an open letter to Facebook complaining that Facebook’s system of automatically pulling content if enough people complained could “silence human rights activists and citizen journalists in Vietnam”, while Facebook has shut down the livestreams of people in the United States using the platform as a tool to document their experiences of police violence.
“It is vital that any new system created for regulating social media protects freedom of expression, rather than introducing new restrictions on speech by the back door,” said Index on Censorship chief executive Jodie Ginsberg. “We already have laws to deal with harassment, incitement to violence, and incitement to hatred. Even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public’s ability to hold the powerful to account.”
Somehow, it seems quite unlikely that Collins has any interest in paying attention. He has made it clear from early on that he's mad at Facebook, and, apparently, no one's concerns about how his anger might lead to very, very bad policy that stifles free expression well beyond Facebook are going to make the slightest bit of difference.
Index on Censorship also highlights the odd metrics used in Collins' report -- such as pointing to the vast increase in the number of moderators Facebook has hired in Germany after that country's joke of a hate speech law went into effect. After pointing out problems with that law, the group notes:
“The existence of more moderators is not evidence that the laws work,” said Ginsberg. “Evidence would be if more harmful content had been removed and if lawful speech flourished. Given that there is no effective mechanism for challenging decisions made by operators, it is impossible to tell how much lawful content is being removed in Germany. But the fact that Russia, Singapore and the Philippines have all cited the German law as a positive example of ways to restrict content online should give us pause.”
None of this, it appears, will give Damian Collins pause, because he's on a mission to grandstand against Facebook as much as he can, even if widespread censorship is the result.
Filed Under: damian collins, fake news, free speech, harmful content, regulations, social media, uk