UK's New 'Extremist Content' Filter Will Probably Just End Up Clogged With Innocuous Content
from the hashtags-or-no dept
The UK government has rolled out an auto-flag tool for terrorist video content, presumably masterminded by people who know it when they (or their machine) see it and can apply the "necessary hashtags." The London firm behind it is giving its own product a thumbs-up, vouching for its nigh invincibility.
London-based firm ASI Data Science was handed £600,000 by government to develop the unnamed algorithm, which uses machine learning to analyse Daesh propaganda videos.
According to the Home Office, tests have shown the tool automatically detects 94 per cent of Daesh propaganda with 99.995 per cent accuracy.
The department claimed the algorithm has an "extremely high degree of accuracy", with only 50 out of a million randomly selected videos requiring additional human review.
This tool isn't headed to any of the big platforms. Most of those already employ algorithms of their own to block extremist content. The Home Office is hoping this will be used by smaller platforms that may not have the budget or in-house expertise to pre-moderate third-party content, as well as by those that have zero interest in applying algorithmic filters to user uploads, because doing so is more likely to anger their userbase than to bring an end to worldwide terrorism.
The Home Office's hopes are, for the moment, only hopes. But if there aren't enough takers, they will become mandated reality.
[Amber] Rudd told the Beeb the government would not rule out taking legislative action "if we need to do it".
In a statement she said: "The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this, can heavily disrupt the terrorists' actions, as well as prevent people from ever being exposed to these horrific images."
Is such an amazing tool really that amazing? It depends on who you ask. The UK government says it's so great it may not even need to mandate its use. The developers also think their baby is pretty damn cute. But what does "94% blocking with 99.995% accuracy" actually mean when scaled? Well, The Register did some math and noticed it adds up to a whole lot of false positives.
Assume there are 100 Daesh videos uploaded to a platform, among a batch of 100,000 vids that are mostly cat videos and beauty vlogs. The algorithm would accurately pick out 94 terror videos and miss six, while falsely identifying five. Some people might say that's a fair enough trade-off.
But if it is fed with 1 million videos, and there are still only 100 Daesh ones in there, it will still accurately pick out 94 and miss six – but falsely identify 50.
So if the algorithm was put to work on one of the bigger platforms like YouTube or Facebook, where uploads could hit eight-digit figures a day, the false positives could start to dwarf the correct hits.
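The Register's arithmetic is easy to sanity-check. Here's a minimal Python sketch, assuming (as the article's numbers imply) that the "99.995 per cent accuracy" figure translates to a 0.005 per cent false positive rate on innocent videos, and that the pool of actual Daesh videos stays fixed at 100 while uploads scale:

```python
# Expected hits, misses, and false positives for a filter with a 94%
# detection rate and a 0.005% false positive rate on innocent videos.
# The 100-Daesh-videos-per-batch figure is taken from the worked example.

def filter_outcomes(total_uploads, daesh_videos,
                    detection_rate=0.94, false_positive_rate=0.00005):
    """Return (true positives, missed videos, false positives)."""
    innocent = total_uploads - daesh_videos
    true_pos = daesh_videos * detection_rate
    missed = daesh_videos - true_pos
    false_pos = innocent * false_positive_rate
    return true_pos, missed, false_pos

for total in (100_000, 1_000_000, 10_000_000):
    tp, miss, fp = filter_outcomes(total, daesh_videos=100)
    print(f"{total:>10,} uploads: {tp:.0f} caught, {miss:.0f} missed, "
          f"{fp:.0f} false positives")
```

At 100,000 uploads that's roughly 5 false positives against 94 correct hits; at a million it's 50; at ten million the false positives (around 500) dwarf the 94 genuine catches. This is the classic base-rate problem: when the target content is rare, even a tiny false positive rate swamps the true hits as volume grows.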
This explains the government's pitch (the one with latent legislative threat) to smaller platforms. Fewer uploads mean fewer false positives. Larger platforms with their own software likely aren't in the market for something government-made that works worse than what they have.
Then there's the other problem. Automated filters, backed by human review, may limit the number of false positives. But once the government-ordained tool declares something extremist content, what are the options for third parties whose uploaded content has just been killed? There doesn't appear to be a baked-in appeals process for wrongful takedowns.
"If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake? It may be too complicated for the small company," said Jim Killock, director of the Open Rights Group.
"If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process."
For now, it's a one-way ride. Content deemed "extremist" vanishes and users have no vehicle for recourse. Even if one were made available, how often would it be used? Given that this is a government process, rather than a private one, wrongful takedowns will likely remain permanent. As Killock points out, no one wants to risk being branded as a terrorist sympathizer for fighting back against government censorship. Nor do third parties using these platforms necessarily have the funds to back a formal legal complaint against the government.
No filtering system is going to be perfect, but the UK's new toy isn't any better than anything already out there. At least in the case of the social media giants, takedowns can be contested without having to face down the government. It's users against the system -- something that rarely works well, but at least doesn't add the possibility of being added to a "let's keep an eye on this one" list.
And if it's a system, it will be gamed. Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship. Savvy non-terrorist users will also game the system, flagging content they don't like as questionable, possibly resulting in even more non-extremist content being removed from platforms.
The UK government isn't wrong to try to do something about recruitment efforts and terrorist propaganda. But they're placing far too much faith in a system that will generate false positives nearly as frequently as it will block extremist content.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: algorithms, censorship, extremist content, filters, terrorism, uk
Reader Comments
At this point I don't think they need to. Their hate messages are all over the media and their censorious, tyrannical ways are being mimicked by governments everywhere. Terrorists have won. This is just their victory becoming more complete and established.
This looks suspicious
Well... Isn't that the point?
Point
The problem turned out to be that people are very inventive when it comes to saying something without actually saying it. Everyone knows what it means, but the precise trigger words aren't there.
No matter how good the algorithm, it will miss content that is implied but never actually said. Everyone will know what is meant without those same trigger words ever being hit.
Another rushed fix that will never work.
Re:
That's not stopping the censors, human or automated. With people criticizing the removal of term limits to make President Xi emperor for life, censored terms now include Winnie, a reference to Xi.
Re: Re:
Re: Re: Re:
Apparently China’s dystopian future is just Disney’s Toontown.
It’s funny how often “save the children” is subject to mission creep.
Re:
Where to even begin
Next up is the false positive failure, already discussed at length. Then there's the lack of transparency in the algorithm itself, the human review process, and the apparently non-existent appeals process.
Then we can segue into what this program will be used for next. Hate speech? Political speech? Civil rights speech? If only it could be taught to take down videos of cats playing piano.
I'd say it's a solution in search of a problem, but it's not at all clear it's even a solution. At least it was only £600K, so it's not like it was real money (that's a comment on the 600K, not the British pounds!).
While that figure sounds good, and is good enough to fool a politician, it does not quantify how many false positives are among the videos it would automatically take down, nor does it give any clue as to the content that required review. What needed review: documentation of war crimes, video game footage, or the odd cat video?
dear google/yt
Here is your chance. This company has now volunteered.
Send them one day's videos and see how long this 'LARGE' group takes to scan all of that day's videos.
C'mon man! That's half the crap on the internet; Daesh doesn't have a lock on trying to rev people up to do violence and create terror. Priorities, man, don't lose the forest for the trees.
Why would a terrorist bother?
Not hardly. Why would a terrorist bother to work around it?
The "theory" was that the terrorists were using these uploads to recruit, but that was always a crock of s**t. Because of the criminal aspects, they've always needed to get their recruits face-to-face...and always will.
No, this is just a censorship end run. A way for those in power to crush their idea of "extremism"; AKA "any political statement we don't like."
Re: Why would a terrorist bother?
It's as if someone is pushing this nation toward fascist, religion-backed ideals.
THIS IS NOT a Puritan country.
With over 40 different groups calling themselves Christian... who do you want to follow? Which rules?
Too many laws going around that do nothing, except some odd thing in the background. Might as well be a 300-page corporate contract that references this page to that page to another page just to get the meaning and context of one simple line.