UK's New 'Extremist Content' Filter Will Probably Just End Up Clogged With Innocuous Content

from the hashtags-or-no dept

The UK government has rolled out an auto-flag tool for terrorist video content, presumably masterminded by people who know it when they (or their machine) see it and can apply the "necessary hashtags." The London firm behind it is giving its own product a thumbs-up, vouching for its nigh invincibility.

London-based firm ASI Data Science was handed £600,000 by the government to develop the unnamed algorithm, which uses machine learning to analyse Daesh propaganda videos.

According to the Home Office, tests have shown the tool automatically detects 94 per cent of Daesh propaganda with 99.995 per cent accuracy.

The department claimed the algorithm has an "extremely high degree of accuracy", with only 50 out of a million randomly selected videos requiring additional human review.

This tool won't be headed to any big platforms. Most of those already employ algorithms of their own to block extremist content. The Home Office is hoping this will be used by smaller platforms which may not have the budget or in-house expertise to pre-moderate third-party content. It's also hoping the tool will be adopted by platforms that have zero interest in applying algorithmic filters to user uploads, knowing that filtering is more likely to anger their smaller userbases than bring an end to worldwide terrorism.

The Home Office's hopes are only hopes for the moment. But if there aren't enough takers, those hopes will become mandated reality.

[Amber] Rudd told the Beeb the government would not rule out taking legislative action "if we need to do it".

In a statement she said: "The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this, can heavily disrupt the terrorists' actions, as well as prevent people from ever being exposed to these horrific images."

Is such an amazing tool really that amazing? It depends on who you ask. The UK government says it's so great it may not even need to mandate its use. The developers also think their baby is pretty damn cute. But what does "94% blocking with 99.995% accuracy" actually mean when scaled? Well, The Register did some math and noticed it adds up to a whole lot of false positives.

Assume there are 100 Daesh videos uploaded to a platform, among a batch of 100,000 vids that are mostly cat videos and beauty vlogs. The algorithm would accurately pick out 94 terror videos and miss six, while falsely identifying five. Some people might say that's a fair enough trade-off.

But if it is fed with 1 million videos, and there are still only 100 Daesh ones in there, it will still accurately pick out 94 and miss six – but falsely identify 50.

So if the algorithm was put to work on one of the bigger platforms like YouTube or Facebook, where uploads could hit eight-digit figures a day, the false positives could start to dwarf the correct hits.
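To make that concrete, here's a quick back-of-the-envelope sketch of The Register's arithmetic. (This is our own minimal illustration, not anything published by the Home Office: it assumes "94% blocking" means the detection rate, "99.995% accuracy" means a 0.005% false positive rate on innocent videos, and that each batch hides the same 100 Daesh videos.)

    # Base-rate math behind the Home Office's claimed figures.
    # Assumptions (ours, not the government's): "94% blocking" is the
    # detection rate and "99.995% accuracy" is a 0.005% false positive rate.
    DETECTION_RATE = 0.94          # share of actual Daesh videos flagged
    FALSE_POSITIVE_RATE = 0.00005  # share of innocent videos flagged
    DAESH_VIDEOS = 100             # extremist videos hiding in each batch

    for total_uploads in (100_000, 1_000_000, 10_000_000):
        innocent = total_uploads - DAESH_VIDEOS
        true_positives = DETECTION_RATE * DAESH_VIDEOS
        false_positives = FALSE_POSITIVE_RATE * innocent
        precision = true_positives / (true_positives + false_positives)
        print(f"{total_uploads:>10,} uploads: ~{false_positives:.0f} innocent "
              f"videos flagged, {precision:.0%} of flags are real hits")

Run that and the filter's precision collapses as volume grows: roughly 95% of flags are real hits at 100,000 uploads, about 65% at a million, and around 16% at ten million. At that point, five out of six "extremist" takedowns would be cat videos and beauty vlogs.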

This explains the government's pitch (the one with the latent legislative threat) to smaller platforms. Fewer uploads mean fewer false positives. Larger platforms with their own software likely aren't in the market for something government-made that works worse than what they have.

Then there's the other problem. Automated filters, backed by human review, may limit the number of false positives. But once the government-ordained tool declares something extremist content, what are the options for third parties whose uploaded content has just been killed? There doesn't appear to be a baked-in appeals process for wrongful takedowns.

"If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake? It may be too complicated for the small company," said Jim Killock, director of the Open Rights Group.

"If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process."

For now, it's a one-way ride. Content deemed "extremist" vanishes and users have no vehicle for recourse. Even if one were made available, how often would it be used? Given that this is a government process, rather than a private one, wrongful takedowns will likely remain permanent. As Killock points out, no one wants to risk being branded as a terrorist sympathizer for fighting back against government censorship. Nor do third parties using these platforms necessarily have the funds to back a formal legal complaint against the government.

No filtering system is going to be perfect, but the UK's new toy isn't any better than anything already out there. At least in the case of the social media giants, takedowns can be contested without having to face down the government. It's users against the system -- something that rarely works well, but at least doesn't carry the risk of being added to a "let's keep an eye on this one" list.

And if it's a system, it will be gamed. Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship. Savvy non-terrorist users will also game the system, flagging content they don't like as questionable, possibly resulting in even more non-extremist content being removed from platforms.

The UK government isn't wrong to try to do something about recruitment efforts and terrorist propaganda. But they're placing far too much faith in a system that will generate false positives nearly as frequently as it will block extremist content.



Filed Under: algorithms, censorship, extremist content, filters, terrorism, uk


Reader Comments



  1. Ninja (profile), 28 Feb 2018 @ 9:27am

    "Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship."

    At this point I don't think they need to. Their hate messages are all over the media and their censorious, tyrannical ways are being mimicked by governments everywhere. Terrorists have won. This is just their victory becoming more complete and established.


  2. hij (profile), 28 Feb 2018 @ 9:42am

    This looks suspicious

    Hmmm, complaining about arbitrary snooping done by law enforcement. This site needs a button so that this post can be flagged for its content and scanned by the proper authorities. The problem is that this is Wednesday, and I forgot whether or not the current administration approves of law enforcement today. I may have to wait for tomorrow.


  3. Anonymous Coward, 28 Feb 2018 @ 9:49am

    This is the sort of thing that happens when you have a person like Rudd in charge, who is more interested in making a name for herself than actually doing something constructive and useful! Like all politicians, nothing is more important than them!!


  4. Anonymous Coward, 28 Feb 2018 @ 10:06am

    Well... Isn't that the point?

    The entire purpose of a content filter is to do just that! It does not care if it is parody or not.


  5. JoeCool (profile), 28 Feb 2018 @ 10:15am

    Point

    Can the figures given be trusted? After all, this is from a company trying to sell something to a stupid and gullible government. Even then, it's statistics, and as the saying goes, there's lies, damn lies, and statistics.


  6. Anonymous Coward, 28 Feb 2018 @ 10:15am

    More bandaids. Can we get the ones with the cute animals on them?


  7. Anonymous Coward, 28 Feb 2018 @ 10:17am

    China had this same great idea. Only they use people to monitor the flow of data across the net, looking for subversive, wrong-think data. Now supposedly people are better than some software at IDing such content and making those goals achievable.

    Problem turned out to be that people are very inventive when it comes to saying something without actually saying it. Everyone knows what it means, but the precise trigger words aren't there.

    No matter how good the algorithm, it will miss the implied but never actually said content. Everyone will know what is meant without hitting those same trigger words.

    Another rushed attempt at a fix that will never work.


  8. Anonymous Coward, 28 Feb 2018 @ 10:23am

    Just how long will it be before they expand the type of content that is automatically taken down?


  9. Andrew (profile), 28 Feb 2018 @ 10:23am

    Where to even begin

    I guess the place to begin is the *claim* that it is 94% effective with 99.995% accuracy. What's the background of ASI Data Science in doing research like this? What are the actual numbers? What was the test methodology? Was the training set composed solely of people being beheaded and cats playing piano?

    Next up is the false positive failure, already discussed at length. And the lack of transparency in the algorithm itself, the human review process and the apparently non-existent appeal process.

    We can segue into what this program will be used for next. Hate speech? Political speech? Civil rights speech? If only it could be taught to take down videos of cats playing piano.

    I'd say it's a solution in search of a problem, but it's not at all clear it's even a solution. At least it was only 600K British pounds, so it's not like it was real money. (That's a comment on the 600K, not the British pounds!)


  10. Anonymous Coward, 28 Feb 2018 @ 10:52am

    The department claimed the algorithm has an "extremely high degree of accuracy", with only 50 out of a million randomly selected videos requiring additional human review.

    While that figure sounds good, and is good enough to fool a politician, it does not quantify how many false positives there are in the videos that it would automatically take down, nor does it give any clue as to the content that required review. What needed review: documentation of war crimes, video game footage, or the odd cat video?


  11. Roger Strong (profile), 28 Feb 2018 @ 11:11am

    Re:

    Problem turned out to be that people are very inventive when it comes to saying something without actually saying it. Everyone knows what it means, but the precise trigger words aren't there.

    That's not stopping the censors, human or automated. With people criticizing the removal of term limits to make President Xi emperor for life, censored terms now include:

    • I don't agree
    • migration
    • emigration
    • re-election
    • election term
    • constitution amendment
    • constitution rules
    • proclaiming oneself an emperor
    • Winnie the Pooh

    Winnie being a reference to Xi.


  12. orbitalinsertion (profile), 28 Feb 2018 @ 11:18am

    Re:

    All of it.


  13. Anonymous Coward, 28 Feb 2018 @ 11:28am

    Re: Re:

    In China's dystopian future, browsers will have drop-down boxes containing government-approved phrases to use in text messages. Manual typing will be illegal, with severe punishment: they will use a tank to run over your fingers.


  14. takitus (profile), 28 Feb 2018 @ 11:43am

    Re: Re: Re:

    Apparently China’s dystopian future is just Disney’s Toontown.

    It’s funny how often “save the children” is subject to mission creep.


  15. ECA (profile), 28 Feb 2018 @ 12:33pm

    Dear Google/YT

    You may find it nigh impossible to do all the watching of content and restricting of things NOT ALLOWED..

    Here is your chance..
    This company has now volunteered..
    Send them one day's videos and see how long this 'LARGE' group takes to scan all of that day's videos..


  16. Anonymous Coward, 28 Feb 2018 @ 12:36pm

    "The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society."

    C'mon man! That's half the crap on the internet. Daesh doesn't have a lock on trying to rev people up to do violence and create terror. Priorities, man; don't lose the forest for the trees.


  17. Coyne Tibbets (profile), 28 Feb 2018 @ 3:04pm

    Why would a terrorist bother?

    "Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship."

    Not hardly. Why would a terrorist bother to work around it?

    The "theory" was that the terrorists were using these uploads to recruit, but that was always a crock of s**t. Because of the criminal aspects, they've always needed to get their recruits face-to-face...and always will.

    No, this is just a censorship end run. A way for those in power to crush their idea of "extremism"; AKA "any political statement we don't like."


  18. ECA (profile), 28 Feb 2018 @ 10:42pm

    Re: Why would a terrorist bother?

    That I can see..
    It's as if someone is PUSHING this nation toward fascist, religion-backed ideals..
    THIS IS NOT a Puritan country..
    With over 40 different groups calling themselves Christian... who do you want to follow? Which rules?

    Too many laws going around that do nothing, except some odd thing in the background.. Might as well be a corp contract 300 pages long that refs this page to that page to another page to get the meaning and context of one simple line..


