This Is A Really Bad Idea: Facebook, Twitter, YouTube & Microsoft Agree To Block 'Terrorist' Content
from the how's-that-going-to-work dept
Under increasing pressure from overreacting and fearful bureaucrats, it seems that the big social media companies -- Facebook, Twitter, YouTube and Microsoft -- have all agreed to block "terrorist" content, and to share hashed versions of it with one another so that something blocked on one site can easily be blocked across them all. From their joint announcement:

Facebook, Microsoft, Twitter and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

This sounds as though it's modeled on similar arrangements around child pornography. Except there are some major differences between child pornography and "terrorist content." The first is that child porn is de facto illegal, while "terrorist content" is quite frequently perfectly legal. It's also much more of a judgment call. And under this setup, a single platform partner designating certain content as "bad" will almost certainly produce false positives that then propagate across multiple platforms. That's dangerous.
Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.
Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services — content most likely to violate all of our respective companies’ content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.
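The announcement doesn't spell out the matching mechanics, but the flow it describes is straightforward: a platform removes content under its own policies, contributes a fingerprint of it to a shared database, and the other platforms check content on their services against that database. Here's a minimal sketch of that flow in Python -- every name in it is hypothetical, and note that real systems of this kind typically use a perceptual hash (such as Microsoft's PhotoDNA) rather than a cryptographic hash, since a cryptographic hash stops matching the moment a file is resized or re-encoded.

```python
import hashlib

# Hypothetical shared industry database. In practice this would be a
# service each participating company queries, not an in-memory set.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Fingerprint a piece of content.

    SHA-256 is used here only to keep the sketch self-contained; it
    matches exact byte-for-byte copies only. A real deployment would
    use a perceptual hash so near-duplicates still match.
    """
    return hashlib.sha256(content).hexdigest()

def report_removal(content: bytes) -> None:
    """Called by a platform after removing content under its own policies.

    Only the hash is shared with the other companies, not the content.
    """
    shared_hash_db.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    """Flag content if it matches a shared hash.

    Per the announcement, a match is reviewed against each platform's
    own policies and definitions rather than removed automatically.
    """
    return fingerprint(content) in shared_hash_db
```

Notice that nothing in this flow carries any record of why a hash was added or who added it: a mistaken designation on one platform arrives at every other platform as an opaque fingerprint, which is exactly the false-positive concern raised above.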
As we've discussed in the past, telling platforms to block "terrorist" content frequently leads to mistakes, like blocking humanitarian groups documenting war atrocities. That kind of information is not just valuable, but necessary for understanding what's happening.
All of this also presumes that the best way to deal with so-called "terrorist content" online is to hide it and pretend it doesn't exist. That's not always the case. As we've noted, counterspeech -- including mocking silly terrorist claims -- is often much more effective than outright blocking. Blocking the content not only leads to a slippery slope -- and open questions about who decides what content stays and what goes -- but also presumes that blocking is the most effective way to stop the bad behavior associated with terrorists. It ignores the fact that blocking such content often just makes those posting it feel like they're on the right path, saying something "so true" that it needs to be suppressed. That's not a path toward stopping terrorism or the spread of terrorist ideology -- it just gets those involved to dig in deeper on their views.
On top of that, terrorist content posted to social media is often a valuable source of intelligence for law enforcement. Even the FBI director has said it's silly to chase terrorists off of social media, because doing so makes them harder to track. So what good is this really doing?
Yes, platforms have every right to decide how they want to handle the content submitted to them. And, yes, this almost certainly comes about as a result of increasing pressure (especially out of the EU) to "do something" about "terrorist content" on these platforms. But as we've seen in the past, appeasing such whining bureaucrats almost never settles them down. As we recently noted, after these same four companies signed an agreement earlier this year to "curb hate speech" on their platforms, government officials in Europe still threatened further legal consequences, including criminal charges, when the agreed-upon blocks failed to magically make all "hate speech" disappear.
So, yes, the platforms may have felt backed into a corner, but they're only going to get their backs pushed further and further into that corner -- and the collateral damage along the way may be even more massive.
Filed Under: blocking, social media, terrorist content
Companies: facebook, microsoft, twitter, youtube