Be Careful What You Wish For: Demanding Platforms Delete Disinformation May Make It Harder To Understand What Happened
from the watch-out dept
Here's another one for the "be careful what you wish for" category. Over the last few years, under tons of pressure from politicians and many users, various internet platforms have grown more and more aggressive in removing content and accounts credibly accused of spreading disinformation and propaganda. Most people cheered this on, and you can completely understand why. But that doesn't mean the practice is free of consequences, and some of those consequences may not be good. J.A. Guerrero-Saade points out that all of this content removal can make things harder for researchers and investigators.
It's great that @facebook and @Twitter shutdown disinformation accounts but given the public nature of the information posted and the acknowledgement of the creator's malicious intent, why not make this information available in archive form for researchers and investigators?
— J. A. Guerrero-Saade (@juanandres_gs) February 18, 2019
Denying operational capabilities shouldn't mean scrubbing any evidence of previous wrongdoing. These groups and efforts are often not made coherent until months and years after their operational viability has lapsed.
— J. A. Guerrero-Saade (@juanandres_gs) February 18, 2019
Moreover, much like some registrars have taken to stripping Domain Privacy for malicious domains, why not publish information about IPs and 'private interactions' originating from malicious accounts? Whose privacy are you protecting, exactly?
— J. A. Guerrero-Saade (@juanandres_gs) February 18, 2019
I'm reminded, somewhat, of all the past (and present) demands that these platforms take down "terrorist" content as quickly as possible. Those demands ignore the fact that "terrorist" content can also be evidence of war crimes and atrocities, evidence that certain people may genuinely need to see. Rapid deletion also makes it that much more difficult for investigators -- whether government actors or open source investigative reporters -- to track down the perpetrators.
In response to this thread, Facebook's former Chief Security Officer, Alex Stamos, warned that he didn't think the research Facebook conducted over the past few years to find Russian disinfo spreaders would even be allowable under the GDPR:
Here is a conversation on archiving disinformation activity for research.
I’m not sure the work we did to find the Russian activity would be legal under GDPR. If you tell the companies to throw away data then you also excuse them from looking back to understand abuse. https://t.co/9ny2JrrJuu
— Alex Stamos (@alexstamos) February 18, 2019
This is exactly the kind of thing we've been concerned about with the GDPR from the very beginning. There may be the best of intentions behind it, and "protecting privacy" always sounds good. But demanding that private data be locked up, or that "bad" content be deleted as fast as possible, can have significant real world consequences that we might not like very much over the long haul.
Filed Under: alex stamos, deleting information, disinformation, gdpr, historical record, privacy, propaganda