The Conflict Between Social Media Transparency And Bad Privacy Laws Is Going To Get Worse
from the what-a-mess dept
For years I've been arguing that we're bad at regulating privacy because too many people think that privacy is a "thing" that needs to be "protected," rather than recognizing that privacy is always a series of tradeoffs. As I've pointed out a few times now, part of the reason many people are (reasonably!) uneasy about how internet companies handle our "privacy" is the lack of transparency from those companies, which makes it difficult (or impossible) to accurately weigh the costs and benefits of the tradeoff choices.
It often comes down to a question of "is it worth sharing this data in order to get this service?" But to make that determination, it helps to know exactly which data, how it's being used, how it's being secured, how likely it is to be spread more widely, and what the potential downstream impacts might be to me if that happens. If there were an accurate way to understand all of that, we'd have a better sense of whether or not it's worth giving up that data in exchange for the service. But many internet companies (from the big ones on down) are notoriously bad about providing that information, meaning that we can't make an informed decision about whether or not the tradeoffs are worth it.
And that's a big part of the reason users get so concerned whenever there's a privacy scandal: that information wasn't provided to us. People didn't realize that Facebook would enable people to share the data of all of our friends with a sketchy corporation that might use it to suppress votes. People didn't realize that Facebook would take phone numbers provided for security purposes and use them to push advertisements and friend notifications to our phones.
But... since very few people seem to recognize that privacy is a "set of tradeoffs," too many of the regulations try to treat it as "a thing" and require companies to "protect" it -- even if that doesn't mean very much. And, even worse, trying to force companies to "protect" privacy can actually interfere with the necessary transparency that would allow individuals to better understand their privacy tradeoffs.
Case in point: a year and a half ago, Facebook agreed to support a new scholarly project to share a bunch of data with a bunch of academics in the interest of transparency. The project was dubbed Social Science One, and it was funded by a number of big philanthropic foundations. Yet a recent article in BuzzFeed points out that, a year and a half later, Facebook still hasn't delivered most of the promised data to the waiting academics. Indeed, a follow-up article notes that the powerful foundations funding the whole project have now threatened to pull their funding if Facebook doesn't share the data by the end of September.
A key issue? Various attempts to regulate privacy.
A Facebook spokesperson acknowledged that some data originally promised will not be delivered, but said this is due to concerns about privacy, security, and the need to comply with regulations such as Europe’s GDPR privacy legislation. They said useful data has already been provided to researchers, and some teams are using it.
And this makes sense. Given how everyone (perhaps reasonably!) beat up on Facebook for sharing data in the past, you know that if it provides data for the Social Science One project, and somehow some of that data becomes public or is misused, Facebook will be vilified again (and, perhaps, held liable by various regulators). If it wants to provide transparency, it might have to risk violating privacy. But withholding the data harms our actual privacy too, because then the public doesn't get to learn about the actual tradeoffs.
And, indeed, if anything bad happens with this data, you know that many media outlets will quickly blame Facebook, even if they're the same people now yelling about Facebook failing to deliver this data to researchers.
That's not to say that Facebook is entirely blameless here. It seems to have left everyone -- funders and academics alike -- pretty much in the dark about what was happening. That, alone, seems like a real lack of transparency on an issue where the company should have been very transparent. And some may rightly argue that there could be ulterior motives as well. If Facebook realized that this level of transparency was going to make the company look bad (as it very well might!), perhaps that's contributing to the stalling, too.
Either way, it's a no-win situation. Facebook's failure to share the data after promising to is a problem, and the company deserves to be criticized for it. At the same time, the privacy concerns are quite real as well. And, honestly, the very same journalists who are mocking Facebook for failing to live up to this promise will likely be first in line to attack the company should the data sharing lead to some sort of leak of private info.
This is why I'm so concerned with almost every current legislative approach to dealing with privacy. Over and over again, these approaches set up conditions that are actually worse for real privacy and transparency, and that put the companies in much greater control. Setting up situations where transparency and privacy are put into conflict when they don't need to be is not going to lead to good solutions for anyone. It puts both transparency and privacy at risk.
Filed Under: data sharing, gdpr, privacy, trade-offs, transparency
Companies: facebook, social science one