from the we're-bad-at-this dept
It's been an interesting year for those of us who support strong privacy for the public -- in part because we've seen lots of movement on attempts to regulate privacy. However, you may have noticed that we've also regularly criticized almost every attempt to regulate privacy. We've been highly critical of the GDPR, Europe's big privacy regulation that is impacting basically every website globally. And we were even more critical of California's disaster of a privacy bill, a rush job with tremendous problems. And now that the news has come out that the White House is working on a domestic version of the GDPR (perhaps in an attempt to preempt California and other states from making a mess of things), we should, perhaps, clarify why nearly all attempts at regulating privacy are likely to be complete disasters.
And I know that many people who advocate for privacy are supportive of at least some aspects of these bills. And I completely understand where they're coming from. So let's set some parameters: privacy is incredibly important -- it's often undervalued by the services that collect other people's private information, and a failure to protect privacy can have massive, life-changing consequences. But I believe that almost everyone is confused about what privacy really is. We've discussed this a few times before, but I think it's important to recognize that the more we fail to properly understand privacy, the more likely it is that every attempt to regulate it will fail badly, often creating serious consequences that do a lot more harm than good. That doesn't mean we shouldn't protect privacy, however, and towards the end of this post, I'll suggest a better path forward on that front.
The basic issue is this: privacy is not a "thing," it's a trade-off. Yet nearly all attempts to regulate privacy treat it as a thing -- a thing that needs "protecting." As such, you automatically focus on "how do we protect this thing," which generally means prohibitions on sharing information or data, or requirements to delete that data. But if we view privacy that way, we lose out on all sorts of situations where someone could benefit greatly from sharing that data without the downside risks. When I say privacy is a trade-off, I mean it in the following way: almost everything we do can involve giving up some amount of private information -- but we often choose to do so because the trade-off is worthwhile.
For example, leaving my house to go grocery shopping involves a trade-off in privacy. Someone could see me and recognize me, and could figure out certain pieces of information about me: what I shop for, what I eat, perhaps generally where I live, and the fact that I'm not home at that moment. They might also be able to spot what kind of car I drive, or divine other information about me from the things they see me buying. That's all "private" information that is in some way exposed. Now, most of us consider this trade-off worth it. First of all, the potential downside risk is extremely low. We doubt most people would recognize or care who we are, and we doubt that anyone who did would glean information that could be used abusively. Also, the benefits are pretty high (we get the stuff we need). There are scenarios under which that might change (for example, this is why many top celebrities don't do their own grocery shopping -- the privacy "cost" to them is much higher, and thus the trade-off equation is different).
When we move into the digital world, the issue many people have is that this trade-off equation becomes a lot more of a gray area, and that makes people uncomfortable. In the grocery example above, for most people it's an easy call: the benefits outweigh the costs by a very large measure. When we talk about online services, what makes some people nervous is that this isn't as clear. And it's unclear for a number of important reasons: the risk of abuse is not obvious, so we don't have as good an understanding of the potential costs as we do with something like grocery shopping. Similarly, many of the costs are "hidden," in that online services aren't completely upfront about what data they're collecting on us and what they're doing with it. The benefits still seem to be there -- otherwise why would people be using these services so much? -- but the trade-off equation includes a lot of guesses and uncertainty.
On top of that, we've definitely seen a few cases of information abuse or misuse -- though most of that has been around data breaches, identity fraud or credit card fraud. But the potential downsides seem much more serious than the harms we've actually seen so far.
And thus, when we're dealing with services online, we're left in a situation that has many people reasonably nervous. And it's not because our privacy is lost or being abused, but because we don't have a good sense of the risk of such abuse, and thus we can't accurately gauge the cost side of the equation (we may have similar difficulty measuring the benefits side, but that's perhaps less of a big deal here).
When we regulate privacy as a "thing," rather than a "trade-off," however, we end up cutting off many possibilities where people would actually be perfectly happy to trade some information for some larger benefit. This leads to things like rules and restrictions on what kind of information companies can even ask to use in offering services. Even worse, it often leads to rules that give the companies holding our data even greater control over that data, by including "responsibilities" that actually serve to increase the power of those companies over their users.
But there are better ways of dealing with all of this, starting with recognizing the idea that privacy is a trade-off. If that's the case, there should be two key concepts for any competent approach to privacy: transparency and user control. As discussed above, many of the problems today (and nearly all of the concerns) are over the lack of transparency. This impacts both the cost and the benefit sides of the equation. If we don't understand what data is being collected or what it's being used for (or how it's being stored), along with what actual benefits we're getting, it's much, much more difficult to make an informed decision about whether or not the trade-off is worth it. And the issue of control is connected to that, in that the more control end users have over their own data, the more they're able to make informed choices in weighing the costs and benefits.
Now, much of the problem here comes from the companies themselves, who for a variety of reasons have decided it's better to offer less transparency and less user control. Perhaps it's because they feel that if people knew the actual costs and benefits, they'd decide it's not worth it. Perhaps it's because it's difficult to provide the transparency and control necessary to make informed decisions. Perhaps they're afraid that transparency and control would create unnecessary friction, leading to poor choices. It's likely some combination of these, along with other factors.
However, when many of the regulatory efforts focus on the "requirements" for companies using data, they often end up harming users' ability to actually control their own data. Yes, a regulation may create opportunities for users to delete all of their data held by a service, but "delete/not delete" is a very crude level of control. A more ideal world might be one where users have something like a "data bank" that they control, and where they know what data is in there. And, if they want to use a service, that service could explain what data it needs, why it needs that data, and for how long it would like access. Then the user can make a more informed choice, better weighing the trade-offs, and decide whether to allow access to the data for that purpose, or to offer some alternative arrangement.
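To make that idea a bit more concrete, here's a minimal sketch of what such a "data bank" interaction could look like. This is purely illustrative -- the names, fields, and grant logic are my own assumptions, not a description of any real system or regulation -- but it shows the basic shape: the service discloses what it wants, why, and for how long, and nothing is shared until the user says yes.

    # Hypothetical sketch of a user-controlled "data bank" -- not a real system.
    from __future__ import annotations
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class AccessRequest:
        """What a service would have to disclose up front."""
        service: str          # who is asking
        fields: list[str]     # which data it wants
        purpose: str          # why it needs that data
        duration: timedelta   # how long it wants access

    @dataclass
    class DataBank:
        """User-controlled store: the user can see the data and every grant."""
        data: dict[str, str]
        grants: list[tuple[AccessRequest, datetime]] = field(default_factory=list)

        def review(self, request: AccessRequest, approve: bool) -> dict[str, str] | None:
            """The user weighs the trade-off; nothing moves without approval."""
            if not approve:
                return None
            expires = datetime.now() + request.duration
            self.grants.append((request, expires))
            # Only the requested fields are shared, and only until expiry.
            return {k: v for k, v in self.data.items() if k in request.fields}

    # Example: a (hypothetical) grocery-delivery app asks for an address for 30 days.
    bank = DataBank(data={"address": "123 Main St", "diet": "vegetarian"})
    req = AccessRequest("grocery-app", ["address"], "deliver orders", timedelta(days=30))
    shared = bank.review(req, approve=True)   # the user decides the trade-off is worth it
    print(shared)                             # {'address': '123 Main St'}

The particular implementation doesn't matter; what matters is that the disclosure (which fields, what purpose, how long) and the decision both sit with the user -- which is exactly the transparency and control most current proposals skip.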
Unfortunately, very few of these "privacy regulations" move us towards such a world of greater transparency and end-user control. Instead, they mostly focus on putting onerous, often extraneous and unnecessary, requirements on services to better "protect" data. And, again, all that does is increase the power of the existing data holders, limit competition, and limit the ability of new services to appear that do provide more transparency and control.
So every time we see new stories about privacy regulations, think about whether they'd lead to a world in which end users have more control and more transparency, or whether they really seem designed to just put up enough roadblocks that only the largest companies can handle them... roadblocks that will likely lock our data even more tightly within those giant entities.
Filed Under: control, cost-benefit, gdpr, privacy, regulating privacy, tradeoffs, transparency