from the that's-not-how-privacy-works dept
We've talked for a while now about how we're really bad at regulating privacy because most people don't really understand privacy. People tend to think of it as "a thing." But, it's not. It's a set of trade-offs that can change depending on who is involved, what the context is, and the terms of the trade-off. The example we've used many times is that of leaving your home to buy groceries. Doing so entails giving up some amount of privacy. Someone could see you. They might even see what's in your shopping cart. But for most people, this trade-off is worth it. The "loss" of privacy here is minimal. The "damage" of someone seeing that you're buying broccoli is not that big of a deal. But, for some people, the trade-off may be quite different. If you're a movie star, for example, going into a grocery store may represent a huge burden and an impact on your privacy. Paparazzi may follow you around. Other customers may bug you. What you buy may be analyzed or mocked or worse. Other factors come into play as well, such as what it is that you're buying. Vegetables might not be that big a deal. Other items may be a lot more revealing.
That may be a fairly simple view of things, but it applies in lots of cases. Lots of decisions we make involve basic trade-offs regarding privacy. And part of the calculation that we all implicitly make involves a fairly straightforward cost-benefit analysis. Is the value we get from doing x greater than the potential privacy violation? And, of course, this is often made more difficult because the "cost" involves somewhat opaque probabilities. Beyond the potential "cost" of such "private" information being revealed, what is the probability that such a revelation will lead to greater costs? For example, someone going into a drug store to buy condoms may represent a slight loss in privacy -- but if that person is doing so to have an affair, then the real "cost" is weighted by the probability that the person's partner becomes aware of such a purchase.
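The implicit calculation described above can be sketched as a simple expected-cost comparison. This is purely illustrative (the article doesn't propose a formal model), and all of the numbers below are hypothetical placeholders:

```python
# Illustrative sketch only: modeling a privacy trade-off as a comparison
# between a benefit and an expected cost (cost of disclosure times the
# probability of disclosure). All values are hypothetical.

def worth_it(benefit: float, disclosure_cost: float, p_disclosure: float) -> bool:
    """Return True if the benefit outweighs the expected privacy cost."""
    expected_cost = disclosure_cost * p_disclosure
    return benefit > expected_cost

# Buying groceries: meaningful benefit, trivial cost of being seen.
print(worth_it(benefit=10.0, disclosure_cost=1.0, p_disclosure=0.5))        # True

# The same kind of errand where the downstream cost of disclosure is
# enormous (e.g., a partner finding out): the calculation flips.
print(worth_it(benefit=10.0, disclosure_cost=10_000.0, p_disclosure=0.05))  # False
```

The point of the sketch is that the same action can land on either side of the line depending on who is involved and what the stakes of disclosure are -- exactly the context-dependence the article describes.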
The issue is that thinking of privacy as just "a thing" that must be "protected" often fails to take into account the various nuances of the trade-offs. It fails to account for the fact that different people in the same situations may value the different sides of the trade-off differently and may have entirely different beliefs about what is and what is not an acceptable trade-off.
Building on that, the real problem we have today concerning "privacy" is that we often don't know enough about both sides of the trade-off equation. The concern or unease that some have over internet companies sucking up our data is that it's not entirely clear (1) what the ultimate benefit of all that collection is, and (2) what the costs are -- or what the probability is that the costs will be extreme. There's not much transparency and not much ability to have an accurate sense of the actual risks, and, therefore, we're often making the trade-off decision somewhat blind. There are lots of people who -- via their own expressed preferences in terms of what they actually do -- seem to think that letting Facebook suck up all their surfing data is a worthwhile trade for staying in touch with family and friends. Some argue that they're ignorant for doing so -- and maybe that's true. But part of the problem is that the costs are amorphous, at best, while the benefits to many seem worth it.
Still, the lack of transparency about what data is being collected, and how it's being used, combined with the lack of control for end users, creates a totally reasonable level of nervousness for some. The issue is that the cost might be super high -- but we don't know, and we don't really have any way to do anything about it if that turns out to be the case. That's where most of the fear about social media's impact on privacy comes from.
Given that there's so much interest these days in "regulating" privacy, the models that people use to understand privacy can have a really big impact. Using the wrong model will lead to really bad regulations. And one of the worst ideas is unfortunately super popular: the idea of turning "privacy" into a quasi-intellectual property. Specifically trying to set it up as if it's a "property" right with a price attached to it. Tragically, this model has a bunch of proponents when it comes to regulations. The NY Times recently had an excellent opinion piece by Sarah Jeong explaining why setting up privacy as a property right is a terrible idea. That NY Times opinion piece came out just a week or so after a similar (and even more thorough) article at Brookings by Cam Kerry and John Morris similarly explaining why data ownership is "the wrong approach" to protecting privacy.
As both pieces note, there are lots of regulatory attempts to put a property right and price on private info:
Some policymakers are taking such thinking to heart. Senator John Kennedy (R-LA) introduced a three-page bill, the "Own Your Own Data Act of 2019," which declares that "each individual owns and has an exclusive property right in the data that individual generates on the internet" and requires that social media companies obtain licenses to use this data. Senators Mark Warner (D-VA) and Josh Hawley (R-MO) are filing legislation to require Facebook, Google, and other large collectors of data to disclose the value of personal data they collect, although the bill would not require payments. In California, Governor Gavin Newsom wants to pursue a "data dividend" designed to "share in the wealth that is created from [people's] data."
But as the Kerry/Morris piece notes, treating private data in this manner runs into all sorts of problems -- including conflicting with the First Amendment:
The trouble is, it's not your data; it's not their data either. Treating data like it is property fails to recognize either the value that varieties of personal information serve or the abiding interest that individuals have in their personal information even if they choose to "sell" it. Data is not a commodity. It is information. Any system of information rights -- whether patents, copyrights, and other intellectual property, or privacy rights -- presents some tension with strong interest in the free flow of information that is reflected by the First Amendment. Our personal information is in demand precisely because it has value to others and to society across a myriad of uses.
Treating personal information as property to be licensed or sold may induce people to trade away their privacy rights for very little value while injecting enormous friction into the free flow of information. The better way to strengthen privacy is to ensure that individual privacy interests are respected as personal information flows to desirable uses, not to reduce personal data to a commodity.
Jeong's piece, similarly, highlights just a few of the problems of treating privacy as a property right:
Legally vesting ownership in data isn't a new idea. It's often been kicked around as a way to strengthen privacy. But the entire analogy of owning data, like owning a house or a car, falls apart with a little scrutiny.
A property right is alienable -- once you sell your house, it's gone. But the most fundamental human rights are inalienable, often because the rights become meaningless once they are alienable. What's the point of life and liberty if you can sell them?
Jeong also makes an important point: that your "private" data may actually be someone else's private data as well, raising some significant complications:
Your location data can give away the whereabouts of your spouse. Your health records give away information about your biological children. If you sell your genetic privacy, are your parents entitled to a percentage?
Do you need to get joint approval with your spouse to sell location info? Do you need permission of your not-yet-born descendants to sell your genetic info?
The Kerry/Morris piece also highlights how useful it is, in general, for certain information to be shared, sans pricing, for society as a whole. Putting a price on all data would inject unnecessary friction into fairly basic exchanges that no one should have an issue with (in my discussion above, those would be cases where the "cost" is very low, and the benefit much higher). Switching everything to a commodity/property model would come with pretty significant costs, however. And many of them would do little to actually protect "privacy."
Basing privacy protection on property systems, on the other hand, would reduce privacy to a commodity, double down on a transactional model based on consumer choice, and be enormously complicated to implement. The current notice-and-choice model is failing because it is effectively impossible for users to understand either how their data will be used or the accompanying privacy risks, especially in the constant flow of online engagement in today's connected world. The result is that people click past privacy notices through to the information or service they want.
Moreover, many of these consumers already agree to provide personal information in exchange for the perception of a benefit. It is hard to imagine people will burrow deeper into privacy disclosures or pause at clicking through to get at communications or transactions simply because they are offered what may amount to a few pennies. It is far from clear that in a market for data, the ordinary user would come out on top -- either in relation to economic benefits or privacy harms. On the contrary, by licensing the use of their information in exchange for monetary consideration, they may be worse off than under the current notice-and-choice regime.
Or as Jeong notes, such a system might simply encourage people -- especially the most vulnerable -- to effectively "sell" their privacy.
But the American Civil Liberties Union called the bill a Trojan horse. The Electronic Frontier Foundation said it would "incentivize people to give up their fundamental right to privacy and exacerbate inequality by specifically encouraging vulnerable lower-income people to pour more personal information into an industry that exploits and discriminates against them."
And for what? As Kerry/Morris note: the "value" of any individual's private data is almost certainly quite low (which actually is reflective of the legitimately low "risk" associated with that data being revealed):
Indeed, the uncertainties of valuating any one individual's data suggest that individuals will receive little payment. Estimates vary but
The Financial Times has a calculator that one of us (Kerry) ran for his profile. The default value is $0.007, but as a well-to-do professional who travels a lot, the value of the Kerry data was estimated as $1.78. If pricing is set by service providers, then the resulting system is likely to end up being very similar to the current "take it or leave it" outcomes that are common under notice and choice. If pricing is set by consumers or through negotiation, the complexity of the service-user interactions would be even greater. And this new complexity would likely slow users' access to information and services that they want -- and simply turn "click fatigue" into "negotiation fatigue."
Sure, governments could set the price, but does anyone think they will do so in a manner that makes any sense? The Kerry/Morris piece also notes that the government setting a price for our data likely would run afoul of the First Amendment based on the Sorrell v. IMS Health ruling, which struck down a Vermont law barring data-mining and pharma firms from buying up prescription data.
And that's not even getting into the question of multi-party data -- such as e-commerce transactions:
Under an information-as-property regime, would both the purchaser and the retailer have property rights to information about the transaction? And in such a property regime, couldn't the retailer simply make as a condition of sale that the purchaser must grant a license to the retailer to use the information for specified uses? And wouldn't that simply lead to another form of the tyranny of fine print in which the purchaser who wants the convenience of an online purchase would be forced to cede rights to the retailer? It is unclear whether the information-as-property regime would in fact improve the current state of privacy.
There are plenty of legitimate reasons (and a few not-so-legitimate ones) to be concerned about the state of privacy today. But we're going to create a lot more problems if the "solution" is to turn your data into a commodity under some sort of property rights regime.
And the fact is we've been down this dumb road before. Hell, we've talked about it pretty much since the beginning of Techdirt with the silly idea of treating infinitely copyable "content" as "property" under copyright law. That's created a huge mess, especially for free speech, with the idea that expression can be owned and limited in some form or another. Indeed, many of the copyright debates over the past few decades concerning the internet are really representations of the ongoing struggle over how a "content-as-property" regime under copyright law can possibly co-exist with the internet -- a global network for communicating and sharing content.
We won't do anyone any favors (other than maybe some lawyers) by adding in another category of made-up "property" around "privacy." And, worst of all, it will do nothing to actually improve privacy outcomes for most people.
Filed Under: commodity, control, intellectual property, privacy, property right, trade offs