Is It A Privacy Violation For Companies To Make Inferences About What You Might Like?
from the better-service-is-a-privacy-violation? dept
As the debate over privacy issues, and whether or not the US needs a specific privacy law, has continued, it seems that some may be over-focusing on what they believe needs to be private. While I'm a big supporter of basic privacy rights, it becomes quite troubling when people seek to turn information that has no basic expectation of privacy into private information. We've discussed "obvious" cases, such as the right to be forgotten, which is under discussion in Europe. But what about other cases? Professor Paul Ohm, in an otherwise interesting interview about how difficult it is to have truly anonymous datasets, also suggests that we should outlaw making inferences from data:

"We have 100 years of regulating privacy by focusing on the information a particular person has. But real privacy harm will come not from the information they have but the inferences they can draw from the data they have. No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences in a really different way; it seems strange to say you can have all this data but you can't take this next step. But I think that's what the law has to do."

Why does the law "have" to do that? At some point, aren't we taking it too far? Certain things should reasonably be kept private, but if a company is taking data that it legitimately has access to, and is able to make inferences from it, is that so wrong? Google tries to improve search rankings based on the inferences it makes from how people search. Your spam filter improves based on the inferences it makes over your data. As Adam Thierer points out in response to Ohm, there are all sorts of reasons why companies should be allowed to make inferences from data:
- Example 1: Your local butcher may deduce from past purchases which types of meat you like and suggest new choices or cuts that are to your liking. This happened just this past weekend for me when a butcher at my local Balducci’s grocer recommended I try a terrific cut of steak after years of watching what else I bought there. And because I am such a regular shopper at Balducci’s, I also get special coupons and discounts offered to me all the time based on inferences drawn from past purchases. (I have a very similar experience at a local beer and wine store).
- Example 2: Your mobile phone provider may draw inferences from past usage patterns to offer you a more sensible text or data plan. This happened to me last year when Verizon Wireless cold-called me and set up a much better plan for me.
- Example 3: Your car or home insurance agent may use data about your past behavior to adjust premiums or offer better plans. When I was a teenage punk, my family's insurance company properly inferred that I was a bad risk to them (and others on the road!) because of multiple speeding tickets. I paid higher premiums as a result all the way through my 20s. But, as I aged and got fewer tickets, they inferred I was a better bet and gave me a lower premium.
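The article's spam-filter point is the clearest technical case of this kind of inference: the filter learns word statistics from mail you have already labeled and draws conclusions about new mail. A minimal sketch of that idea, with toy data and invented helper names (not any real filter's implementation), might look like:

```python
# Toy naive-Bayes-style spam inference: the filter "infers" spamminess
# from word frequencies in mail the user has already labeled.
from collections import Counter

spam = ["win money now", "free money offer"]
ham = ["meeting at noon", "lunch offer tomorrow"]

def word_probs(docs):
    # Relative frequency of each word across a set of labeled messages.
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

p_spam, p_ham = word_probs(spam), word_probs(ham)

def spam_score(message, floor=1e-3):
    # Likelihood ratio under the two word models; > 1 means spam-like.
    # Unseen words get a small floor probability instead of zero.
    score = 1.0
    for w in message.split():
        score *= p_spam.get(w, floor) / p_ham.get(w, floor)
    return score

print(spam_score("free money"))     # > 1: inferred spam-like
print(spam_score("lunch meeting"))  # < 1: inferred ham-like
```

The point of the sketch is that the "inference" here is nothing more exotic than counting: the same mechanism Ohm's proposal would have to somehow regulate.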
Filed Under: data, inferences, privacy
Reader Comments
Inference ~ Decisional Interference
The problem isn't that inference isn't useful or indeed desirable; rather, it's that i) not everyone wants it in every situation, or in the same situations as others, and ii) if the inference is made to benefit the inferrer rather than the inferee, then we enter all sorts of sticky icky territory.
[ link to this | view in thread ]
Inferences as such are not bad, but..
[ link to this | view in thread ]
Re: Inferences as such are not bad, but..
When someone looked up a fare, they were quoted something like £120, but they waited a day before going back to book it. By then the price had risen to £230 odd.
When they cleared their browser cookies, the price mysteriously dropped back to £120.
The airline were fiddling with pricing based on a consumer's expressed intent that wasn't followed through, i.e.: we can infer you are interested in this flight because you looked at it yesterday, so we've upped the price.
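Mechanically, this kind of trick only needs a cookie and a lookup table on the server. A hypothetical sketch (invented names and numbers loosely matching the £120 to £230 jump described above; not any airline's actual code):

```python
# Hypothetical repeat-visitor pricing: tag first-time visitors with a
# cookie, then quote a marked-up fare when the same cookie returns.
import uuid

BASE_FARE = 120
REPEAT_MARKUP = 1.9  # roughly the quoted 120 -> 230 jump

seen = set()  # visitor IDs we have already quoted

def quote(cookies):
    visitor = cookies.get("visitor_id")
    if visitor in seen:
        # Inferred intent: they came back, so they likely want this flight.
        return cookies, round(BASE_FARE * REPEAT_MARKUP)
    visitor = visitor or str(uuid.uuid4())
    seen.add(visitor)
    return {"visitor_id": visitor}, BASE_FARE

cookies, price1 = quote({})       # first visit
cookies, price2 = quote(cookies)  # next day, same cookies: marked up
_, price3 = quote({})             # cleared cookies: back to base fare
print(price1, price2, price3)
```

Clearing cookies defeats it precisely because the only "identity" the server has is the cookie itself, which is why the commenter saw the price drop back.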
[ link to this | view in thread ]
Inferences or Context
Example 1: what if your past purchases at your local butcher were then made available to insurance companies for the purpose of pricing your health insurance? Perhaps they don't get just yours, but everyone's in your neighborhood, and decide that because, on average, people in your 'hood are obese, you will be charged a significantly higher health insurance premium. While you had no problem with this info being used to give you discounts and coupons on future purchases, you might be much less comfortable with the same information being used to figure out your health insurance premiums.
Example 2: Unbeknownst to you, one of your very close friends since college has been traveling to Syria quite frequently, but since he lives across the country, you're only aware that he has been traveling on business. Turns out his business has to do with buying Syrian antiquities, but the people he has been interacting with are somewhat suspect. Because the two of you communicate every month or so, and you also communicate somewhat regularly with some other mutual friends, inferences start getting drawn between all of you regarding your potential involvement with terrorism. Getting these inferences wrong when it comes to recommending you a different calling plan may be no big deal, but getting them wrong when it comes to whether you might be connected to a terrorist is a very different story.
Example 3: The insurance companies have decided that correlating your activities online with your propensity for risk is a better indicator than whether you get speeding tickets. In some cases it's not just risk: your reading preferences and the frequency of your shopping indicate a sedentary lifestyle, and hence you present risks behind the wheel as well as raising issues around longevity.
In other words, information that may be harmless in one context, when viewed in another, can become very uncomfortable for you. It's very possible that you might not participate in various programs if you were aware of this. Since the value proposition that some of these data collectors gain is the externality value of your information, you should at least have some control, or be aware of it, so you can make an educated decision about sharing this information about yourself. Opt-out is an unethical concept on its face because it implies that you should have to take an action to prevent your information from being used in ways that you are not aware of.
Anyway, sorry for the long response, but the privacy issue is that, in fact, no information is inherently good or bad, private or non-private (i.e., lots of people I don't know know where I live); it's more that in different contexts it can take on different values. Something that could be fine in one context may be very detrimental in another, so you should have the right to decide the context (or use right) under which you're willing to share information about yourself.
[ link to this | view in thread ]
Privacy
Cold-calling is illegal here in Germany, as is sharing of any personal information without express consent (and it's also illegal to tie such consent to benefits). Even in the case of customer loyalty cards, there are harsh limits on what information can be shared between participating companies who all support one particular card (think Walgreens, Safeway, Foot Locker, Quiznos, Chipotle and your local private electric supplier all sharing one SuperCustomerCard).
The Founding Fathers in the US never dreamed that personal privacy would need to be protected or they would have added that to the Bill of Rights. The laws here in Germany were written long after such a need was clear.
[ link to this | view in thread ]
And this is something that I believe is already covered by some legislation. In the UK, the Data Protection Act regulates the use of personally identifiable information. If I remember correctly, a record qualifies as personally identifiable information if it uniquely identifies someone, so if they had my address and I lived in a house with 5 others it would not count, but if I lived on my own it would. Presumably Netflix rental data would fall under this too if it were possible to identify me uniquely. The DPA does not stop companies from offering a better service to their data subjects (or even ripping them off :) based on the information they have; it principally regulates distribution to others and mandates advertising/database opt-outs.
[ link to this | view in thread ]
Yes and no
There are regulations like that in the UK and they sort of work, as do "Telephone Preference" and "Mailing Preference" lists to prevent junk marketing, but they are muddy at best, poorly advertised, and usually bent by lobbying for exceptions.
[ link to this | view in thread ]
Re: Re: Inferences as such are not bad, but..
Imagine at the grocery whilst choosing tomatoes, you put one down only to pick it up again later but this time the price is substantially higher.
[ link to this | view in thread ]
Re: Re: Re: Inferences as such are not bad, but..
I know a browser can't be a 100% tie to a specific person, but typically it is.
[ link to this | view in thread ]
Re: Inferences or Context
I have the feeling that it's going to take a good amount more of my thinking power than I can bring to bear at the moment, but I might have to disagree on the idea of opt-out being unethical on the basis of its requiring action from the individual. I can't help but feel uncomfortable when we draw the line in a place where making an observation requires prior restraint.
It may be because of my experience in social science research, but I believe that whenever observations about my behavior are made, and may serve to make a direct impact on my life, I would like to be aware of this and informed as to the intent. Then, much as I can do when my phone asks me if I want to share my location with a software developer, I can choose to opt out. Much as with institutionalized research, it's informed consent that I'm seeking, but observations that do not have a direct impact on my immediate existence are not my biggest concern.
[ link to this | view in thread ]
Re: Re: Inferences as such are not bad, but..
How is that any different than a used car salesman sizing up his customer before negotiating a price? Is he not allowed to take into account you leaving and returning the next day?
You may call the used car salesman unscrupulous, but this should not be criminal behavior. Competition should take care of these kinds of problems. If you don't like the salesman, go to the dealership down the road. There are ways to report bad business practices (the BBB in the US), and with the online world allowing people to complain pretty loudly, this seems like something we should keep regulators out of before they make it much, much worse.
[ link to this | view in thread ]
Re: Inferences or Context
(2) Meanwhile, Thierer's examples all involve first-party use of data – the collection and use of which would tend to align with an ordinary consumer's expectations.
Seems like (1) is more problematic than (2) and is a valid distinction for any privacy law or regulation to make.
[ link to this | view in thread ]
Re: Re: Inferences or Context
An opt-out scenario would be if the phone automatically assumed it was allowed, but if you chose to go into some settings menu somewhere you could choose to turn that off.
[ link to this | view in thread ]
http://open.salon.com/blog/virginia888/2010/12/02/is_topix_giving_out_users_personal_data_to_the_nsa
A short list of privacy violations conducted by Topix:
Violation of encryption, openly accessing users' IP addresses, supporting hackers, and sharing users' personal data with the NSA without their permission or knowledge.
[ link to this | view in thread ]
And he leaves out the most important one. The thing being talked about is this proposal:
Regulating inferences. What are "inferences", exactly? Basically, inferences are thinking. So they're talking about regulating thinking.
That's right. Thoughtcrime. Because figuring out how to better filter spam from your customers' webmail is doubleplusungood.
[ link to this | view in thread ]