Facebook's Threat To NYU Researchers Is A Mistake, But It's The Inevitable Follow On To Overreaction To Cambridge Analytica
from the research-v-abuse dept
Late on Friday news came out that Facebook had sent a cease and desist letter to researchers at NYU working on the Ad Observatory project. At issue was that the project had asked people to install a browser extension that would share data back to NYU regarding what ads they saw. Facebook -- responding to significant criticism -- has put forth an ad library that allows researchers to search through what ads are being shown on Facebook and who is behind them (which is good transparency!), but it does not show how those ads were targeted. This is what the researchers at NYU were trying to collect data on. And that is a reasonable research goal.
Facebook has argued that this is a breach of its terms of service -- though it does seem notable that the move comes right around the same time that these very same researchers discovered that Facebook's promise to properly label political ads isn't working so great (it's a tangent, but this is why promising to label political ads may be problematic in the first place: you're going to miss a bunch, especially on a platform this big).
The Knight First Amendment Institute at Columbia is representing the researchers and is condemning this move (and the researchers are refusing to comply with the cease-and-desist). Here's the Institute's litigation director, Alex Abdo:
“Frankly it’s shocking that Facebook is trying to suppress research into political disinformation in the lead-up to the election. There’s really no question more urgent right now than the question of how Facebook’s decisions are shaping and perhaps distorting political discourse. It would be terrible for democracy if Facebook is allowed to be the gatekeeper to journalism and research about Facebook.”
This is not the first time that the Knight Institute and Facebook have clashed over research. Two years ago, the Institute asked Facebook to create a special safe harbor for journalists doing research on the platform, to avoid having them face CFAA claims. Facebook declined.
I think this is a bad move by Facebook and a huge mistake -- a mistake on multiple levels: political, technological, legal, and just on general PR. However, I will give one sliver of a defense to Facebook: it likely feels somewhat forced into this because of the years-long overreaction to Cambridge Analytica. Cambridge Analytica did a bunch of bad stuff, and it started with an "academic" claiming (misleadingly) that he was merely doing "academic research" on Facebook users -- creating a Facebook app that would get users to basically cough up data about their entire social graph. The "academic" then handed the data over to the company (violating the rules), and that data became part of a political advertising machine. Eventually, Facebook was pretty significantly fined, in part because of Cambridge Analytica's ability to extract and share that data.
You may note some similarities: ostensibly academic researchers asking users to install something that collects data about Facebook users, and then gathering that data for "research." Given what happened with Cambridge Analytica, you might see why some folks within Facebook would be reasonably gun-shy about letting this happen again -- and that may explain the company's aggressive legal response. And, you and I can say "but NYU is a respected institution -- they're not going to abuse this data," but the researcher who collected the data for Cambridge Analytica was initially at Cambridge, also a respected university.
You might also argue that since Facebook has been dinged by the FTC, in part over this, and the consent decree doesn't really specify that there are separate rules for "legitimate" research, it has to do this. Indeed, that's kind of what Facebook itself is now arguing in response:
2/ We protect people's privacy by not only prohibiting unauthorized scraping in our terms, we have teams dedicated to finding and preventing it. And under our agreement with the FTC, we report violations like these as privacy incidents.
— Rob Leathern (@robleathern) October 25, 2020
However, I believe the scenario here is quite different for a few key reasons. The original data (eventually) used by Cambridge Analytica was created via a Facebook app, for which the developer had to sign an agreement with Facebook promising not to use the data in this way. And he used Facebook's own tools to extract the data -- that is, it was Facebook's API making this information available.
This is very, very different from what the NYU researchers are doing. They're asking users to install a browser extension (i.e., outside the Facebook ecosystem, without using the Facebook API or signing any kind of agreement with Facebook) in order to collect data from their own computers (again, not via the Facebook API or the Facebook servers). And while Facebook may not like it, it's problematic for the company to argue that it can step in and say that end users can't -- by their own choice -- install an app they want on their own computers to capture the data that is on their own computers.
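To make the distinction concrete, here is a purely illustrative sketch (not the actual Ad Observatory code, whose internals I'm not reproducing) of what a content-script-style approach looks like: the extension only reads the page that Facebook has already delivered to the user's own browser. The markup shape and function name below are invented for demonstration; there's no API call, no automated login, no reaching into Facebook's servers.

```javascript
// Hypothetical rendered page, standing in for what a user's browser has
// already received from Facebook. The class names are made up for this sketch.
const renderedPage = `
  <div class="post"><span class="author">A Friend</span>Vacation photos!</div>
  <div class="post"><span class="author">SomeAdvertiser</span>
    <span class="label">Sponsored</span>Buy our widgets</div>
`;

// A content script runs inside the user's own browser and can only see what
// that user was shown. Here we scan the already-rendered HTML for posts
// labeled "Sponsored" and pull out the advertiser name.
function collectSponsoredAuthors(html) {
  return html
    .split('<div class="post">')                 // rough per-post chunks
    .filter((chunk) => chunk.includes('Sponsored')) // keep only labeled ads
    .map((chunk) => {
      const m = chunk.match(/<span class="author">([^<]+)<\/span>/);
      return m ? m[1] : null;
    });
}

console.log(collectSponsoredAuthors(renderedPage)); // [ 'SomeAdvertiser' ]
```

The point of the sketch is simply that everything the extension touches is data already sitting in the user's browser -- the opposite of the Cambridge Analytica app, which pulled data out of Facebook's own API under a developer agreement.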
So, I think there's a pretty strong argument that me installing software on my own computer to collect data in my own browser is not "unauthorized access" in any sense of the term, no matter what Facebook or the FTC might think. In fact, I'd argue that me collecting data out of my own browser and willfully handing it over to someone I choose is, frankly, none of Facebook's business -- and suggesting that this is a form of "unauthorized access" (which has a specific meaning under the CFAA) is crazy.
Indeed, this gets back to the infamous (and dangerous) lawsuit between Facebook and Power, in which Facebook won a CFAA claim, in part because it had sent a cease-and-desist. I still think that case was wrongly decided; its ramifications are huge, and they are among the reasons why Facebook remains so dominant today. But that case, again, involved a third party offering a service to end users who willingly chose to use the software in order to access their Facebook data and interact with it in a different way (in Power's case, via a universal social media dashboard).
Unfortunately, I fear that the similarities to the Power case make this dangerous for the NYU researchers -- though they make a much more sympathetic defendant than a for-profit startup. And, the facts here are somewhat distinguishable from the Power case (the app here is simply collecting data in a user's browser, rather than doing an independent login and sucking out data).
We should be able to install whatever software we want -- even if Facebook doesn't like it -- to access data on our own computers. Facebook should have no say in that, and shouldn't be able to reach into our computers with a legal effort to block it. If the services were somehow damaging Facebook's technology, I could understand the issue. But this is just about reading the information on your own computer sent by Facebook and collecting and sharing that information, and it's done entirely outside the Facebook ecosystem. Facebook should not only drop the threat, but it should actually support this kind of important research.
Filed Under: academic research, ad observatory, consent decree, political advertising, privacy, research, threats
Companies: facebook, nyu