from the pleading-matters dept
On the surface Herrick v. Grindr seems the same sort of case as Daniel v. Armslist (which we wrote about last week): it's a case at an appeals court that addresses the applicability of Section 230, meaning there is a reasonable possibility of it having a long-lasting effect on platforms once it gets decided. It's also a case full of ugly facts with a sympathetic plaintiff, and, at least nominally, it involves the same sort of claim against a platform – in Armslist the claim was for "negligent design," whereas here the claim is for "defective design." In both cases the general theory is that because people were able to use the platform to do bad things, the platforms themselves should be legally liable for the resulting harm.
Of course, if this theory were correct, what platform could exist? People use Internet platforms in bad ways all the time, and they were doing so back in the days of CompuServe and Prodigy. It was recognition of this tendency that caused Congress to pass Section 230 in the first place: if platforms needed to answer for the terrible things their users used them for, then they could never afford to remain available for all the good things people used them for too. Congress felt that losing the beneficial potential of the Internet was too high a price to pay for guarding against the possibility of bad actors, and so Section 230 was drafted to make sure we wouldn't have to pay it. Bad actors could still be pursued for their bad acts, but not the platforms they had exploited to commit them.
In this case the bad act in question was the creation and management of a false Grindr profile for Herrick by an ex-boyfriend bitter about their breakup. It led to countless strangers, often with aggressive expectations for sex, showing up at Herrick's home and work. There is no question that the ex-boyfriend's behavior was terrible, frightening, inexcusable, and, if not already illegal under New York law, deserving to be. But that is true only to the extent that such a law would punish just the culprit (in this case the ex-boyfriend who created the fake profile).
The main problem with this case is that Herrick is seeking to have New York law extend to also punish the platform, which had not created the problematic content. But the plain language of Section 230 – both its immunity provision and its pre-emption provision – prevents platforms from being held liable for content created by others. Herrick argues that Grindr should be held liable anyway "because it knowingly facilitated criminal and tortious conduct." But that's not the standard. The standard is whether the platform created the wrongful content, or, at minimum, in the wake of Roommates, had a hand in imbuing it with its wrongful quality. But here there is no evidence to suggest that Grindr had anything to do with the creation of the fake profile. It was the awful ex-boyfriend who was doing all the malfeasant content supplying.
But here's where the two cases part company, and where the Grindr one gets especially messy. The good news for Section 230 is that this messiness may make it easy for the Second Circuit to resolve in favor of Grindr and leave Section 230 unscathed. The bad news is that if the Second Circuit decides the other way, it will be very messy indeed.
One of the core questions in most lawsuits involving Section 230 is whether the platform itself is an interactive computer service provider, and thus protected by Section 230 against lawsuits seeking to hold it liable for content created by others, or whether it is instead a non-immune "information content provider." Part of the problem with this case is that when Herrick filed the lawsuit originally, the pleading acknowledged that Grindr was an interactive computer service provider. Later, when he was fighting the motion to dismiss, he changed his mind, but that's a problem. You don't usually get to change your mind about these critical elements of your complaint without repleading it. (Which is one of the reasons Herrick is appealing; the dismissal was "with prejudice," meaning he wouldn't easily be able to re-plead at this point, and he wants another chance to amend his complaint.)
But that's only one of the pleading problems. A plaintiff also has to put forth a plausible theory of liability at the outset, in large part so that the defendant can be on notice of what it is being accused of and can defend itself. It's not unusual for theories of liability to evolve as litigation proceeds, but if the theory changes too much too late in the process it raises significant due process problems for the defendant. Which seems to be happening here. The story Herrick told the Second Circuit about why he thought Grindr should be liable for the harm he suffered differed in significant ways from the story he had told at the outset, or to the trial court. This change is one reason why the case is particularly messy, and it may be messier still if the Second Circuit allows it to continue anyway.
At issue is what Herrick told the Second Circuit about his harassment. According to him now, strange men were showing up in his life not just constantly but everywhere he went. Yet according to the record at the trial court, they showed up in only two places: his home and his work. Which is not to say, of course, that it's ok for these people to harass him at either place (or any place). The issue is that this "everywhere" v. "only in two places" distinction significantly affects his theory of the case and therefore the merits of his appeal.
Because at oral argument he pressed the argument that Grindr's geolocation service is what removed the case from Section 230's purview. According to him there must be some bug in Grindr that allows these strange men to know where he is and seek him out, and so, he thinks, Grindr should be liable for not fixing this defect.
However there are a number of problems with this theory. First, it is highly implausible. For it to be true, Grindr would need not only to still be tracking him (even as an ex-user) but also, for some unknown reason, to somehow unite the location data of the actual Herrick with the fake Herrick profile. Herrick tried to argue that the first part was likely, citing, for instance, Google's location services continuing to track users after they thought tracking had stopped. But even if it were true that Grindr had continued to track him, it would be entirely random for Grindr to associate that data with any other account he didn't control. From Grindr's point of view, his real account and the fake account would look like two completely separate users. Sure, Grindr could have a bug that mis-associated location data, but there's no reason it would pick these two particular accounts to merge data between. It would be just as arbitrary as if it mixed up his data with any other Grindr account.
Furthermore, there is zero evidence to suggest that the fake account used the geolocation data of anyone at all, other than perhaps the ex-boyfriend, who was operating the account. There certainly is no evidence to suggest that it was somehow using Herrick's actual data, and that's why the factual distinction about where he was harassed matters. If it truly were everywhere, then he might have a point about the app having a vulnerability, and if so then perhaps his defective design claim might start to be colorable. But the only information he's alleged is that he was harassed in those two places, home and work, and no one needed any geolocation data to find him at either of them. The ex-boyfriend knew of these places and could easily send would-be suitors to them directly via private messages. In other words, the reason they turned up was because of content supplied by a third party (the ex-boyfriend). This fact puts the case clearly in Section 230-land and makes it one where someone is trying to hold a platform liable for harm caused by how someone else communicated through its system.
Finally, an additional problem with this theory is that even if it were correct, and even if there were some evidence that the geolocation was allowing strangers to harass him everywhere, it needed to have come up before the appeal. The purpose of the appeal is to review whether the first court made a mistake. Belatedly supplying more information for the benefit of the appeals court will not help it decide whether the first court made a mistake, because that court could only do its best with the information available to it. It isn't a mistake not to have had the benefit of more, and to add more at this late date would be incredibly unfair to the defendant. As it was, pressing this new "he was tracked everywhere" theory at oral argument left Grindr's counsel in the unenviable and risky position of having to field extremely hypothetical questions from the judges about their client's potential liability based on facts nowhere in the underlying record. It was uncomfortable to listen to the judges push Grindr's lawyers on the question of whether some hypothetical software bug that they had never contemplated, and that likely doesn't exist, might undermine their Section 230 protection. To their credit they fielded the hypo on the fly pretty well by reminding the judges that Section 230 covers how platforms are used by other people, regardless of whether they are used appropriately or exploitatively. But given the way this case was pleaded from the outset, this hypo should never have come up, especially not at this late juncture.
So one of the overarching concerns about this case is that because this theory did not coalesce until it had reached the appeals court, the central legal questions it raised went under-litigated, thus inviting poor results if the Second Circuit now gives it any credence. But that's not the only concern. The case may still be an ominous harbinger, for even if Herrick loses the appeal, it may not be the last time we see this "software vulnerability makes you lose Section 230 protection" theory put forth. It foreshadows how we may see future privacy litigation wrapped up as defective design cases, and, worse, it may encourage plaintiffs seeking to do an end-run around Section 230 to try to package their claims up as privacy cases.
Also, what Herrick asked for in his appeal was a remand back to the trial court to explore all these under-developed evidentiary issues. Was there a software bug? Was Grindr continuing to track former subscribers in a way they didn't know about? Was there a privacy leak, where the fake profile was somehow united with the geolocation of a real person? Herrick believes the case shouldn't have been dismissed without discovery on these issues, but early dismissal is a big reason why Section 230 provides valuable protection to a platform. It is extremely expensive to go through the discovery stage – in fact, it's often the most expensive stage – and if platforms had to endure it just so plaintiffs could explore paranoid fantasies with no evidence to give them even a veneer of plausibility, it would be extremely destructive to the online ecosystem.
On the upside, however, after listening to the oral argument I'm relatively confident that, unlike the Wisconsin Court of Appeals in the Armslist case, these judges will be able to respect prior precedent upholding Section 230, even in awful cases like this one, and resist reaching an emotional conclusion that strays from it. Also, given the issues with the pleading – which the judges flagged at oral argument – there may be enough procedural problems with Herrick's case to make it easy for the court to dispense with it without damaging Section 230 jurisprudence in the Second Circuit in the process. But if these predictions turn out to be wrong, and if it turns out that these procedural issues pose no obstacle to the court issuing the remand Herrick seeks, then we might have to contend with something really ugly on the books at a federal appellate circuit level.
Filed Under: cda 230, intermediary liability, section 230
Companies: grindr