Appeals Court Doubles Down On Dangerous Ruling: Says Website Can Be Blamed For Failing To Warn Of Rapists
from the bad-cases-make-bad-law dept
Back in late 2014, we wrote about a case where the somewhat horrifying details were likely leading to a bad result that would undermine Section 230 of the CDA (the most important law on the internet). Again, the details here are appalling. The case involves two guys who would use other people's accounts on a website called "Model Mayhem" to reach out to aspiring models, lure them to their location in South Florida, drug them, and then film themselves having sex with the drugged women, footage they would then offer as online porn. Yes, absolutely everything about this is horrifying and disgusting. But here's where the case went weird. A victim of this awful crime decided to sue Internet Brands, the large company that had purchased Model Mayhem, arguing that it knew about these creeps and had failed to warn users of the service. Internet Brands argued that under Section 230 it was not liable, and the appeals court said no. The case was then reheard en banc (with a large slate of 9th Circuit judges) and the court has now, once again, said that Section 230 does not apply.

This case has been a favorite of those looking to undermine Section 230, so those folks will be thrilled by the result, but everyone who supports an open internet should be worried. The rule here is basically that sites are protected from being held liable for the actions of their users... unless those users do something really horrible. Then things change. It's further important to note that the two sick creeps who pulled off this scam, Lavont Flanders and Emerson Callum, weren't actually members of the Model Mayhem site. They would just use the accounts of others to reach out to people, so the site had even less control.
To get around the plain language and caselaw history around Section 230, the court has to parse its words quite carefully. It starts out by noting that Internet Brands clearly qualifies for the safe harbors as an internet platform. However, it then bends over backwards to reinterpret a key part of CDA 230, which says you cannot treat such a platform "as a publisher or speaker" of information posted by users. Here, the court decides that a law requiring services to warn of potential danger does no such thing:
Jane Doe’s claim is different, however. She does not seek to hold Internet Brands liable as a “publisher or speaker” of content someone posted on the Model Mayhem website, or for Internet Brands’ failure to remove content posted on the website. Jane Doe herself posted her profile, but she does not seek to hold Internet Brands liable for its content. Nor does she allege that Flanders and Callum posted anything to the website. The Complaint alleges only that “JANE DOE was contacted by Lavont Flanders through MODELMAYHEM.COM using a fake identity.” Jane Doe does not claim to have been lured by any posting that Internet Brands failed to remove. Internet Brands is also not alleged to have learned of the predators’ activity from any monitoring of postings on the website, nor is its failure to monitor postings at issue.

In other words, because the law only compels a form of speech -- i.e., a duty to warn people about creeps on your service -- as opposed to a duty to suppress speech, Section 230 doesn't apply here. Bizarrely, the court points to the so-called "Good Samaritan" clause in CDA 230 (CDA 230(c)(2)), which notes that any action a site takes to moderate content cannot be used to create liability around other content on the site, as further proof of its position:
Instead, Jane Doe attempts to hold Internet Brands liable for failing to warn her about information it obtained from an outside source about how third parties targeted and lured victims through Model Mayhem. The duty to warn allegedly imposed by California law would not require Internet Brands to remove any user content or otherwise affect how it publishes or monitors such content.
Jane Doe’s failure to warn claim has nothing to do with Internet Brands’ efforts, or lack thereof, to edit, monitor, or remove user generated content. Plaintiff’s theory is that Internet Brands should be held liable, based on its knowledge of the rape scheme and its “special relationship” with users like Jane Doe, for failing to generate its own warning. Thus, liability would not discourage the core policy of section 230(c), “Good Samaritan” filtering of third party content.

The court also rejects the idea that this ruling might chill free speech by leading to greater monitoring and censorship, basically just tossing the concern aside as unlikely to be a big deal:
It may be true that imposing any tort liability on Internet Brands for its role as an interactive computer service could be said to have a “chilling effect” on the internet, if only because such liability would make operating an internet business marginally more expensive. But such a broad policy argument does not persuade us that the CDA should bar the failure to warn claim. We have already held that the CDA does not declare “a general immunity from liability deriving from third-party content.” Barnes, 570 F.3d at 1100. “[T]he Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.” Roommates.Com, 521 F.3d at 1164. Congress has not provided an all-purpose get-out-of-jail-free card for businesses that publish user content on the internet, though any claims might have a marginal chilling effect on internet publishing businesses. Moreover, the argument that our holding will have a chilling effect presupposes that Jane Doe has alleged a viable failure to warn claim under California law. That question is not before us and remains to be answered.

Some will, undoubtedly, argue that this limiting of Section 230 is a good thing, either because they already dislike 230, or because they believe that the behavior described above was so beyond the pale that it's fine to punish the platform for it. This is problematic. No one denies that the two individuals who committed these acts deserve to be in jail (for a long time). But blaming the platform they used for not posting a warning seems extreme and confuses how Section 230 is supposed to work. The key point is to accurately put liability on the parties who caused the harm. That wasn't the website, and it shouldn't be blamed.
You can now expect lots of cases citing this one, as litigants look for any way to get past Section 230's protections.
Filed Under: 9th circuit, rape, section 230, warnings
Companies: internet brands, model mayhem
Reader Comments
Warnings...
Wouldn't they be covered then? Of course, it would be a useless warning, but that seems to be the claim.
This isn't about free speech or safe harbors or anything like that; rather, it's about what the company knew was happening (as a general concept) and its failure to communicate those concerns to the models on its site.
I know they could not be specific as the guys were using hijacked accounts, but they could have communicated that such a scam was going on and that models should be extremely careful in this regard.
Knowing that this was going on (generally and not specifically) and doing nothing to inform their users is negligent IMHO. It's not a section 230 issue, it's general common sense.
Warning
As such, your continued use of this site will be considered acceptance of the consequences of your own actions.
So, horrible things neuter Section 230 now?
And what could be more horrible than copyright infringement?
I guess in a world where McDonald's coffee must carry a "caution: hot" label, websites must post warnings about misuse too.
So why didn't she sue her ISP for allowing the website to reach her device?
It is sad she was victimized, but it is only the two jerks who should be punished.
INTERNET WARNING!!!
Please keep this in mind in all dealings and interactions you may have with others while visiting. Thank you and look out behind you, he's got a knife!
"User" = The user committing the crime.
People who use crosswalks sometimes get run over by reckless drivers, and shouldn't cross the street if they're uninformed of the risk.
Also: Guns, motor vehicles and power tools.
Useless warnings
People have been warning of 'creeps on the internet' for 20+ years, but people will still fall victim to predators. ANY social network that seeks to connect people, from Model Mayhem to LinkedIn, will have to have a warning to satisfy a ruling like this. Every. Single. Website. People will click past them faster than an EULA, rendering the warnings USELESS.
When I've discovered members acting against the best interests of my community, I've banned those members' accounts along with their email addresses and IP addresses, and also added restrictions in my site's control panel that prevent those members from accessing my website in the future.
I'm sure Techdirt has similar policies where it concerns users harassing, intimidating or threatening other users who post on this site. If someone were to warn Mike Masnick that a registered member was behaving in a manner that is not permissible, and Techdirt and/or Mike Masnick ignored the warnings and allowed the harassment to continue, then the owners of this website could be held liable for that conduct, and Section 230 wouldn't protect this website.
Section 230 just protects websites from liability for content posted by their users. It doesn't protect websites from criminal activity conducted by their users, especially if they have been made aware of it and have knowledge of it.
This has more to do with criminal liability than it does with the CDA. The website owner offered the wrong argument, and that's why the court ruled against it.
It's no different than Techdirt getting warned that someone was using its website to conduct illegal activities. Techdirt couldn't then argue that it's protected under Section 230 of the CDA, because that would be the wrong argument. It would involve Techdirt's knowledge that its website was being used to engage in criminal conduct, and it would have no protection against that.
General knowledge vs Specific
Pretty much that exact same argument has been pushed by those looking to expand copyright law in the form of expansions to third-party liability, and the reason it fails there is the reason it doesn't work so well here.
There's a notable difference between general knowledge ('Someone is using the site for illegal actions') and specific knowledge ('This particular person is using the site for this particular illegal act'). One carries an obligation on the part of the site if they want to continue to be covered under the law; the other does not. I'll let you figure out which is which.
Given that I really doubt the individuals in this case were polite enough to notify the site, 'Hey, we're going to be using your site to facilitate the drugging and raping of people', the best the site could have done upon learning that someone had used it for that (how exactly they'd learn it was the site, as opposed to any number of other possible causes, is a bit of a problem) would be a general warning to be careful, and a reminder that people don't always match how they present themselves online, which wouldn't likely have accomplished much other than causing a witch hunt and general distrust of the service.
Blame the two individuals responsible for their actions all you want, they certainly deserve it, but expecting sites to know things they have no reasonable way of knowing, and punishing them for not responding 'properly' based on knowledge they don't have, doesn't make sense and helps no one.
Re: General knowledge vs Specific
I posted the hypothetical above about stealing a cell phone and using it for rape.
If that modeling agency had specific knowledge that that phone was going to be used for rape, then it would be liable if it didn't let someone know.
However, if it just knew that some of the phones that got stolen sometimes got used for rape, would it then have to alert every girl who might use the agency (which would have to be every girl on the planet, just in case), or tell every person it calls, "Hello, I'm from a big modeling agency; by the way, sometimes people steal our phones and use them to lure girls into rape. Would you like to work for us?"
That said, I'm sure there are some circumstances where the proper thing to do would have been to send out a notice, but that may not even work in this case. Looking at it from the point of view of the model who got the emails, she may never have even gone to the website. She may have assumed the email was legitimate and continued all correspondence through that email. In that case the website may not even be able to do anything.
On the other hand, if the website knew this was happening and got informed of an account that was hacked, it probably should have sent a warning email to all the people who recently got emails from that hacked account. I still don't think I would find liability in this case.
Just to finish this off: if the website knew that the account was hacked by the same person who had previously used it to rape someone, or saw that there were emails sent out that matched the ones used for rape, and did nothing, then there may be a case for liability.
Re:
Elections/emotions
They should use logic like a Vulcan but when it comes to subjects like rape, child porn, school shootings etc they have to consider how the irrational emotions of the public will perceive their rulings.
Being known as the judge who found an Internet rape website not guilty makes it kinda difficult to get reelected.
Re: Useless warnings
I'm sure it was far cheaper than campaigning against the proposition.
Re: General knowledge vs Specific
The complaint goes so far as to allege that the current owners of the website sued the prior owners for not telling them, before selling the site, that these particular rapists were serial users of the website.
I think the lawyer who drafted the plaintiff's complaint was a pretty smart guy; this was the best-pled case to argue that this was a "post a picture of the known dangerous person in your establishment" case rather than a "moderate comments" case.
The key word there is "knew". She said the site knew.
She would need to prove this, but if it is true (and, considering the court ruled in her favor, it likely was), if this or any other site became aware that its service was being used to commit multiple serious, dangerous felonies, why shouldn't it have a duty to warn at-risk users?!
Again, the argument isn't that it was a hypothetical possibility that something bad could happen, but that it was a known fact to the owners that accounts were being fraudulently accessed and used to lure victims into horrible situations... and the site couldn't be bothered to do so much as send out a single email to warn users of the risk.
And, frankly, if they did know, the reason they didn't say anything was most likely to avoid negative publicity and/or loss of revenue, due to people avoiding the site on the grounds that it was dangerous.
People who put their own personal profits above other people's safety deserve to be shot. But I'll settle for legally liable if it's probable that that was what they did.
Re: General knowledge vs Specific
In the meantime, it wouldn't take much investigation to determine that all the victims were lured to the place they were abducted from by someone on the same site (and to follow up with the people who allegedly contacted them and discover that they never did).
So then it would be known that the same predator(s) was accessing other people's accounts on the same site to lure victims.
Now, if the site was notified that this was happening and didn't warn users and someone else was victimized...yeah, I can see them being reasonably held liable for not bothering to send out a damn email.
Re: Re: General knowledge vs Specific
Should they have provided a general warning? Probably, but like I said, the information they had seems to have been so scarce that the warning would have been general to the point of being near useless.
Unless I'm missing something significant, they had no real information to go on beyond the fact that someone was using their service for something illegal, and if that's enough to bypass 230, then the 'protection' it affords sites is going to take a beating, as you can be sure others will argue that general knowledge, which was not previously considered sufficient to trigger liability, now is.
Re: Re: Re: General knowledge vs Specific
I'm arguing that the complaint was specific enough to invoke California's tort of "failing to warn of a harm you have some foreknowledge of," which is somewhat particular to California law.
I get the sense you're arguing that Cal. law is dumb. This may be true; I have often disagreed with the choices of the Cal. Supreme Court and legislature.
But the "dumbness" of the law doesn't matter for CDA 230; all that matters is whether the liability comes from being a "publisher" of material on the internet. And if it's a super-general "there might be crime" warning requirement, that doesn't really intersect with CDA 230.
Re: Re: Re: General knowledge vs Specific
Here, the complaint says the website owners knew that someone was trolling the website to commit actual violent crimes against the users, and in fact knew enough to provide specific details about the scheme (like, "be really careful about randos who want you to go to Florida"). Furthermore, this is under California law, which is the most liberal of jurisdictions on this particular issue, and unless every case brought in California gets Cal. law (not likely for NY residents, etc.), this is limited.
Re: "User" = The user committing the crime.
Similarly, the modelling world is fraught with danger from creeps and perverts luring naive women and girls who hope to develop a career into their evil clutches. I've heard loads of horror stories about it, which makes me wonder why the so-called respectable modelling agencies don't get held responsible for exposing the women and girls in their care to the possibility that some perve might take advantage of them.
Women need to be a bit cynical and take precautions when availing themselves of potential opportunities, i.e. bring a friend, etc. We shouldn't have to, but perves will be perves. This applies to anyone who goes to places they don't know to transact business with total strangers. It might be legit, but be aware that it might not be.
Re: INTERNET WARNING!!!
Should be emphasized. You don't see road operators being blamed for drugs going through their network. Same with transportation companies. You don't close markets because a girl was raped in their toilet. You only do that when the owners actually assisted in the crime.
But hey, it's on the Internet, so let's just throw logic and reason out the window, yes?
Re:
"Hey, site! Some rapists seem to be using your site to do rape-y stuff."
OR
"Hey, site! John Doe has used and still uses your site to do rape-y stuff, here's evidence."
OR
"hey, site! You are actively assisting rapists in their rape-y stuff or engaging in rape-y stuff yourself!"
Can you see the difference? Even the second case has nothing to do with the site. The authorities are the ones that must act. The site must assist but it's not their job to be the police.
Re: Re:
And so I'm wondering whether we're arguing about what the law is, or what the law should be.
If it's what the law should be, carry on -- I think California tort law is too plaintiff-friendly generally. But arguing about what the underlying law should be doesn't really help when talking about how the law as it currently is interfaces with CDA 230.
And here's why I think the CDA 230 defense doesn't quite work here.
Let's say that California had a law saying that all providers of a service, whether in the real world or electronic, paid or unpaid, etc., had to provide a puppy to every 1000th customer free of charge or face civil liability from the customer who wanted a puppy and didn't get one. Let's further say, to get all our facts in a row, that it was challenged on all relevant state and federal grounds by the United States Chamber of Commerce, and that both the California Supreme Court and the United States Supreme Court upheld the law, so that, no matter how dumb and counterproductive this puppy-providing law is, only legislative action could change it.
Someone signs on to a CDA-covered service in California as the 1000th user. Doesn't get a puppy. Immunity?
I would say not, because the duty exists independently of the fact of the service.
Re: Re: Re:
The law specifically exempts the provider from liability for what its users do, so while you have a point, it doesn't apply to this case. Go after the rapists, not the service.
and then they twist it against you
What do you mean, you didn't know there was a problem? Everyone knows that there are dangers on the Internet. Yes, that same Everyone that you need to provide the warning to.
Informed consent
Looking at the whole click-wrap phenomenon, and the contracts we're expected to sign for essentials like cell phones or internet service, it appears to me that our entire economy is based on forcing people into odious contracts before they know what they're doing and then exploiting their predicament once they're locked in.
Rather than facilitate our customers' lives, we instead seek to entrap them and bleed them like parasites.
(Regarding the modeling world, the creepiest stories I heard were the recruiters outside anorexia hospitals. Not all exploitation of these poor girls is sexual.)
Re: Re: Re: Re:
The critical words here are "treated as" -- if I say that you have an independent duty that the original party could never be held liable for, then I'm not treating you as that party.
I've seen this in other liability contexts, such as where a hospital that can't legally be sued for "medical malpractice" is sued for "negligent supervision" of the staffers that committed the malpractice.
Re:
One of the primary unwritten jobs of a judge (because it would be politically incorrect to write it) is to enforce political correctness.
Re:
Learn to read. These guys weren't members.