from the so-much-damage dept
We've been talking quite a bit about SESTA -- the Stop Enabling Sex Traffickers Act -- and why it's so problematic. With hearings today, I wanted to dig more closely into the text and lay out the specific problems. There are a large number of them, so let's go through them one by one.
Undermines the incentives to moderate content and to work with law enforcement:
This remains the biggest issue for me: the bill is clearly counterproductive to its own stated goals. When people talk about CDA 230, they often (mistakenly) talk only about CDA 230(c)(1) -- the part that says sites are immune from liability for content posted by their users. This leads many people to (again, mistakenly) claim that the only thing CDA 230 is good for is absolving platforms of doing any moderation at all. But that ignores the equally important part of the same section: CDA 230(c)(2), which explicitly encourages platforms to moderate "objectionable" content by making clear that good faith efforts to moderate and police that content have no impact on the protection from liability in part (1).
In other words, as currently written, CDA 230 encourages you to moderate your platform and take down bad content, because doing so creates no increase in legal liability. Indeed, it's difficult to find a single internet platform that does zero moderation. Most platforms do quite a bit of it, because otherwise they would be overrun by spam. And, if they want people to actually use their platforms, nearly every site (even one like 4chan) tends to do significant moderation in response to public pressure to keep certain content off. Yet, under SESTA, you now face liability if you are shown to have any "knowledge" of violations of federal sex trafficking laws. But what do they mean by "knowledge"? It's not at all clear, as the bill just says "knowledge." So now if a site, for example, discovers someone using its platform for trafficking and alerts authorities, that's evidence of "knowledge" and can be used against it in both criminal charges and civil lawsuits.
In other words, somewhat incredibly, the incentive here is for platforms to stop looking for any illegal activity on their sites, out of fear of creating knowledge which would make them liable. How does that help? Indeed, platforms will be incentivized not to do any moderation at all, and that will create a mess on many sites.
The vague "knowledge" standard will be abused:
This is sort of a corollary to the first point. The problematic language in the bill is this:
The term ‘participation in a venture’ means knowing conduct by an individual or entity, by any means, that assists, supports, or facilitates a violation...
But what do they mean by "knowing conduct"? Who the hell knows. This is going to get litigated in court, probably for decades. We have similar problems with the DMCA's safe harbors, where legal battles have gone on for many years over whether the standard is "general knowledge" vs. "specific knowledge" and what is meant by "red flag knowledge." And in SESTA the language is less clear still. When people have attempted to pin down SESTA's sponsors on what the standard for knowledge is, they've received wildly varying answers, which just means there is no standard, and we'll be watching lawsuits for probably decades before it's established what "knowledge" means here. For companies, again, the best way to deal with this is to not bother doing any moderation of your platform whatsoever, so you can avoid any claim of knowledge. That doesn't help at all.
The even vaguer "facilitation" language will be massively abused:
In that same definition of "participation in a venture," what may be even more problematic than the vague "knowledge" standard is the even vaguer language sweeping in any conduct, "by any means, that assists, supports, or facilitates a violation" of sex trafficking laws. All three of those terms have potential problems. "Assisting" sounds like it requires proactive action -- but how do you define it here? Is correcting typos "assisting"? Is having an automated system suggest keywords "assisting"? Is autocompleting searches "assisting"? Lots of sites do things like that, and none of it gives them any actual knowledge of legal violations. How about "supporting"? Again, perfectly benign activities can be seen as "supporting" criminal behavior without the platform being aware of it. Maybe certain features are used in a way that can be seen as supporting. We've pointed out that Airbnb could be a target under SESTA if someone uses an Airbnb for sex trafficking. Would the fact that Airbnb handles payments and reviews be seen as "supporting"?
But the broadest of all is "facilitating." That covers basically anything. It flat out says "blame the tool for how it's used." Almost any online service can be used to "facilitate" sex trafficking in the hands of sex traffickers. I already discussed Airbnb above, but what if someone uses Dropbox to host sex trafficking flyers? What if a sex trafficker creates advertisements in Google Docs? What if a pimp creates a blog on WordPress? What if they use Skype for phone calls? What if they use Stripe or Square for payments? All of those things could be facilitation under this law, and the companies would have no actual knowledge of what's going on, yet would face not only criminal liability but also civil suits from victims, rather than those suits targeting the actual traffickers.
This is the core problem: this bill targets the tools rather than the law breakers.
Punching a hole in CDA 230 will be abused:
This is one that seems to confuse people who don't spend much time looking at intermediary liability protections -- how they work and how they get abused. It's completely normal for people in that situation not to recognize how widely intermediary liability is used to stifle perfectly legitimate speech and activity. But we know damn well from looking at the DMCA, in particular, that when you set up a process by which a platform might face liability, it's regularly abused by people angry about content online to demand censorship. Indeed, we've seen people regularly admit that if they see content they dislike, even if there's no legitimate copyright claim, they'll "DMCA it" to get it taken down.
Here, the potential problems are much, much worse. At least in the DMCA context, you have relatively limited damages (compared to SESTA, at least -- the monetary damages in the DMCA can add up quickly, but at least they're only monetary and capped at $150,000 per work infringed). With SESTA, the criminal penalties are obviously much more stringent, which will create massive incentives for platforms to cave immediately rather than face the risk of criminal prosecution. Similarly, the civil penalties have no upper bound under the law -- meaning the potential monetary penalty may be significantly higher.
The chilling effects of criminal charges:
Combine all of this and you create massive chilling effects for any online platform -- big or small. I already explained why the new incentives will be not to help law enforcement or moderate content at all, for fear of creating "knowledge," but it's even worse than that. For many companies, the massive potential liability from SESTA will mean not building any kind of platform in the first place. A comment feature on a website becomes a huge liability. Any service that might conceivably be used by anyone to "facilitate" sex trafficking creates the potential for serious criminal and civil liability. Many new platforms would likely never be created; of those that already exist, some may shutter, and others may greatly curtail what they allow.
State Attorneys General have a terrible track record on these issues:
In response to the previous point, some may point out (correctly!) that existing law already exempts federal criminal charges from CDA 230's protections -- meaning the DOJ can go after platforms if it finds they're actively participating in sex trafficking. But, for as much as we rag on the DOJ, it tends not to be in the business of going after platforms just for the headlines. State AGs, on the other hand, have a fairly long history of doing exactly that -- including directly at the behest of companies looking to strangle competitors.
Back in 2010 we wrote about a fairly stunning and eye-opening account by Topix CEO Chris Tolles of what happened when a group of State Attorneys General decided that Topix was behaving badly. Despite the fact that they had no legal basis for doing so, they ran Topix through the wringer, because it got them good headlines. Here's just a snippet:
The call with these guys was actually pretty cordial. We walked them through how we ran feedback at Topix, that how in January 2010, we posted 3.6M comments, had our Artificial Intelligence systems remove 390k worth before they were ever even put up, and how we had over 28k feedback emails and 210k user flags, resulting in over 45k posts being removed from the system. When we went through the various issues with them, we ended up coming to what I thought was a set of offers to resolve the issues at hand. The folks on the phone indicated that these were good steps, and that they would circle back with their respective Attorneys’ General and get back to us.
No good deed goes unpunished
So, after opening the kimono and giving these guys a whole lot of info on how we ran things, how big we were and that we dedicated 20% of our staff on these issues, what was the response. (You could probably see this one coming.)
That’s right. Another press release. This time from 23 states’ Attorney’s General.
This pile-on took much of what we had told them, and turned it against us. We had mentioned that we required three separate people to flag something before we would take action (mainly to prevent individuals from easily spiking things that they didn’t like). That was called out as a particular sin to be cleansed from our site. They also asked us to drop the priority review program in its entirety, drop the time it takes us to review posts from 7 days to 3 and “immediately revamp our AI technology to block more violative posts” amongst other things.
And, remember, this was done when the AGs had no legal leverage against Topix. Imagine what they would do if they could hold the threat of criminal and civil penalties over the company.
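As a side note for the technically inclined: the three-flag rule Tolles mentions is a fairly standard abuse-resistance mechanism. Here's a minimal sketch of the idea -- purely hypothetical, not Topix's actual system; the function and names are invented for illustration -- showing why requiring three distinct flaggers keeps one angry user from spiking content:

```python
# Hypothetical sketch of a flag-threshold rule like the one Tolles
# describes -- NOT Topix's actual code. A post is only queued for
# human review once three *distinct* users have flagged it, so a
# single angry user can't "spike" content they merely dislike.

FLAG_THRESHOLD = 3  # the "three separate people" rule from the quote


def record_flag(flags: dict, post_id: str, user_id: str) -> bool:
    """Record a user's flag; return True once the post needs human review."""
    flaggers = flags.setdefault(post_id, set())
    flaggers.add(user_id)  # sets dedupe, so repeat flags from one user don't count
    return len(flaggers) >= FLAG_THRESHOLD


flags = {}
assert record_flag(flags, "post-42", "alice") is False  # 1 distinct flagger
assert record_flag(flags, "post-42", "alice") is False  # still 1 (duplicate ignored)
assert record_flag(flags, "post-42", "bob") is False    # 2
assert record_flag(flags, "post-42", "carol") is True   # 3 -> queue for review
```

Which is exactly the kind of sensible anti-abuse design the AGs called out "as a particular sin to be cleansed."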
Similarly, remember how leaked Sony emails revealed that the MPAA deliberately set up Mississippi Attorney General Jim Hood to attack Google (with the letter Hood sent actually having been written by the MPAA's outside lawyers). If you don't recall, Hood claimed that, because he was able to find illegal stuff via Google, he could go on a total fishing expedition into how it handled much of its business.
In the Sony leak, it was revealed that the MPAA viewed a NY Times article about the value of lobbying state AGs as a sort of playbook to cultivate "anti-Google" Attorneys General, who it could then use to target and take down companies the MPAA didn't like (remember, this was what the MPAA referred to, unsubtly, as "Project Goliath").
Do we really want to empower that same group of AGs with the ability to drag down lots of other platforms with crazy fishing expeditions, just because some angry Hollywood (or other) companies say so?
Opening up civil lawsuits will be abused over and over again:
One of the big problems with SESTA is that it will open internet companies up to getting sued a lot. We already see a bunch of cases every year where people upset about certain content online target lawsuits at those sites just out of anger. The lawsuits tend to get thrown out, thanks to CDA 230, but lawyers keep trying creative ways to get around CDA 230, adding in all sorts of frivolous claims. For example, after the decision in the Roommates case -- in which Roommates.com got dinged for activity not protected by CDA 230 (specifically, its own actions that violated fair housing laws) -- lots of people now cite Roommates as an example of why their own argument isn't killed off by CDA 230.
In other words, if you give private litigants a small loophole to get around CDA 230, they jump in and try to expand it to cover everything. So if SESTA becomes law, you can expect lots of lawsuits in which people go to great lengths to argue that just about any claim is not protected by 230 because of supposed sex trafficking occurring via the site.
Small companies will be hurt most of all:
There's a weird talking point making the rounds that the only one really resisting SESTA is Google. We've discussed a few times why this is wrong, but let's face it: of all the companies out there, Google is probably best positioned (along with Facebook) to weather any of this. Both Google and Facebook are used to massive moderation on their platforms. Both have built very expensive tools for moderating and filtering content, and both have built strong relationships with politicians and law enforcement. That's not true for just about everyone else. That means SESTA would do the most damage to smaller companies and startups, who simply cannot invest the resources to deal with constant monitoring and/or threats arising from how people use their platforms.
Given all of these reasons, it's immensely troubling that SESTA's supporters keep running around insisting that the bill is narrowly tailored and won't really impact many sites at all. It suggests either willful blindness to how the internet actually works (and how people abuse these systems for censorship) or a fairly scary level of ignorance, with little interest in getting educated.
Filed Under: cda 230, congress, grandstanding, intermediary liability, knowledge, moderation, sesta