from the guys,-what? dept
Because today wasn't insane enough: just hours after Senator Josh Hawley released his ridiculous bill to flip Section 230 on its head -- turning it from a law that protects against frivolous lawsuits into one that would encourage them -- the Justice Department has released its own recommendations for Section 230 reform, which appear to have been written by people who haven't the first clue about how content moderation works online.
As Professor Mark Lemley and many others have pointed out, the "bipartisan" hatred for Section 230 exists because each party hates it for opposite reasons: Republicans are mad that sites are taking down too much content, while Democrats are mad that not enough content is being taken down. And they both (incorrectly) blame Section 230 for "enabling" that situation.
The DOJ's attempt to thread the needle on this is saying that sites should both take down more and less content at the same time. No, seriously. First, the DOJ says that platforms should be liable for lots of content on their platform if they don't magically stop it:
Incentivizing Online Platforms to Address Illicit Content
The first category of recommendations is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation claims. These reforms include a carve-out for bad actors who purposefully facilitate or solicit content that violates federal criminal law or are willfully blind to criminal content on their own services. Additionally, the department recommends a case-specific carve out where a platform has actual knowledge that content violated federal criminal law and does not act on it within a reasonable time, or where a platform was provided with a court judgment that the content is unlawful, and does not take appropriate action.
Then, on the flip side, the DOJ wants to make sure that platforms don't take down too much (read: "silence nazis who support President Trump") because [reasons].
Promoting Open Discourse and Greater Transparency
A second category of proposed reforms is intended to clarify the text and revive the original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users. One of these recommended reforms is to provide a statutory definition of “good faith” to clarify its original purpose. The new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.
The specifics of how they want these two things done are utter nonsense. In order to force more taking down of "bad" content, the DOJ literally says that "bad actors" no longer get 230.
Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
But what does that even mean? The "facilitates" bit is what's problematic -- it's the same language we've already seen in FOSTA, which has created massive collateral damage, including frivolous lawsuits against CRM and mailing list software providers. Does offering up some mailing list software that might be used for criminal activity make you a "bad" actor?
Then there's a further FOSTA-like expansion, carving out exceptions beyond sex trafficking to include "child abuse, terrorism, and cyber-stalking."
Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
Again, as we've seen with FOSTA, this kind of expansion has so far been used only for frivolous cases -- and that would definitely be the case here as well. We've already seen a ton of totally frivolous "let's blame Twitter and Facebook for terrorists" lawsuits, all of which are getting thrown out on 230 grounds. The DOJ's proposed legislation would bring those right back -- allowing the families of people killed by terrorists... to sue Twitter. Because, why not?
Then the DOJ says that the entire setup of 230 should be changed:
Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.
Immunity even when a platform has "knowledge" is, like, the whole point of 230. All this would do is turn 230 into a "notice-and-takedown" statute, because as soon as you received notice of any alleged issue, you'd risk liability if you kept the content up. We've already seen how that works in the DMCA context, where tons and tons of perfectly lawful speech is taken down just to avoid liability. This is a proposal for widespread censorship (which is ironic, given that the DOJ claims part of its goal is to encourage less silencing of voices).
On that front, the proposals are equally uninformed.
First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
It's important to note that nearly all of the litigation over Section 230 has focused on (c)(1), not (c)(2). But whiny conservatives who insist that sites are "censoring" them really hate the "otherwise objectionable" language, because it makes clear that sites have the right to moderate as they see fit. Wiping that out is bizarre, because it would suddenly strip Section 230 protection from things like spam filtering.
Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of "good faith" should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
This is similar to Hawley's proposal, and raises serious 1st Amendment questions. The government cannot and should not be determining whether editorial decisions are made in "good faith."
Then there's the funniest part of the DOJ proposal. Everyone who understands anything about how content moderation works has pointed out that all of these kinds of changes would create a true "moderator's dilemma": if moderating content creates liability, platforms either avoid looking at content entirely, or feel compelled to heavily moderate everything to avoid risk. The DOJ's answer is to just say "oh yeah, add in something saying there's no moderator's dilemma." Really.
Explicitly Overrule Stratton Oakmont to Avoid Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.
But none of that fixes all of the other parts of the moderator's dilemma created by this very proposal.
Separate from those two major prongs, the proposed reform hits on two other points, both of which are kind of odd. First, it would exempt civil enforcement actions brought by the federal government from Section 230 as well. Federal criminal law is already exempt, but exempting civil enforcement could allow, say, the FTC to go after websites for... something based on user content? It's not entirely clear how that would play out. Second, it says that federal antitrust law is not covered by 230, which... seems like an odd thing to call out. Presumably the focus here is on whether or not the government could bring an antitrust claim over moderation choices (for example, if Apple or Google blocked a competitor from their app stores). The specifics of such a proposal would matter, so it's not clear at first glance how big a deal that might be.
All in all, the DOJ proposal is, at the very least, craftier than Hawley's silly one. But its attempt to thread the needle of the "too much moderation/not enough moderation" debate would only make things much worse, while solving none of the actual problems online. It would, however, be a field day for lawsuits -- and maybe that's the point: punish Silicon Valley with a ton of lawsuits, just because we can.
Filed Under: bias, content moderation, doj, good faith, intermediary liability, otherwise objectionable, section 230, terrorism