Senator Mark Warner Lays Out Ideas For Regulating Internet Platforms
from the be-careful dept
For over a year now, Senator Mark Warner has been among the most vocal in saying that Congress may need to regulate internet platforms. So it came as little surprise on Monday when he released a draft white paper listing out "potential policy proposals for [the] regulation of social media and technology firms." Unlike much of what comes out of Congress, it does appear that whoever put together this paper spent a fair bit of time thinking through a wide variety of ideas, recognizing that every option has potential consequences -- both positive and negative. That is, while there's a lot in the paper I don't agree with, it is (mostly) free of the hysterical moral panic that has characterized debates such as FOSTA/SESTA.
The paper lays out three major issues that it hopes to deal with:
- Disinformation that undermines trust in our institutions, democracy, free press, and markets.
- Consumer protection in the digital age
- Antitrust issues around large platforms and the impact they may have on competition and innovation.
On a related note, we should also think carefully about how much of a problem each of the three items listed above actually is. There are good reasons to be concerned about all three, and there are clear examples of each being a problem. But just how big those problems are, and whether they will remain so, is important to examine. Mike Godwin has been writing an important series for us over the last few months (part 1, part 2 and part 3) making a compelling case that much of what everyone is focused on may be the result of a bit of moral panic: an overreaction to a problem that is smaller than it appears.
We'll likely analyze the various policy proposals in the white paper in more detail over time, but for now let's focus on the big one that everyone is talking about: the idea of opening up Section 230 again.
Make platforms liable for state-law torts (defamation, false light, public disclosure of private facts) for failure to take down deep fake or other manipulated audio/video content -- Due to Section 230 of the Communications Decency Act, internet intermediaries like social media platforms are immunized from state tort and criminal liability. However, the rise of technology like DeepFakes -- sophisticated image and audio tools that can generate fake audio or video files falsely depicting someone saying or doing something -- is poised to usher in an unprecedented wave of false and defamatory content, with state law-based torts (dignitary torts) potentially offering the only effective redress to victims. Dignitary torts such as defamation, invasion of privacy, false light, and public disclosure of private facts represent key mechanisms for victims to enjoin and deter sharing of this kind of content.
Currently the onus is on victims to exhaustively search for, and report, this content to platforms who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future. Many victims describe a "whack-a-mole" situation. Even if a victim has successfully secured a judgment against the user who created the offending content, the content in question in many cases will be re-uploaded by other users. In economic terms, platforms represent "least-cost avoiders" of these harms; they are in the best place to identify and prevent this kind of content from being propagated on their platforms. Thus, a revision to Section 230 could provide the ability for users who have successfully proved that sharing of particular content by another user constituted a dignitary tort to give notice of this judgement to a platform; with this notice, platforms would be liable in instances where they did not prevent the content in question from being re-uploaded in the future, a process made possible by existing perceptual hashing technology (e.g. the technology they use to identify and automatically take down child pornography). Any effort on this front would need to address the challenge of distinguishing true DeepFakes aimed at spreading disinformation from satire or other legitimate forms of entertainment and parody.
So this seems very carefully worded and structured. Specifically, it would appear to require, first, a judicial ruling on the legality of the content itself; platforms would then be required to keep that content from being re-uploaded, or face liability if it were. The good part of this proposal is the requirement that the content go through full legal adjudication before a takedown would actually happen.
That said, there are some serious concerns about this. First of all, as we've documented many times here on Techdirt, there have been many, many examples of sketchy lawsuits filed solely to get a ruling on the books in order to take down perfectly legitimate content. If you don't remember the details, there were a few different variants on this, but the standard one was to file a John Doe lawsuit, then (almost immediately) claim to have identified the "John Doe," who admits to everything and agrees to a "settlement" admitting defamation. The "plaintiff" then sends this to the platforms as "proof" that the content should be taken down. If Warner's proposal goes through as is, you could see that trick becoming a lot more common, along with a series of similar ones. Separately, it could increase the number of sketchy and problematic defamation lawsuits filed in the hopes of getting content deleted.
One would hope that if Warner did push down this road, he would only do so in combination with a very strong federal anti-SLAPP law that would help deal with the inevitable flood of questionable defamation lawsuits that would come with it.
To his credit, Warner's white paper acknowledges at least some of the concerns that would come with this proposal:
Reforms to Section 230 are bound to elicit vigorous opposition, including from digital liberties groups and online technology providers. Opponents of revisions to Section 230 have claimed that the threat of liability will encourage online service providers to err on the side of content takedown, even in non-meritorious instances. Attempting to distinguish between true disinformation and legitimate satire could prove difficult. However, the requirement that plaintiffs successfully obtain court judgements that the content in question constitutes a dignitary tort -- which provides significantly more process than something like the Digital Millennium Copyright Act (DMCA) notice and takedown regime for copyright-infringing works -- may limit the potential for frivolous or adversarial reporting. Further, courts already must make distinctions between satire and defamation/libel.
This is all true, but it does not take into account how those bogus defamation cases may come into play. It also fails to recognize that much of this content is extremely context specific. The paper points to hashing technology like that used in spotting child pornography. But such content involves strict liability -- there are no circumstances under which it is considered legal. Broader speech is not like that. As the paper itself acknowledges in discussing how to determine whether a "deepfake" is satire, much of this is likely to be context specific. And so, even if certain content represents a tort in one context, it might not in others. Yet under this hashing proposal, the content would be barred in all contexts.
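To make that context-blindness concrete, here is a minimal sketch of how a perceptual hash comparison works. This is purely illustrative (written in Python with the Pillow imaging library, not the PhotoDNA-style systems the paper alludes to), but it shows the core point: the match is computed from pixels alone, so nothing in it can distinguish a defamatory upload from a news report or a parody that quotes the same clip.

```python
# Illustrative difference-hash (dHash) comparison, assuming Python 3 and
# the Pillow library. Real platform matching systems are more robust,
# but the principle is the same: compare pixel structure, nothing else.
from PIL import Image


def dhash(image_path, hash_size=8):
    """Shrink to (hash_size+1) x hash_size grayscale, then record whether
    each pixel is brighter than its right-hand neighbor."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return bits


def hamming_distance(a, b):
    """Number of differing bits; a small distance means 'same image'."""
    return sum(x != y for x, y in zip(a, b))


# Hypothetical blocklist check: flag an upload if its hash is close to a
# hash already adjudicated as tortious. Note what is *not* an input here:
# the caption, the surrounding article, or the uploader's intent.
# if hamming_distance(dhash("adjudicated.png"), dhash("new_upload.png")) <= 5:
#     flag_for_removal()
```

In other words, once a hash lands on a blocklist, every future upload that matches it is treated identically, whether it is the original defamatory post or a commentator holding it up as an example of a fake.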
As a separate concern, this might also make it that much harder to study content like deepfakes in ways that might prove useful in recognizing and identifying faked content.
Again, this paper is not presented in the hysterical manner found in other attempts to regulate internet platforms, but it also does very little beyond a perfunctory "digital liberties groups might not like it" to explore the potential harms, risks and downsides of this kind of approach. One hopes that if Warner and others continue down this regulatory path, much more caution will go into the process.
Filed Under: antitrust, cda 230, consumer protection, deepfakes, disinformation, free speech, intermediary liability, intermediary liability protection, mark warner, regulating social media, section 230
Companies: facebook, google, twitter