Congress Now Creating A Moral Panic Around Deepfakes In Order To Change CDA 230
from the oh-come-on dept
Everyone's got it out for Section 230 of the Communications Decency Act these days. And pretty much any excuse will do. The latest is that last week, Rep. Adam Schiff held a hearing on "deep fakes," with part of the focus on why we should "amend" (read: rip to shreds) Section 230 of the Communications Decency Act to "deal with" deep fakes. You can watch the whole hearing here, if you're into that kind of punishment:
One of the speakers was law professor Danielle Citron, who has been a longtime supporter of amending CDA 230 (though, at the very least, she has been a lot more careful and thoughtful about her advocacy on that than many others who speak out against 230). And she recommended changing CDA 230 to deal with deep fakes by requiring platforms to take responsibility through "reasonable" policies:
Maryland Carey School of Law professor Danielle Keats Citron responded suggesting that Congress force platforms to judiciously moderate content in any changes to 230 in order to receive those immunities. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Citron said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”
I have a lot of different concerns about this. First off, while everyone is out there fear mongering about the harm that deep fakes could do, it's not yet clear that the public can't figure out ways to adapt to this. Yes, you can paint lots of stories about how a deepfake could impact things, and I do think there's value in thinking through how that may play out in various situations (such as elections). But to assume that deepfakes will absolutely fool people, and that therefore we need to paternalistically "protect" the public from possibly being fooled, seems a bit premature. That could change over time. But we haven't yet seen any evidence of any significant long term effect from deepfakes, so maybe we shouldn't be changing a fundamental internet law without actual evidence of the need.
Second, defining "reasonable moderation practices" in law seems like a very, very dangerous idea. "Reasonable" to whom? And how? And how can Congress demand reasonable rules for moderating content without violating the 1st Amendment? I don't see how any proposed solution could possibly survive constitutional scrutiny.
Finally, and most importantly, Citron is just wrong to claim that the current structure "leaves platforms with no incentive to address destructive deepfake content." As I said, I find Citron to be more thoughtful and reasonable than many critics of Section 230, but this statement is just bonkers. It's clearly false, given that YouTube has taken down deepfakes and Facebook has pulled them from algorithmic promotion and put warning flags on them. It certainly looks like the current system has provided at least some incentive for those platforms to "address destructive deepfake content." You can disagree with how these platforms have chosen to do things, or you can claim that there need to be different incentives, but to say there are no incentives is simply laughable. There are plenty of incentives: there is public pressure (which has been fairly effective). There is the desire of the platforms not to piss off their users. And there is the desire of the platforms not to invite angry rants (and future regulations) from Congress.
And, importantly, section (c)(2) of CDA 230 is there to encourage this kind of experimentation by the platforms. They are given the benefit of not facing liability for moderation choices they make, which is actually a very strong incentive for those platforms to experiment and figure out what works best for them and their particular community.
Any effort to change the law to demand "reasonable moderation practices" is going to come up against difficult situations and create something of a mess. If we pass a law that forces Facebook to remove deepfakes, does that mean Facebook, Twitter and others would have to remove the various examples of deepfakes that are more comedic than election-impacting? For example, you may have recently seen the viral deepfake of Bill Hader on Conan O'Brien doing his Arnold Schwarzenegger impression, in which he subtly morphs into Schwarzenegger. Would a "reasonable" moderation policy forbid such a thing:
Also, different kinds of sites have wholly different moderation approaches. How do you write a rule that applies equally to Facebook, Twitter, YouTube... and Wikipedia, Reddit and Dropbox? You can argue that the first three are similar enough, but the latter three work in wholly different ways. Crafting a single solution that works for all of them is asking for trouble -- or will wipe away entire approaches to running online communities.
I can completely empathize with the worries about deep fakes and what they could mean long term. But let's not use this moral panic, without any evidence of actual harm, as an excuse to completely change the internet -- especially on the basis of silly claims falsely stating that platforms have no incentives to handle the problematic side of this technology already.
Filed Under: adam schiff, content moderation, content removals, danielle citron, deep fakes, incentives, reasonable policies
Companies: facebook, youtube