Beware Of Facebook CEOs Bearing Section 230 Reform Proposals
from the good-for-facebook,-not-good-for-the-world dept
As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google "got wrong" in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much -- and either way they will demand to know "why" individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We've already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show rests on a collective refusal to recognize one simple fact: it's literally impossible to have a perfectly moderated platform at the scale of humankind.
That said, one thing to note about these hearings is that each time, Facebook's CEO Mark Zuckerberg inches closer to pushing Facebook's vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to "punish" Facebook). It won't. Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.
And, for tomorrow's hearing, he's driving the knife deeper into 230's back, outlining a plan to cut away at the law even further. The relevant bit from his testimony is here:
One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.
Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing—sometimes for contradictory reasons—that the law is doing more harm than good.
Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.
We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.
Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.
In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.
As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the biggest concern most people have -- what to do when illegal, or even "lawful but awful," activity shows up on websites and those websites have "no incentive" to do anything about it (or, worse, an incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it nods at the idea that the largest companies can handle some of this burden while other companies cannot -- and it makes clear that it does not support anything that would weaken encryption.
But that doesn't mean it's a good idea. In some ways, this is the flip side of the discussion Mark Zuckerberg had many years ago about how "open" Facebook should be to third-party apps built on the back of Facebook's social graph. In a now-infamous email, Mark told someone that one particular plan "may be good for the world, but it's not good for us." I'd argue that the 230 reform plan Zuckerberg lays out here "may be good for Facebook, but not good for the world."
But understanding why requires some thought, nuance, and a few predictions about how this would actually play out.
First, let's go back to the simple question of what problem we're actually trying to solve. Based on the framing of the panel -- and of Zuckerberg's testimony -- it certainly sounds like there's a huge problem of companies having no incentive to clean up the garbage on the internet. We've certainly heard many people claim that, but it's just not true -- unless you believe the only incentives in the world are the laws of the land you're in, which has never been the case. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place but because (1) it's good for business, and (2) very few people want to be running cesspools of hate and garbage.
If you don't clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that's always been a market opportunity -- and lots of new sites and services pop up to create "friendlier" places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating on their moderation practices to deal with the fact that the world keeps changing.
Indeed, if you read through the rest of Zuckerberg's testimony, it's one example after another of things the company has already done to clean up messes on the platform. Each one describes pouring huge resources, in terms of money, technology, and people, into combating some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs Facebook has created to deal with those kinds of things as "industry-leading." But those programs are incredibly costly. He talks about how Facebook now has 35,000 people working in "safety and security," more than triple the 10,000 people in that role five years ago.
So these proposals to create a "best practices" framework, judged by some third party, in which you only keep your 230 protections if you meet those best practices, won't change anything for Facebook. Facebook will simply argue that its practices are the best practices -- that's effectively what Zuckerberg is saying in this testimony. But it will harm everyone else who can't match them. Most companies aren't going to be able to do this, for example:
Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.
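To get a sense of just how much engineering sits behind a phrase like "media-matching technology," here is a minimal, purely illustrative sketch of near-duplicate image matching using a simple perceptual difference hash (dHash). The function names and the threshold are made up for the example; Facebook's actual systems (it has open-sourced the PDQ photo-hashing algorithm, for instance) are vastly more sophisticated, but the basic idea of hashing content and comparing it against a database of already-removed material is the same.

# Hypothetical sketch of near-duplicate image matching via a difference hash.
# Not Facebook's actual system -- just an illustration of the general technique.
from PIL import Image  # pip install Pillow

HASH_SIZE = 8  # yields a 64-bit hash

def dhash(image_path):
    # Shrink to a tiny grayscale image and compare each pixel to its right neighbor.
    img = Image.open(image_path).convert("L").resize((HASH_SIZE + 1, HASH_SIZE))
    pixels = list(img.getdata())
    bits = 0
    for row in range(HASH_SIZE):
        for col in range(HASH_SIZE):
            left = pixels[row * (HASH_SIZE + 1) + col]
            right = pixels[row * (HASH_SIZE + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a, b):
    # Number of bits that differ between two hashes.
    return bin(a ^ b).count("1")

def matches_removed_content(upload_path, removed_hashes, threshold=10):
    # Flag an upload whose hash is within `threshold` bits of any known-removed hash.
    h = dhash(upload_path)
    return any(hamming_distance(h, bad) <= threshold for bad in removed_hashes)

Even this toy version hints at the real costs: a constantly updated hash database, careful tuning of the distance threshold, and separate pipelines for video, audio, and text. That is exactly the kind of infrastructure most smaller platforms cannot build.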
And, yes, he talks about making those rules "proportionate to platform size," but there's a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? Where do you set the thresholds, and how do you measure them? As we wrote recently in describing our "test suite" of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn't make any sense to apply a single set of rules or best practices to each one, because each one is very, very different. How do you apply the same "best practices" to a site like Wikipedia -- where the users themselves do all the moderation -- and to a site like Notion, where people are setting up their own database/project management setups, some of which may be shared with others? Or how do you write best practices that work for fan fiction communities and also apply to something like Cameo?
And even the "size" part can be problematic, because in practice it creates all sorts of wacky incentives. The classic example is France, where stringent labor laws kick in only once a company reaches 50 employees. The result: a huge number of French companies have exactly 49 employees. If you create thresholds, you get weird incentives. Companies will limit their own growth in unnatural ways just to avoid the burden, or, if they're going to face the burden anyway, they may make a bunch of awkward decisions in figuring out how to "comply."
And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits arguing that companies somehow fail to meet "best practices." At worst, you end up with an incredible level of homogenization: platforms will feel the need to simply adopt content moderation policies identical to ones that have already been adjudicated as acceptable. It may also create market opportunities for extractive third-party "compliance" companies that promise to run your content moderation exactly the way Facebook does, since Facebook's practices will, of course, be deemed "industry-leading."
The politics of this obviously make sense for Facebook. It's not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company's perceived (and certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem so many insist is real -- that companies have "no incentive" to deal with disinformation and other garbage -- knowing full well that (1) Facebook's own practices will likely define the "best practices," or (2) Facebook will have enough political clout to ensure that any third-party body determining those "best practices" is thoroughly captured, so that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what "tier" they're in and what they have to do to avoid legal liability -- when they're all already trying all sorts of different approaches to deal with disinformation online.
Indeed, one final problem with this "solution" is that you don't deal with disinformation through homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing we're seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from one another's results. That experimentation and variety are how everyone learns, adapts, and gets to better results in the long run -- not by declaring that a single "best practices" setup will work. If anything, zeroing in on a single best-practices approach could make disinformation worse by helping those with bad intent figure out how to game the system. The bad actors can adapt, while this approach would tie the hands of those trying to fight back.
That, in fact, is the very brilliance of Section 230's structure: it recognizes that the combination of market forces (users and advertisers getting upset about garbage on a website) and the freedom to experiment with a wide variety of approaches is the best way to fight back against that garbage, by letting each website figure out what works best for its own community.
As I started writing this piece, Sundar Pichai's testimony for tomorrow was also released. And it makes the key point that 230, as it stands, is what makes it possible to deal with misinformation and extremism online. In many ways, Pichai's testimony is similar to Zuckerberg's. It details all the different (often expensive and resource-intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai's stance is the polar opposite of Zuckerberg's. Pichai notes that Google was able to do all of these things because of 230, and that changing the law would put many of these efforts at risk:
These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.
Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.
Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.
Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.
We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.
That's standing up for the law that helped enable the open internet, not tossing it under the bus because it's politically convenient. It won't make politicians happy. But it's the right thing to say -- because it's true.
Filed Under: adaptability, best practices, content moderation, mark zuckerberg, section 230, sundar pichai, transparency
Companies: facebook, google