Could A Narrow Reform Of Section 230 Enable Platform Interoperability?
from the one-approach dept
Perhaps the most de rigueur issue in tech policy in 2020 is antitrust. The European Union made market power a significant component of its Digital Services Act consultation, and the United Kingdom released a massive final report detailing competition challenges in digital advertising, search, and social media. In the U.S., the House of Representatives held an historic (virtual) hearing with the CEOs of Amazon, Apple, Facebook, and Google (Alphabet) on the same panel. As soon as the end of this month, the Department of Justice is expected to file a “case of the century” scale antitrust lawsuit against Google. One competition policy issue that I’ve written about extensively is interoperability, and, while we’ve already seen significant proposals to promote interoperability, notably the 2019 ACCESS Act, I want to throw another idea into the hopper: I think Congress should consider amending Section 230 of the Communications Act to condition its immunity for large online intermediaries on the provision of an open, raw feed for independent downstream presentation.
I know, I know. I can almost feel your fingers hovering over that big blue “Tweet” button or the “Leave a Comment” link -- but please, hear me out first.
For those not already aware of (if not completely sick of) the active discussions around it, Section 230, originally passed as part of the Communications Decency Act, is an immunity provision within U.S. law intended to encourage internet services to engage in beneficial content moderation without fearing liability as a consequence of such action. Its central provision is famously only 26 words long, so I’ll paste that key text in full: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
I’ll attempt to summarize the political context. Section 230 has come under intense, bipartisan criticism over the past couple of years as a locus of animosity related to a diverse range of concerns with the practices of a few large tech companies in particular. Some argue that the choices made by platform operators are biased against conservatives; others argue that the platforms aren’t responsible enough and aren’t held sufficiently accountable. Support for amending Section 230 is substantial, although far from universal. The current President has issued an executive order seeking to catalyze change in the law, and the Democratic nominee has in the past bluntly called for it to be revoked. Members of Congress have introduced several bills that touch Section 230 (following the passage of one such bill, FOSTA-SESTA, in 2018), such as the EARN IT Act, which would push internet companies to do more to respond to online child exploitation, to the point of undermining secure encryption. A perhaps more on-point proposal is the PACT Act, which focuses on specific platform content practices; I’ve called it the best starting point for Section 230 reform discussions.
Why is this one short section of law so frequently used as a political punching bag? The attention goes beyond its hard-law significance, revealing a deeper resonance in the modern-day notion of “publishing.” I believe this law in particular is amplified because the centralization and siloing of our internet experience has produced a widespread feeling (or reality) of a lack of meaningful user agency. By definition, social media is a business of taking human input (user-generated content) and packaging it to produce output for humans, doubling the poignancy of human agency in some sense. The user agency gap spills over from the realm of competition, making it hard to evaluate content liability and privacy harms as entirely independent issues. In so many ways, the internet ecosystem is built on the idea of consumer mobility and freedom; in so very many ways, that idea is bankrupt today.
Yet debating whether online intermediaries for user content are “platforms” or “publishers” is a distraction. A more meaningful articulation of the underlying problem, I believe, is to say that we end users cannot sufficiently customize how content is presented to us, because we are locked into a single experience.
Services like Facebook and YouTube operate powerful recommendation engines designed to sift through vast amounts of potentially desirable content and present the user with what they will most value. These recommendations are based on individual contextual factors, such as what the user has been watching, and on broader signals of desirability, such as engagement from other users. As many critics allege, the underlying business model of these companies benefits from keeping users as engaged as possible, spending as much time on the platform as possible. That means recommending content that gets high engagement, even though high engagement hardly equates to positive social value (that’s the understatement of the day, there!).
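To make that point concrete, here is a deliberately toy sketch in Python — not any platform’s actual algorithm, and every name in it is invented — of an engagement-driven ranker whose scoring function is, in principle, swappable:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    item_id: str
    topic: str
    likes: int
    shares: int
    watch_seconds: float

def engagement_score(item: Item) -> float:
    # A crude proxy for "keeps people on the platform": raw engagement.
    return item.likes + 2 * item.shares + item.watch_seconds / 60

def rank_feed(items: List[Item],
              score: Callable[[Item], float] = engagement_score) -> List[Item]:
    # The scoring function is the piece users currently cannot swap out.
    return sorted(items, key=score, reverse=True)

# A downstream engine could, in principle, optimize for something else:
def calmer_score(item: Item) -> float:
    penalty = 0.5 if item.topic in {"outrage", "clickbait"} else 1.0
    return engagement_score(item) * penalty
```

Supplying a different scoring function, as `calmer_score` does here, is exactly the kind of substitution today’s platforms don’t permit.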
One of the interesting technical questions is how to design such systems to make them “better” from a social perspective. It’s the subject of academic research, in addition to ample industry investment. I’ve given YouTube credit in the past for offering some amount of transparency into the changes it’s making (and the effects of those changes) to improve the social value of its recommendations, although I believe making that transparency more collaborative and systematic would help immensely. (I plan to expand on that in my next post!)
Recommendation engines remain by and large black boxes to the outside world, including the users who receive their output. No matter how much credit you give individual companies for their efforts to properly balance business model demands, optimal user experience, and social value, there are fundamental limits on users’ ability to customize, or replace, the recommendation algorithm that mediates the lion’s share of their interaction with the social network and the user-generated content it hosts. As things stand, the lack of effective interoperability also forecloses innovation and experimentation with presentation algorithms.
And that’s why Section 230 gets so much attention -- because we don’t have the freedom to experiment at scale with things like Ethan Zuckerman’s Gobo.social project and thus improve the quality of, and better control, our social media experiences. Yes, there are filters and settings that users can change to customize their experience to some degree, likely far more than most people know. Yet, by design, these settings do not provide enough control to affect the core functioning of the recommendation engine itself.
Thus, many users perceive the platforms to be packaging up third-party, user-generated content and making conscious choices about how to present it to us -- choices that our limited downstream controls are insufficient to manage. That’s why it feels to some like the platforms are “publishing,” and doing a bad job of it at that. Despite massive investments by the service operators, it’s not hard to find evidence of poor recommendation outcomes; see, e.g., YouTube recommending videos about an upcoming civil war. And occasional news stories of willful actions making things worse add more fuel to the fire.
So let’s create that space for empowerment by conditioning the Section 230 immunity of large platforms on their provision of more raw, open access to the content experience, so users can better control how to “publish” it to themselves by using an alternative recommendation engine. Here’s how to scale and design such an openness requirement properly (a rough code sketch follows the list):
- Apply an openness requirement only where the problems described above apply, which is for services that primarily host and present social, user-generated content.
- Limit an openness requirement to larger platforms, for example borrowing the 100 million MAU (monthly active users) metric from the Senate’s ACCESS Act.
- Design the requirement to be variable across different services, and to engage platforms in the process. The kinds of APIs that Facebook and YouTube would set up to make this concept successful would be quite different.
- Allow platforms to adopt reasonable security and privacy access controls for their provisioned APIs or other interoperability interfaces.
- Preserve platform takedowns of content and accounts upstream of any provisioned APIs or other interoperability interfaces, to take advantage of scale in responding to Coordinated Inauthentic Behavior (CIB).
- Encourage platform providers to allow small amounts of API/interoperability interface access for free, while permitting them to charge fair, reasonable, and nondiscriminatory rates to third parties operating at larger scale.
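To make the list above slightly more concrete, here is a minimal sketch, in Python, of the shape a provisioned “raw feed” might take. Every name in it -- fields, tiers, functions -- is hypothetical; no platform offers this API today, and the real details would emerge from the variable, per-service design process described above.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class RawFeedItem:
    item_id: str
    author_id: str
    created_at: str                     # ISO 8601 timestamp
    content: str
    media_urls: List[str] = field(default_factory=list)

@dataclass
class AccessTier:
    name: str                           # e.g. "free" or "frand"
    requests_per_day: int

# Hypothetical access tiers: small-scale access free, larger scale at
# fair, reasonable, and nondiscriminatory (FRAND) rates.
TIERS: Dict[str, AccessTier] = {
    "free": AccessTier("free", 1_000),
    "frand": AccessTier("frand", 10_000_000),
}

def get_raw_feed(api_token: str,
                 user_id: str,
                 store: Dict[str, List[RawFeedItem]],
                 valid_tokens: Dict[str, str]) -> Optional[List[RawFeedItem]]:
    """Return the raw, unranked feed a consenting user's chosen client may pull.

    The token check stands in for the "reasonable security and privacy
    controls" bullet; `store` is assumed to already exclude content and
    accounts the platform took down, per the "preserve takedowns
    upstream" bullet.
    """
    tier_name = valid_tokens.get(api_token)
    if tier_name is None:
        return None                     # unauthorized downstream client
    _ = TIERS[tier_name]                # per-tier rate limiting elided
    return store.get(user_id, [])
```

The important design choice is what’s absent: ranking. The endpoint returns raw items, and the recommendation engine lives downstream, where it can be replaced.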
Providing this kind of openness downstream would create opportunities for innovation and experimentation with recommendation engines at a scale never before seen. This is not just an evolutionary step forward in what we think of as internet infrastructure; it’s also a roadmap to sustainable alternative business models for the internet ecosystem. Even assuming that many users would stick with the platform’s default experience and the business model underlying it, those who choose to change would gain a true business model choice and a deep, meaningful user experience choice at the same time.
I recognize that this is a thumbnail sketch of a very complex idea, with much more analysis needed. I publish these thoughts to help illustrate the relationship between the agita over Section 230 and the concentrated tech ecosystem. The centralized power of a few companies and their recommendation engines doesn’t provide sufficient empowerment and interoperability, limiting any perception of meaningful agency and choice. Turning this open feed concept into a legal and technical requirement is not impossible, but I recognize it would carry risk. In an ideal world, we’d see the desired outcome -- meaningful downstream interoperability, including user substitutability of recommendation engines -- offered voluntarily. That would avoid the costs and complexities of regulation, put platforms in a position to strike the right balance, and provide a political pressure relief valve to keep the central protections of Section 230 intact. Unfortunately, present-day market and political realities suggest that may not occur without substantial regulatory pressure.
Filed Under: cda 230, competition, interoperability, section 230
Reader Comments
Would not have the anticipated effect
This would not change how platforms see fit to filter the content hosted on their platforms. They would still be free to delete any content they like, so the suggested change above would have no effect on the complaints of the vocal moron contingent.
Being able to further filter the content you see is always a good thing but making the sale of apples a requirement to sell oranges is silly.
Would this actually work?
I should start out by saying that I appreciate Chris' willingness to explore these issues and try to come up with more nuanced proposals for dealing with various challenges in the internet space, but this one does not convince me.
I'm obviously a big proponent of increasing interoperability as a method of increasing competition, but I have huge concerns about using 230 as the lever to do so. It seems backwards to me in many ways. Also, I'm not clear on how this proposal actually solves the complaints that many have with content moderation -- in particular, the requirement of preserving content takedowns upstream still leaves the original platform as the de facto single source provider, and greatly limits the ability of other platforms to compete on their content moderation efforts.
To me, opening up content moderation to a more competitive market is a key benefit of interoperability. But this proposal would lead to a world where the originating platform still has tremendous pressure to overmoderate, and the downstream providers then can only moderate over what's left. That seems less than optimal.
My other concern is with the "only applies to big providers" rule. I know those are increasingly popular, but I worry about the unintended downside consequences, and the kinds of games that companies will play to deal with such rules. Perhaps it's not an issue if the threshold is set so high (100m MAU), because a company that big can almost always deal with the issues... but we don't know that, and other laws with size thresholds almost always lead to weird gaming to avoid going over the threshold -- we should be careful of just tossing those in without real evidence that they work.
And, just in general, I tend to think that if we're putting in place laws that say "only applies to companies of x size" that's a red flag that the law itself is a problem, and hopefully there would be a more elegant solution that applies across the board.
Re: Would this actually work?
Perhaps there is a way to accommodate everyone here -- the originating platform can moderate whatever it wants as viewed directly through its own platform, but it only hides, and does not delete, such moderated content. It then passes everything on to the downstream providers. People who trust the originating platform's moderation continue viewing as always; people who want more removed can choose a downstream provider that does so; those who want to see everything can choose a downstream provider that shows it all.
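A rough sketch of what that could look like at the data level, with invented status values:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeedItem:
    item_id: str
    content: str
    status: str    # "visible" | "hidden" | "deleted" -- hypothetical states

def platform_view(items: List[FeedItem]) -> List[FeedItem]:
    # The originating platform surfaces only what it kept visible.
    return [i for i in items if i.status == "visible"]

def downstream_view(items: List[FeedItem], show_hidden: bool) -> List[FeedItem]:
    # Truly deleted material (illegal content, banned accounts) never flows
    # down; "hidden" items do, and each downstream provider chooses.
    passed_down = [i for i in items if i.status != "deleted"]
    if show_hidden:
        return passed_down
    return [i for i in passed_down if i.status == "visible"]
```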
Re: Re: Would this actually work?
You would force these companies to host speech they do not wish to host. To do this you would first have to revoke the 1st Amendment.
Re: Re: Re: Would this actually work?
Actually, I think Koby's on to something for once. Such a system (if not mandated) could allow the original provider to take down anything they find truly objectionable (or illegal) and just hide the edge cases. This could give them the benefits of moderation with none of the downside. "We didn't delete your video, go use X app and you can watch it all you want" is a pretty solid defense against moderation complaints.
In fact, I suspect that such a system would be the natural result.
Re: Re: Would this actually work?
I'll have to think about it more, but that could potentially help, particularly with regard to moderated content (posts, etc.); there's more content overhead to be managed downstream, but not too much. We'd probably have to have account takedowns by the platform be preserved, i.e., not managed in some shadowy space where they're allowed to reappear -- that's where most of the good work I want to preserve happens anyway.
Re: Would this actually work?
Thoughtful reply, of course. One thing I probably should have included in the original piece is an observation that this very much isn't a complete solution. I thought of it as a complement to continuing to advance the Santa Clara principles and trying to make things like the Facebook Oversight Board effective - noting that those two frameworks apply mostly to the problem of overmoderation, but say little (if anything?) about undermoderation, or the more subtle domain of recommendations/prioritization. So for sure, pushing for interoperability is not a solution to overmoderation - however I note the comment below about encouraging a downstream feed to include moderated-out content in some "hidden" fashion, which would help with that a bit. (I still believe that account takedowns need to be preserved downstream, or conceptually this entire system breaks in my mind.)
Kill it with fire? Yes, kill it with fire!
So what makes this dangerous is that it is a crack in the armor -- a wedge that opens 'publishing' up to something that it isn't: a third party's speech that a machine has unknowingly parroted. Context is king, and there is no way around that fact. The only thing you've done is make a site 'more liable' for something someone else said, using a political wedge to justify the point... It won't stop there; it never stops there... You'd then have third parties parroting infringing content (the DMCA still applies), and what happens if another arbiter site gets over X MAU? They have to pass on their data as well? ... and now you've got a right big mess on your hands.
Novel idea... and better than anything those idiot politicians have come up with... but I still think the idea should be killed with fire.
And now I’m reminded of that one dope who believed he could sue Alphabet for defamation because it “republished” someone else’s defamatory content via linking to said content on the Google search engine. (Christ, what an asshole.) This might not be the same thing you’re talking about, but I wager it’s kinda close.
Re:
Such cases have prevailed in other countries, yet somehow Google still operates in Australia.
Re: Re:
[citation needed]
Re: Kill it with fire? Yes, kill it with fire!
The question of whether a company is acting as a publisher rather than a mere pipeline should not depend on their use of technology for two reasons. Firstly, the impact of their actions is the same regardless of how they choose to make those actions (is a horde of minimum wage Filipinos who barely speak English following strict rules any better than an AI?). Secondly, even if there is a human in the loop, they'll tend to use automated assistance, and companies which are mostly automated or user-driven tend to use manual interventions from time to time, so it is difficult to see how a practical threshold could be legislated that was immune to cheating.
One problem is that third-party filter sites will need large storage and processing capabilities, and would likely end up relying on Google, Amazon, or Microsoft to provide them. Note that a large driver of both storage and processing requirements will be the implementation of the filters, especially if filtering images and sound. Also, storage will be required for at least record IDs and filter classifications shadowing the source sites, as that reduces the processing required if every record is filtered each time it is passed on to a user.
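For illustration, that shadowing idea might look something like this (a hypothetical structure; the classifier itself is left abstract):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Classification:
    label: str            # e.g. "ok", "nsfw", "spam" -- invented labels
    model_version: str

class ShadowCache:
    """Cache filter verdicts keyed by upstream record ID, so expensive
    image/audio classifiers run once per item rather than once per delivery."""

    def __init__(self) -> None:
        self._verdicts: Dict[str, Classification] = {}

    def get_or_classify(self, record_id: str, payload: bytes,
                        classify: Callable[[bytes], Classification]) -> Classification:
        cached = self._verdicts.get(record_id)
        if cached is not None:
            return cached   # skip re-running the filter on every user request
        verdict = classify(payload)
        self._verdicts[record_id] = verdict
        return verdict
```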
If the downstream is to run recommendation engines, then it has to duplicate the upstream so that it has access to everything posted to feed its recommendation engine. Also, having content delivered via a third party breaks the service's advertising delivery model, especially if the third party delivers large amounts of content from its own cache.
The other thing to consider is the complete experience of a service. For example, YouTube has dominated video delivery because of its design and how it allows channels to present themselves. Also, does the downstream send information useful for creator analytics back to YouTube, such as how much was viewed, etc.?
Your suggestions are more complex than defining APIs, as what makes a site attractive to users is presentation, search tools, etc.
Re:
Fully get these points. I'm more comfortable with relying on cloud infrastructure provided by big companies; there are clear scale benefits there, and also meaningful competition and substitutability. But still, it's not a trivial exercise.
Your point about the complete experience is absolutely right. But I think there's room for companies, even smaller ones, to do that well. It took Linux distros many years to build up good UX, but they did, fairly well I'd say.
Re: engagement data -- like, does the downstream have to share info back up with YT (and for that matter, how much of that needs to be made available in the first place)? I haven't gotten that far yet; it's another good question!
Re: Re:
Possibly the biggest problem: how do the upstream and downstream both make money?
I think a lot of aspects are interesting, whether they are to be legally enforced or not. However, I don't see why any of it has to be attached to Section 230 in any way.
I mean, I get it. The suspicion is that someone is going to screw up 230 badly no matter what, and soon, so "pragmatists" opt to promote something theoretically slightly less awful to dilute the truly awful.
On the other hand, one could just let the bozos in Congress and the White House pass any of the incredibly bad laws they want, which will certainly be struck down as unconstitutional five times over inside a couple of months. Because they aren't even trying.
The bozos in Congress are delaying the consideration of the bad laws at the moment, because they're too busy confirming judges who won't strike down the bad laws.
"many users perceive the platforms to be packaging up third party, user generated content and making conscious choices of how to present it to us -- choices that our limited downstream controls are insufficient to manage."
A bit deep to follow, and it seems to wander on the definition of some things.
Third party: this is fun to explain. The third party is the consumer posting, the advertiser, and anyone that can add to or manipulate the site by some input.
Compare the search engines Google and Bing, and look at the top of each for the adverts. These can be influenced many ways, and Google has been fighting that a lot. If you don't pay Google, you shouldn't be on the top. And Bing has a ton of them.
As to forums, chats, and sharing privately: there seems to be a sort of control problem on all sides. Who and what to restrict, then who is complaining about the posts, and if you don't allow some things to happen, some things will never be seen or found (trafficking, prostitution, other illegal things).
Then let's look at politics, and the idea of placing a face on every channel to reinforce who to elect. Or adverts that want to point at a product they want to sell, so you see tons of adverts for a new/old product.
With all of this, the net would love balance, and for that balance to show the most popular and best things: sites that get the most comments, sites that have the best prices, sites that have the best news, opinions, this and that. But it's not easy, as some have learned how to fudge the system (as has always been done).
Now we get the government involved in who/what/when, and it feels like they are trying to force something they don't quite understand. How huge is the interference, and the amounts of data trying to balance things? What makes Alibaba and Banggood worth shopping with?
I don't trust politicians to fix or change Section 230 in any positive way; see FOSTA, etc. The ideal outcome is to leave it exactly as it is.
It's very simple: Section 230 protects web services and websites from random legal actions. It says if you wish to sue someone, sue the user who wrote it.
What will probably happen is we will get FOSTA version 2: websites will lose Section 230 protection if they use end-to-end encryption or do not provide a way for police or the courts to read users' private messages. They will go after Google and Facebook first, but the law will apply to any website that has user uploads or comments, or allows users to send messages to each other.
Look at politicians in the EU: they voted for laws that mandate every website install filters to check every image, video, and audio file to see if it infringes on any IP.
And remember, when new laws come in, they are being pushed by old legacy companies that cannot compete with Google or Facebook and would like to turn the web into some version of cable TV, with streaming services owned by big corporations like AT&T.
A few thoughts..
You note that one of the major issues with internet usage today is major siloing, but you fail to show anything suggesting this siloing is platform driven rather than user driven. Your remaining recommendations discuss ways to give users more control over recommendations... which could only compound the siloing issue if it's user driven.
The recommendation algorithms are basically the core of these businesses and a big part of what differentiates them from competitors. A search site is basically a bunch of expensive support functionality built around a recommendation algorithm and a monetized (i.e., usually ad-driven) front end, and other user-content-driven platforms aren't really much different. Asking a company to be fully transparent about its recommendation algorithm is like asking Coke to tell us its recipe.
Forcing them to provide access to the underlying data means nothing more than taking on the expensive support costs for whatever others want to use it for, instead of for your own business. Instead of looking at how to build success, it's looking at something already successful and trying to divvy it up without completely collapsing it.
I see a veiled power grab like most of these requests, and I think you are basically getting tricked into hamstringing the new guard (moderately powerful, progressive tech companies with smaller lobbying budgets) in order to keep the power balance in the hands of the old guard (significantly more powerful, regressive companies with huge lobbying budgets being disrupted by tech companies).
We won't see anyone recommending we force Walmart to let small companies use Walmart's infrastructure for whatever they want.
Re: A few thoughts..
"You fail to show anything that suggests this siloing is platform driven rather than user driven"
This is where I am on the issue. The bottom line is that the reason these services tend toward silos is that people are fundamentally lazy. They know they have a wide range of social media to subscribe to, but few choose more than a handful to use on any kind of regular basis. Unless they get a compelling reason to switch, as they did when Facebook got people to move from MySpace, or the shiny new thing of TikTok that attracted people during lockdowns, they'll stick with what they know.
They know they have a wide range of shopping options, but many will default to just buying everything from Amazon even if they understand they'd get better deals by shopping around. They know there's a bunch of streaming options available, but they'll default to Netflix first when browsing for something to watch, even if they subscribe to other options. They know they have a range of cloud service platforms to choose from, but they already have AWS set up, so they fire that up instead of finding out whether Azure is cheaper, and so on.
Removing the "silos" doesn't change this behaviour. All it means is that when they access whichever entry point they default to, they get a wider range of things to access on the backend. Which, in some ways, could be even worse for smaller competitors, as it means they have fewer options to monetise the interactions on their platforms, while the big boys hoover up more money.
It might sound good to change the rules to force certain types of behaviour from the platforms, but unless user behaviour changes as well, it probably won't have the desired effect. Especially since Section 230 is already such a simple rule that allows platforms to interact with each other -- unless I'm missing some kind of ruling that makes them liable for things when accessing other platforms that they're not liable for on their own platforms.
Re: Re: A few thoughts..
I don't think it's really laziness; there is a natural barrier to entry when it comes to social media services: you want to use a platform that your friends are using, or that the most people are using if you are targeting the general public. It may be worth looking at, but really there seems to be decent competition in this area. The competition trouble is a lot simpler: regulators are letting Facebook buy up the WhatsApps and Instagrams of the world, rather than anything fundamental making it so they don't exist in the first place.
I just don't think the silo troubles we are having are with the platforms.
I see it more as an issue that people don't like to have their beliefs challenged, so they build their own info silos that are platform independent.
Allow platforms to adopt reasonable security and privacy access controls for their provisioned APIs or other interoperability interfaces.
I'm unsure that this statement has any meaning in reality. Correct me if I'm missing something, but any reasonable recommendation engine would have to have access to the underlying data. The only way around this that I can see would be for the original platform to annotate every post with content tags and send only the annotation metadata to the engine... but this:
1) it fails to meaningfully address privacy concerns, since for the metadata to be useful it would have to reveal substantially everything anyway, and
2) it simply keeps the original platform in the moderation driver's seat, as everyone else is entirely reliant on which tags it is willing to apply to which posts.
...or I guess you could force Facebook to implement these engines on their own systems... but that's a security nightmare; expect a breach every other day.
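A rough sketch of why point 2 bites, with made-up tags (everything here is invented for illustration):

```python
from typing import Dict, List, Set

# Hypothetical platform-supplied tags per post ID; the downstream engine
# never sees the underlying content, only whatever labels the platform applied.
platform_tags: Dict[str, Set[str]] = {
    "post-1": {"politics", "video"},
    "post-2": {"sports"},
    "post-3": {"politics", "satire"},
}

def downstream_filter(post_ids: List[str], blocked: Set[str]) -> List[str]:
    # Downstream "moderation" reduces to set intersection on tags the
    # platform chose to apply -- the platform stays in the driver's seat.
    return [pid for pid in post_ids
            if not (platform_tags.get(pid, set()) & blocked)]

print(downstream_filter(["post-1", "post-2", "post-3"], {"politics"}))
# -> ['post-2']
```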
I'm reminded of when Ars Technica hosted an opinion piece from Oracle's lawyer about how not rewriting decades of copyright law would upend the software industry.
Why?
What is the compelling government interest driving this regulation? It sounds like you want to use the power of the government to interfere in the market because you don't particularly like how social media is working right now. IMO this is not nearly a good enough reason.