A Paean To Transparency Reports
from the encouraging-nudges-are-better-than-beatings dept
One of the ideas that comes up a lot in proposals to change Section 230 is that Internet platforms should be required to produce transparency reports. The PACT Act, for instance, includes the requirement that they "[implement] a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized." And the NTIA's execrable FCC petition includes the demand that the FCC "[m]andate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers." The petition even spells out the disclosure rule it wants:
Any person providing an interactive computer service in a manner through a mass-market retail offering to the public shall publicly disclose accurate information regarding its content-management mechanisms as well as any other content moderation, promotion, and other curation practices of its interactive computer service sufficient to enable (i) consumers to make informed choices regarding the purchase and use of such service and (ii) entrepreneurs and other small businesses to develop, market, and maintain offerings by means of such service. Such disclosure shall be made via a publicly available, easily accessible website or through transmittal to the Commission.
Make no mistake: mandating transparency reports is a terrible, chilling, and likely unconstitutional regulatory demand. Platforms have the First Amendment right to be completely arbitrary in their content moderation practices, and requiring them to explain their thinking both chills their ability to exercise that discretion and raises compelled speech problems of dubious constitutionality. Furthermore, such a requirement threatens the moderation process on a practical level. As we are constantly reminding everyone, content moderation at scale is really, really hard, if not outright impossible, to get right. If we want platforms to nevertheless do the best they can, then we should leave them focused on that task rather than encumber them with additional, and questionable, regulatory obligations.
All that said, while it is not good to require transparency reports, they are nevertheless a good thing to encourage. With Twitter recently announcing several innovations to its transparency reporting (including an entire "Transparency Center" gathering all released data in one place), it's a good time to talk about why.
Transparency reports have been around for a while, and the basic idea has remained constant: shed light on the forces affecting how platforms host user expression. What's new is that these reports now provide more insight into the internal decisions bearing on how platforms do that hosting. For instance, Twitter will now be sharing data about how it has enforced its own rules:
For the first time, we are expanding the scope of this section [on rules enforcement] to better align with the Twitter Rules, and sharing more granular data on violated policies. This is in line with best practices under the Santa Clara Principles on Transparency and Accountability in Content Moderation.
This data joins other data Twitter releases about manipulative bot behavior, as well as the state-backed information operations it has discovered.
All of which bears on one of the most important reasons to have transparency reports: they tell the public how *external* pressures have shaped platforms' ability to do their job intermediating their users' expression. Historically these reports have been crucial tools in fighting attacks against speech, because they highlight where the attacks have come from.
In some instances these censorial pressures have been outright demands for content removal. The Twitter report, for example, calls out DMCA takedown notices and takedown demands predicated on trademark infringement claims. It also includes other legal requests for content removal. In its latest report, covering 2019, Twitter found that
[i]n this reporting period, Twitter received 27,538 legal demands to remove content specifying 98,595 accounts. This is the largest number of requests and specified accounts that we’ve received since releasing our first Transparency Report in 2012.
But removal demands are not the only way that governments can mess with the business of intermediating user speech. One of the original purposes of these reports was to track attempts to seek identifying information about platform users. These demands can themselves be silencing, scaring users into pulling down speech they have already made or biting their tongues going forward – even when their speech may be perfectly lawful and the public would benefit from what they have to say.
We've written many times before, quite critically, about how vulnerable speakers are to these sorts of abusive discovery demands. The First Amendment protects the right to speak anonymously, and discovery demands that platforms find themselves having to yield to can jeopardize that right.
As we've discussed previously, there are lots of different discovery instruments that can be propounded on a platform (e.g., civil subpoenas, grand jury subpoenas, search warrants, NSLs) to demand user data. They are all governed by different rules, which affect both their propensity for abuse and the ability of the user or platform to fight off unmeritorious ones.
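To make those differences concrete, here is a rough, illustrative sketch in Python of how a few common instruments vary along the dimensions that matter here: who issues them, whether the user typically learns of the demand, and how it can be challenged. The annotations are deliberately simplified generalizations for discussion purposes, not legal advice, and the real rules vary by jurisdiction.

```python
# Simplified, illustrative taxonomy of common discovery instruments.
# These annotations are broad generalizations for discussion only;
# the actual rules are nuanced and jurisdiction-dependent.
DISCOVERY_INSTRUMENTS = {
    "civil_subpoena": {
        "issued_by": "a litigant's attorney, typically without prior judicial review",
        "user_usually_notified": True,   # platforms can often tell the user
        "challenge": "motion to quash in the issuing court",
    },
    "grand_jury_subpoena": {
        "issued_by": "a prosecutor through a grand jury",
        "user_usually_notified": False,  # secrecy rules often apply
        "challenge": "motion to quash, on narrower grounds",
    },
    "search_warrant": {
        "issued_by": "a judge, on a showing of probable cause",
        "user_usually_notified": False,
        "challenge": "generally only after the fact",
    },
    "nsl": {
        "issued_by": "the FBI, with no prior judicial review",
        "user_usually_notified": False,  # usually accompanied by a gag order
        "challenge": "limited judicial review",
    },
}

# The point of per-instrument reporting: each row above has a different
# propensity for abuse, so lumping them into one number hides which
# rules are doing the damage.
for name, rules in DISCOVERY_INSTRUMENTS.items():
    print(name, "->", rules["challenge"])
```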
Transparency reports can be helpful in fighting discovery abuse because they can provide data showing how often these different instruments are used to demand user data from platforms. The problem, however, is that all too often the data in the reports is generalized, with multiple types of discovery instruments all lumped together.
I don't mean lumped together the way the volume of NSLs can only be reported in wide bands. (But do note the irony that all of these Section 230 "reform" proposals mandating transparency reports do nothing about aspects of current law that actively *prevent* platforms from being transparent. If any of these proposals cared about the ability to speak freely online as much as they profess, their first step would be to remove any legal obstacle currently on the books that compromises speakers' First Amendment rights or platforms' ability to protect those rights – and the law regarding NSLs would be a great place to start.)
I mean that multiple forms of data requests tend to get combined into a single datapoint. In this aggregated form the reports still have some informational value, but the aggregation obscures trends that are shaped by the differences in each instrument's rules. If certain instruments are more problematic than others, it would be helpful to be able to spot their impact more easily, and then to have data to cite in our advocacy against the more troubling ones.
In Twitter's case, these "information requests" are reported as either government requests or non-government requests. Government requests are further broken down into "emergency" and "routine," but not obviously broken out any further. Separately, Twitter has flagged CLOUD Act requests as something to keep an eye on when it goes into effect, since it will create a new sort of discovery instrument that may not adequately account for the user and platform speech rights it implicates. But whether the existing government data requests were federal grand jury subpoenas, search warrants from any particular jurisdiction, NSLs, or something else is not readily apparent. The non-governmental requests are not broken out either, even though it might be helpful to know whether a subpoena stemmed from federal civil litigation, state civil litigation, or was a DMCA 512(h) subpoena (where there may not be any litigation at all). Again, because the rules governing when each of these discovery instruments can be issued, and whether, how, and by whom they can be resisted, all differ, it would be helpful to know how frequently each is being used. Censorial efforts tend to take the path of least resistance, and this data can help identify which instruments are most prone to abuse and most in need of added procedural friction to stem it.
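To see why this disaggregation matters, consider a minimal, hypothetical sketch in Python. The request counts below are invented for illustration (they are not Twitter's figures); the point is simply that a single combined total can hide the fact that one instrument type dominates.

```python
from collections import Counter

# Hypothetical request log: each entry records which discovery instrument
# was used to demand user data. These numbers are made up for illustration.
requests = (
    ["grand_jury_subpoena"] * 120
    + ["search_warrant"] * 45
    + ["state_civil_subpoena"] * 300
    + ["federal_civil_subpoena"] * 80
    + ["dmca_512h_subpoena"] * 15
)

# Aggregated reporting, as in many transparency reports: one datapoint.
print("Total information requests:", len(requests))  # 560

# Disaggregated reporting: per-instrument counts reveal that one instrument
# (here, state civil subpoenas) dominates -- exactly the kind of trend a
# single combined number obscures.
for instrument, count in Counter(requests).most_common():
    print(f"{instrument}: {count}")
```

If reports were broken out this way, advocates could see at a glance which instruments are carrying the bulk of the demands and focus their efforts accordingly.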
It may of course not be feasible to report with more granularity, whether because of the amount of labor required or rules barring more detailed disclosure (see, again, NSLs). And platforms may have other reasons for wanting to keep that information close to the chest. Which, again, is a reason why mandating transparency reports, or any particular informational element that might go into one, is a bad idea. But no platform is in this alone: if one is being bombarded with certain kinds of information requests, then others likely are too. Transparency on these details makes that shared predicament visible and helps us all advocate for whatever better rules are needed to keep everyone's First Amendment rights from being too easily trampled by any of these sorts of "requests."
Filed Under: fcc, ntia, section 230, transparency, transparency reports