The Flipside To Figuring Out What Content To Block: Cloudflare's Project Galileo Focuses On Who It Should Protect
from the case-studies dept
There has been so much discussion lately about the impossibility of doing content moderation well, but it's notable that the vast majority of that discussion focuses on what content to ban or block entirely. I do wish there were more talk about alternatives, some of which already exist (from demonetization to refusing to algorithmically promote -- though, for the most part, these solutions just seem to annoy people even more). But there is something of a flipside to this debate, one that applies in rarer circumstances: what content or speakers to specifically protect.
I'm thinking of this, in particular, as Cloudflare has announced the 5th anniversary of its (until now, mostly secretive) Project Galileo offering, in which the company provides free security to around 600 organizations that are likely targets of well-resourced attackers:
Through the Project, Cloudflare protects—at no cost—nearly 600 organizations around the world engaged in some of the most politically and artistically important work online. Because of their work, these organizations are attacked frequently, often with some of the fiercest cyber attacks we’ve seen.
Since it launched in 2014, we haven't talked about Galileo much externally because we worry that drawing more attention to these organizations may put them at increased risk. Internally, however, it's a source of pride for our whole team and is something we dedicate significant resources to. And, for me personally, many of the moments that mark my most meaningful accomplishments were born from our work protecting Project Galileo recipients.
The promise of Project Galileo is simple: Cloudflare will provide our full set of security services to any politically or artistically important organizations at no cost so long as they are either non-profits or small commercial entities. I'm still on the distribution list that receives an email whenever someone applies to be a Project Galileo participant, and those emails remain the first I open every morning.
At first glance, this might not seem like much of a story at all: an internet company doing something good to protect those at risk isn't necessarily that interesting, especially during a moment in time when everyone is so focused on attacking every internet company for bringing about all the evils of the world. However, I do think there are some very important lessons to be learned here, and some of them very much apply to the debates about content moderation. In some sense, Project Galileo is like the usual content moderation debates, but in reverse.
I was particularly interested in how Cloudflare chose which organizations to protect, and spoke with the company's CEO, Matthew Prince, last week to get a more in-depth explanation. As he explained, the company partnered with a wide variety of trustworthy organizations (including EFF, Open Technology Institute, the ACLU, Access Now, CDT, Mozilla, Committee to Protect Journalists and the Freedom of the Press Foundation, among others). Those partners could nominate organizations that might be at risk, and if an organization approached Cloudflare directly about being included in Project Galileo, Cloudflare could run its application by those trusted partners. What started with 15 partner organizations has now nearly doubled to 28.
Of course, such a system likely wouldn't work well in the other direction (figuring out which accounts to ban or otherwise punish), as people would undoubtedly flip out and attack the partner organizations -- as many did a few years ago when Twitter announced its Trust and Safety Council of partner organizations it relied on for advice on trust and safety questions. Many critics of Twitter and its policies have continued to falsely insist that the organizations on that list are some sort of Star Chamber deciding who is allowed to use Twitter and who is not -- so any move to actually put such a system in place would likely be met with resistance.
However, there is something interesting about having a more thorough process involving outside experts, rather than just trusting a company to make these decisions entirely internally. It's obviously somewhat different with Cloudflare, in part because it provides underlying security services that are less visible than the various social media sites, and also because it's about picking who to "protect" rather than who to block. But it is worth looking at and thinking about all of the content moderation challenges that go beyond what most people normally discuss.
For what it's worth, this is also quite important as more and more politicians around the globe are gearing up to "regulate" content moderation in one way or another. It's one thing to say that social media sites should be required by law to block certain accounts (or not to block certain accounts), but think about how any of those laws might also apply to services like Project Galileo, and you can see why there should be caution in rushing toward regulatory solutions. The approach taken with something like Project Galileo ought to be entirely different from the process a platform uses to decide whether to remove Nazi propagandists. But it's doubtful that those proposing new regulations are thinking that far ahead, and I worry that some new proposals may sweep up Project Galileo in a way that makes it more difficult for Cloudflare to continue running such a program.
Still, in this era when everyone is so focused on the bad stuff online and how to stop it, it's at least worth acknowledging a cool project from Cloudflare that highlights the good stuff online and how to protect it.
Filed Under: computer security, content moderation, project galileo, protection, security
Companies: cloudflare