Governments And Internet Companies Agree On Questionable Voluntary Pact On Extremist Content Online
from the well-meaning-but-misguided dept
Yesterday saw the launch of the Christchurch Call, a big initiative in which a bunch of governments and big social media companies basically agreed to take a more proactive role in dealing with terrorist and violent extremist content online. To its credit, the process did include voices from civil society and public interest groups, who raised concerns about how these efforts might negatively impact freedom of expression and other human rights around the globe. However, it's not clear that the "balance" they came to is a good one.
A free, open and secure internet is a powerful tool to promote connectivity, enhance social inclusiveness and foster economic growth.
The internet is, however, not immune from abuse by terrorist and violent extremist actors. This was tragically highlighted by the terrorist attacks of 15 March 2019 on the Muslim community of Christchurch – terrorist attacks that were designed to go viral.
The dissemination of such content online has adverse impacts on the human rights of the victims, on our collective security and on people all over the world.
The "Call" is not binding on anyone. It's just a set of "voluntary commitments" to try to "address the issue of terrorist and violent extremist content online and to prevent the abuse of the internet...." There are a set of commitments from governments and a separate set from social media companies. On the government side the commitments are:
Counter the drivers of terrorism and violent extremism by strengthening the resilience and inclusiveness of our societies to enable them to resist terrorist and violent extremist ideologies, including through education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality.
Ensure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content, in a manner consistent with the rule of law and international human rights law, including freedom of expression.
Encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.
Support frameworks, such as industry standards, to ensure that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.
Consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions, such as:
- Awareness-raising and capacity-building activities aimed at smaller online service providers;
- Development of industry standards or voluntary frameworks;
- Regulatory or policy measures consistent with a free, open and secure internet and international human rights law.
That mostly seems to stop short of demanding content be taken down, though that last point teeters on the edge. On the social media side, there is the following list of commitments:
Take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms. Cooperative measures to achieve these outcomes may include technology development, the expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures.
Provide greater transparency in the setting of community standards or terms of service, including by:
- Outlining and publishing the consequences of sharing terrorist and violent extremist content;
- Describing policies and putting in place procedures for detecting and removing terrorist and violent extremist content.
Enforce those community standards or terms of service in a manner consistent with human rights and fundamental freedoms, including by:
- Prioritising moderation of terrorist and violent extremist content, however identified;
- Closing accounts where appropriate;
- Providing an efficient complaints and appeals process for those wishing to contest the removal of their content or a decision to decline the upload of their content.
Implement immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming, including identification of content for real-time review.
Implement regular and transparent public reporting, in a way that is measurable and supported by clear methodology, on the quantity and nature of terrorist and violent extremist content being detected and removed.
Review the operation of algorithms and other processes that may drive users towards and/or amplify terrorist and violent extremist content to better understand possible intervention points and to implement changes where this occurs. This may include using algorithms and other processes to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives. This may include building appropriate mechanisms for reporting, designed in a multi-stakeholder process and without compromising trade secrets or the effectiveness of service providers’ practices through unnecessary disclosure.
Work together to ensure cross-industry efforts are coordinated and robust, for instance by investing in and expanding the GIFCT, and by sharing knowledge and expertise.
Facebook put up its own list of actions that it's taking in response to this, but as CDT's Emma Llanso points out, it's missing some fairly important stuff about making sure these efforts don't lead to censorship, especially of marginalized groups and individuals:
Missing from @facebook @microsoft @google @twitter @amazon Nine Point Plan for implementing #ChristchurchCall is any kind of clear commitment to evaluate how their moderation efforts disproportionately silence certain groups & individuals https://t.co/Gp1EQWHJAG
— Emma Llanso (@ellanso) May 15, 2019
In response to all of this, the White House refused to join the other countries that signed on to the voluntary commitments of the Christchurch Call, noting concerns about whether the pact was appropriate and consistent with the First Amendment. That's absolutely accurate and correct. Even if the effort is voluntary and non-binding, and even if it makes references to protecting freedom of expression, once a government gets involved in advocating for social media companies to take down content, it's crossing a line. The Washington Post quoted law professor James Grimmelmann, who makes this point concisely:
“It’s hard to take seriously this administration’s criticism of extremist content, but it’s probably for the best that the United States didn’t sign,” said James Grimmelmann, a Cornell Tech law professor. “The government should not be in the business of ‘encouraging’ platforms to do more than they legally are required to — or than they could be required to under the First Amendment.”
“The government ought to do its ‘encouraging’ through laws that give platforms and users clear notice of what they’re allowed to do, not through vague exhortations that can easily turn into veiled threats,” Grimmelmann said.
And he's also right that it's difficult to take this administration's position seriously, especially given that the very same day it refused to join this effort, it was also pushing forward with its sketchy plan to force social media companies to deal with non-existent "conservative bias." So, on the one hand, the White House says it believes in the First Amendment and doesn't want governments to get involved, and at the very same time, it's suggesting that it can pressure social media into acting the way it wants. And, of course, this is the same White House that has made other efforts to get social media companies to remove content from governments it dislikes, such as Iran's.
So, yes, we should be wary of governments telling social media companies what content should and should not be allowed, and on that basis it's good that the White House declined to support the Christchurch Call. But it's difficult to believe it did so for any particularly principled reason.
Filed Under: censorship, christchurch, christchurch call, extremism, free speech, human rights, social media, terrorist content, voluntary, white house
Companies: facebook, google, microsoft, twitter, youtube