China's New Internet Regulations, Building On Western Internet Regulations, Require Algorithms To 'Vigorously Disseminate Positive Energy'
from the justifying-actual-suppression-of-ideas dept
When the UK announced its rebranded "Online Safety" bill (originally, the "Online Harms" bill) we noted that the mechanism included was effectively identical to the original Great Firewall of China. That is, when China first began censoring its internet, rather than telling websites explicitly what needed to be taken down, it just gave vague policy guidance about what "harmful" information would be a problem if it was found online, and backed that up with a serious threat: if any service provider was found not to have taken down information the government deemed problematic, it would face serious consequences. There was, of course, no such threat for taking down information that should not have been taken down. The end result was clear: when in doubt, take it down.
It remains preposterous to me that, all across Western democracies, governments have taken the same basic approach -- insisting that platforms need to be much more aggressive in pulling down "bad" information (loosely defined), with significant liability attached to leaving it up (even if the content is legal), and little in the way of punishment for overblocking. And, over and over and over again, studies have shown that when you set up a liability regime this way, you get massive overblocking. It seems that some countries see that as a feature, not a bug.
And, of course, with that approach now being picked up in other countries, China has apparently decided to ramp up its own approach. Under the guise of stopping "harmful" information online, China has released a draft of new regulations to punish internet companies that don't remove harmful information:
China kicked off a two-month campaign to crack down on commercial platforms and social media accounts that post finance-related information that’s deemed harmful to its economy.
The initiative will focus on rectifying violations including those that “maliciously” bad-mouth China’s financial markets and falsely interpret domestic policies and economic data, the Cyberspace Administration of China said in a statement late Friday. Those who republish foreign media reports or commentaries that falsely interpret domestic financial topics “without taking a stance or making a judgment” will also be targeted, it added.
Part of this new effort is a set of draft regulations for "algorithmic recommendation" systems. Kendra Schaefer wrote out a detailed Twitter thread analyzing the proposal, and just how far-reaching it is.
Among the new rules (as translated by Stanford's DigiChina center) are some... um... interesting ones.
Algorithmic recommendation service providers shall uphold mainstream value orientations, optimize algorithmic recommendation service mechanisms, vigorously disseminate positive energy, and advance the use of algorithms upwards and in the direction of good.
Algorithmic recommendation service providers may not use algorithmic recommendation services to engage in activities harming national security, upsetting the economic order and social order, infringing the lawful rights and interests of other persons, and other such acts prohibited by laws and administrative regulations. They may not use algorithmic recommendation services to disseminate information prohibited by laws and administrative regulations.
It also limits what kinds of keywords can be used: "harmful" information cannot be entered as a keyword for recommendations, nor can providers set up "discriminatory or biased" user tags (whatever that means):
Algorithmic recommendation service providers shall strengthen user model and user tagging management and perfect norms for logging interests in user models. They may not enter unlawful or harmful information as keywords into user interests or make them into user tags to use them as a basis for recommending information content, and may not set up discriminatory or biased user tags.
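Mechanically, complying with that would presumably mean wedging some kind of screening layer between user profiling and the recommender itself. Here's a minimal, purely hypothetical sketch of what that might look like -- the tag names and blocklists are invented for illustration, not drawn from the regulation:

```python
# Hypothetical tag-screening layer a provider might add to comply.
# The blocklists below are stand-ins; the regulation doesn't define
# what counts as "harmful" or "biased," which is rather the point.

BANNED_KEYWORDS = {"unlawful-topic", "harmful-keyword"}
PROTECTED_ATTRIBUTES = {"ethnicity", "religion"}  # "discriminatory or biased" tag categories

def screen_user_tags(tags: set[str]) -> set[str]:
    """Drop any interest tag flagged by the compliance blocklists
    before it can be used as a basis for recommending content."""
    return {
        t for t in tags
        if t not in BANNED_KEYWORDS and t not in PROTECTED_ATTRIBUTES
    }

print(screen_user_tags({"finance-news", "ethnicity", "harmful-keyword"}))
# -> {'finance-news'}
```

And since the regulation never defines what belongs on those lists, the safe move for any provider is to make them as broad as possible -- the same when-in-doubt-take-it-down dynamic described above.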
There are also rules against using algorithms to manipulate systems -- but also against using them to "shield information," which seems to contradict the requirements to block lots of information.
Algorithmic recommendation service providers may not use algorithms to falsely register users, illegally trade accounts, or manipulate user accounts; or for false likes, comments, reshares, web page navigation, etc.; or to carry out flow falsification or flow hijack. They may not use algorithms to shield information, over-recommend, manipulate topic lists or search result rankings, or control hot search terms or selections and other such interventions in information presentation; or to carry out self-preferencing, improper competition, influence on online public opinion, or evasion of supervision and management.
That one about "influencing public opinion" sure is interesting. How do you avoid influencing public opinion?
Like many recent regulatory proposals in the US, Canada, the EU and elsewhere, China leans heavily on "transparency" when it comes to algorithms:
Algorithmic recommendation service providers shall notify users in a clear manner about the situation of the algorithmic recommendation services they provide, and publicize the basic principles, purposes and motives, operational mechanisms, etc., of the algorithmic recommendation services in a suitable manner.
But, of course, viewing this in the Chinese context shows why such mandatory transparency has risks. In this case, China wants this transparency so it can further regulate what information people will see via algorithmic recommendations, and to pressure companies into not spreading information the government wants suppressed.
Again, similar to various internet regulations in the West, China's rules include a nod towards end-user control and empowerment:
Algorithmic recommendation service providers shall provide users with a choice to not target their individual characteristics, or provide users with a convenient option to switch off algorithmic recommendation services. Where users choose to switch off algorithmic recommendation services, the algorithmic recommendation service provider shall immediately cease providing related services.
Algorithmic recommendation service providers shall provide users with functions to choose, revise, or delete user tags used for algorithmic recommendation services.
Where users believe algorithmic recommendation service providers use algorithms in a manner creating a major influence on their rights and interests, they have the right to require the algorithmic recommendation service provider to give an explanation and adopt related measures to improve or remedy the situation.
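In engineering terms, that trio of requirements boils down to an off-switch for personalized ranking plus user-editable interest tags. A minimal sketch of what such a surface might look like, with all names hypothetical and no claim that any real platform works this way:

```python
# Illustrative sketch of the mandated user controls: an opt-out
# toggle for algorithmic recommendation and editable interest tags.
# All names are invented; this mirrors the draft's text, not any
# actual implementation.

from dataclasses import dataclass, field

@dataclass
class UserPrefs:
    algo_recs_enabled: bool = True
    interest_tags: set[str] = field(default_factory=set)

def feed_for(user: UserPrefs, ranked: list[str], chronological: list[str]) -> list[str]:
    """Serve the personalized feed only if the user hasn't opted out;
    the draft says providers must "immediately cease" otherwise."""
    return ranked if user.algo_recs_enabled else chronological

def delete_tag(user: UserPrefs, tag: str) -> None:
    """Let a user remove a tag used to target recommendations."""
    user.interest_tags.discard(tag)
```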
But, again, when viewed in the Chinese context, you can easily see how this kind of mandate can be abused heavily.
There is also a "think of the children" provision, because no internet regulations these days are complete without such a moral panic heartstring pull:
Where algorithmic recommendation service providers provide services to minors, they shall fulfill duties for the online protection of minors according to the law, and make it convenient for minors to obtain information content beneficial to their physical and mental health, through developing models suited for use with minors, providing services suited to the specific characteristics of minors, etc.
Algorithmic recommendation service providers may not push information content toward minor users that may incite the minor to imitate unsafe conduct, or acts violating social morals, or lead the minor towards harmful tendencies or may influence minors’ physical and mental health in other ways; and they may not use algorithmic recommendation services to lead minors to online addiction.
Also, there's a built-in complaint mechanism.
Algorithmic recommendation service providers shall accept social supervision, set up convenient complaints and reporting interfaces, and promptly accept and handle complaints and reports from the public.
Algorithmic recommendation service providers shall establish user appeals channels and mechanisms, to standardize the handling of user appeals and the timely provision of feedback, and realistically ensure the lawful rights and interests of users.
Failing to abide by these rules will get the companies fined (relatively small amounts -- about $1k to $5k -- at first). However, it also opens them up to significant criminal liability.
As Schaefer notes, this is China "going beyond" the internet regulations in the EU. But, what's left unsaid is that some of this is enabled by just how far the EU and others have gone in trying to get the internet to paper over the societal problems created by government failures in other realms.
That China is so eager to embrace and extend this approach to internet regulation should make us question the approach itself, rather than be taken as an endorsement of it.
Filed Under: algorithms, china, intermediary liability, internet regulations, online harms, online safety