China's New Internet Regulations, Building On Western Internet Regulations, Require Algorithms To 'Vigorously Disseminate Positive Energy'
from the justifying-actual-suppression-of-ideas dept
When the UK announced its rebranded "Online Safety" bill (originally, the "Online Harms" bill), we noted that the mechanism it included was effectively identical to the original Great Firewall of China. That is, when China first began censoring its internet, rather than telling websites explicitly what needed to be taken down, it just gave vague policy guidance about what "harmful" information would be a problem if found online, and backed that up with a serious threat: if any service provider was found not to have taken down information the government deemed problematic, it would face serious consequences. There was, of course, no corresponding threat for taking down information that should not have been taken down. The end result was clear: when in doubt, take it down.
It remains preposterous to me that, all across Western democracies, we've seen governments taking the same basic approach -- insisting that platforms need to be much more aggressive in pulling down "bad" information (loosely defined), with significant liability attached to leaving it up (even if the content is legal), and little in the way of punishment for overblocking. And, over and over and over again, studies have shown that when you set up a liability regime this way, you get massive overblocking. It seems that some countries see that as a feature, not a bug.
And, of course, with that approach now being picked up in other countries, China has apparently decided to ramp up its own. Under the guise of stopping "harmful" information online, China has launched a new crackdown campaign and released draft regulations to punish internet companies that don't remove such information:
China kicked off a two-month campaign to crack down on commercial platforms and social media accounts that post finance-related information that’s deemed harmful to its economy.
The initiative will focus on rectifying violations including those that “maliciously” bad-mouth China’s financial markets and falsely interpret domestic policies and economic data, the Cyberspace Administration of China said in a statement late Friday. Those who republish foreign media reports or commentaries that falsely interpret domestic financial topics “without taking a stance or making a judgment” will also be targeted, it added.
Part of this new effort is a set of draft regulations for "algorithmic recommendation" systems. Kendra Schaefer wrote a detailed Twitter thread analyzing the proposal and just how far-reaching it is.
Among the new rules (as translated by Stanford's DigiChina center) are some... um... interesting ones.
Algorithmic recommendation service providers shall uphold mainstream value orientations, optimize algorithmic recommendation service mechanisms, vigorously disseminate positive energy, and advance the use of algorithms upwards and in the direction of good.
Algorithmic recommendation service providers may not use algorithmic recommendation services to engage in activities harming national security, upsetting the economic order and social order, infringing the lawful rights and interests of other persons, and other such acts prohibited by laws and administrative regulations. They may not use algorithmic recommendation services to disseminate information prohibited by laws and administrative regulations.
The rules also limit what kinds of keywords can be used: "harmful" information cannot be entered as keywords, nor can providers set up "biased user tags" (whatever that means):
Algorithmic recommendation service providers shall strengthen user model and user tagging management and perfect norms for logging interests in user models. They may not enter unlawful or harmful information as keywords into user interests or make them into user tags to use them as a basis for recommending information content, and may not set up discriminatory or biased user tags.
There are also rules against using algorithms to manipulate systems -- and against using them to "shield information," which seems to contradict the requirements to block lots of other information.
Algorithmic recommendation service providers may not use algorithms to falsely register users, illegally trade accounts, or manipulate user accounts; or for false likes, comments, reshares, web page navigation, etc.; or to carry out flow falsification or flow hijack. They may not use algorithms to shield information, over-recommend, manipulate topic lists or search result rankings, or control hot search terms or selections and other such interventions in information presentation; or to carry out self-preferencing, improper competition, influence on online public opinion, or evasion of supervision and management.
That one about "influencing public opinion" sure is interesting. How do you avoid influencing public opinion?
Like many recent regulatory proposals in the US, Canada, the EU, and elsewhere, China's rules lean heavily on "transparency" when it comes to algorithms:
Algorithmic recommendation service providers shall notify users in a clear manner about the situation of the algorithmic recommendation services they provide, and publicize the basic principles, purposes and motives, operational mechanisms, etc., of the algorithmic recommendation services in a suitable manner.
But, of course, viewing this in the Chinese context shows why such mandatory transparency has risks. In this case, China wants this transparency so it can further regulate what information people see via algorithmic recommendations, and so it can intimidate or threaten companies into not spreading information the government wants suppressed.
Again, similar to various internet regulations in the West, China's rules include a nod toward end-user control and empowerment:
Algorithmic recommendation service providers shall provide users with a choice to not target their individual characteristics, or provide users with a convenient option to switch off algorithmic recommendation services. Where users choose to switch off algorithmic recommendation services, the algorithmic recommendation service provider shall immediately cease providing related services.
Algorithmic recommendation service providers shall provide users with functions to choose, revise, or delete user tags used for algorithmic recommendation services.
Where users believe algorithmic recommendation service providers use algorithms in a manner creating a major influence on their rights and interests, they have the right to require the algorithmic recommendation service provider to give an explanation and adopt related measures to improve or remedy the situation.
But, again, when viewed in the Chinese context, it's easy to see how this kind of mandate can be heavily abused.
There is also a "think of the children" provision, because no internet regulation these days is complete without that kind of moral-panic heartstring pull:
Where algorithmic recommendation service providers provide services to minors, they shall fulfill duties for the online protection of minors according to the law, and make it convenient for minors to obtain information content beneficial to their physical and mental health, through developing models suited for use with minors, providing services suited to the specific characteristics of minors, etc.
Algorithmic recommendation service providers may not push information content toward minor users that may incite the minor to imitate unsafe conduct, or acts violating social morals, or lead the minor towards harmful tendencies or may influence minors’ physical and mental health in other ways; and they may not use algorithmic recommendation services to lead minors to online addiction.
Also, there's a built-in complaint mechanism.
Algorithmic recommendation service providers shall accept social supervision, set up convenient complaints and reporting interfaces, and promptly accept and handle complaints and reports from the public.
Algorithmic recommendation service providers shall establish user appeals channels and mechanisms, to standardize the handling of user appeals and the timely provision of feedback, and realistically ensure the lawful rights and interests of users.
Failing to abide by these rules will get companies fined (relatively small amounts -- about $1k to $5k -- at first). However, it also opens them up to significant criminal liability.
As Schaefer notes, this is China "going beyond" the internet regulations in the EU. But, what's left unsaid is that some of this is enabled by just how far the EU and others have gone in trying to get the internet to paper over the societal problems created by government failures in other realms.
It should lead us to wonder why China is so eager to embrace and extend this approach to internet regulation -- and to treat that eagerness as a warning about the approach, rather than an endorsement of it.
Filed Under: algorithms, china, intermediary liability, internet regulations, online harms, online safety
Reader Comments
Not sure what the big deal is with the "vigorously disseminate positive energy" part. Just program some feng shui into the algorithm, turn it up to 11. Boom! Problem solved.
China: Why don't you knock it off with the negative waves!
A social credit score encompasses everyone's digital and actual life in China. The result is a never-ending story version of Black Mirror meets 1984. Wait until they make everyone worship a giant hologram of Mao Tse-tung in Tiananmen Square, where the massacre of students took place. The communist government will do this in the near future just to rub it in Hong Kong's, the USA's, and North Korea's faces.
Re:
FTFY.
Re: Re:
Agreed. China is never going to rub anything in North Korea's face, really.
No matter what you do
People are inquisitive.
We like to know things and to understand what's happening in our lives.
Once you start with the "We Are the Champions" routine -- we are the greatest, we love our flag, we are the best -- and plaster it on every billboard and TV, things seem to go downhill. When everyone has to believe we are the best country, that we are number one, and that we all have to be the same, and people keep pointing fingers at whoever isn't like them, it just gets people confused. It's kind of similar to how the USA was during WW1 and WW2 and every war we have had.
It gets worse. Declare that companies and corporations have no need to explain their profit margins. Don't enforce rules like reporting the truth to your stockholders. Make everything sound good.
Then get the government to back you up, and blackball international companies no matter the cost -- even if you have to lowball those corporations' goods with your own.
But who gets to pay for all this great nationalism? How do you wade through all the BS and get to the bottom of how well your nation is doing, or how well your companies are doing? The BS goes both ways when you allow lies and false truths to abound.
And then they wonder how we all got so confused.
Government of China declares itself illegal, fines and imprisons entire Party.
I feel there's a limit to how much the EU can be said to be responsible for the antics of a pseudo-fascist totalitarian government. EU regulations may have, to some extent, informed the new policy, but in the end it would have happened anyway in one form or another. Hell, even if they were listening to Mike's advice, they'd just implement the same sort of heavy-handed censorship regime in a more effective manner.
Re:
Agreed. I take issue with the idea that governments that have shown they actually care about improving things for their citizens have to walk on eggshells, ever so slowly, because “What if oppressive governments see this and use it to justify being even more oppressive?” Oppressive regimes will oppress regardless of what laws liberal democracies put into place.
I also take issue with the idea that the laws Western governments pass to tackle issues the Internet has given rise to are just politicians trying to paper over those governments’ supposed failures to address societal problems.
The Internet is a part of society. We have to engage with and find solutions to the ways that the Internet has exacerbated those societal problems. If that involves Western democracies crafting new laws and legal frameworks, then so be it.
Re: Re:
I think the point is more that the UK should take stock and consider whether the way they're dealing with the internet is characteristic of a liberal democracy, or more like a dictatorship. If they're moving in the same direction as China, that's not a good sign.
Nobody is arguing that. The question is what form those laws and frameworks should take.
Re: Re: Re:
I disagree that the article is mainly about the UK. Mike talks about the US, EU, and Canada as well; he looks to be using the UK law as a framing device to lead into the rest of the article, where he discusses other governments.
The main thrust of the article, to me at least, is Mike arguing that China is doing these things to oppress people, that it's using the laws of Western democracies as an excuse, and that we should slow down or stop before we give China any more ideas. I disagree. China will do whatever it wants, and if there's an opportunity for its government to point to a law in a Western democracy as a faux justification for a similarly framed, but not similar-in-action, law of its own, it will take it -- and use that opportunity to sow discord among that country's politicians and people for “enabling China”.
My issue is that the questions and debates about what form those laws and frameworks should take keep getting stalled and slowed by tech advocates wringing their hands over an infinite number of trade-offs and consequences, and assuming that the free market running its course and multi-billion-dollar corporations acting voluntarily will solve most of the ways the Internet exacerbates societal problems. Those advocates could do a lot of good by lending their expertise to progressive politicians who want, in good faith, to build out those laws and frameworks. Working together so that it's done right, at a moderate pace, would be nice -- rather than endless words of caution that slow it down so much it might as well not be happening at all, or, worse, having it done quickly in a way that actually screws a lot of things up.
Re: Re: Re: Re:
I would phrase that as a recognition that these are extremely complicated and difficult issues, and that a lot of the things people want the government to do to "fix" them would violate freedom of speech principles. It's rare to come across an internet reform proposal that wouldn't make things much worse than they are now.