from the out-with-the-good,-in-with-the-bad dept
New rules for social media companies and other hosts of third-party content have just gone into effect in India. The changes proposed to India's 2018 Intermediary Guidelines are now in force, allowing the government to insert itself into content moderation efforts and make demands some tech companies simply won't be able to meet.
Now, under the threat of fines and jail time, platforms like Twitter (itself a recent combatant of the Indian government over its attempts to silence people protesting yet another bad law) can be held directly responsible for any "illegal" content they host, even as the government pays lip service to honoring the long-standing intermediary protections that immunized them from the actions of their users.
Here's a really bland and misleading summary of the new requirements from the Economic Times, India's most popular business newspaper:
The guidelines propose additional responsibilities on social media companies. These include verifying users through mobile numbers, tracing origin of messages required by a court order and building automated tools to identify child pornography and terror-related content. All these requirements come under the ambit of due diligence.
This sounds like pretty reasonable stuff. But it isn't, because the rules go much further than this summary suggests, turning a whole lot of online discourse into potentially illegal content. Intermediaries must inform users they cannot post any information that:
(a) belongs to another person and to which the user does not have any right to;
(b) is grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever;
(c) harm minors in any way;
(d) infringes any patent, trademark, copyright or other proprietary rights;
(e) violates any law for the time being in force;
(f) deceives or misleads the addressee about the origin of such messages or communicates any information which is grossly offensive or menacing in nature;
(g) impersonates another person;
(h) contains software viruses or any other computer code, files or programs designed to interrupt, destroy or limit the functionality of any computer resource;
(i) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign states, or public order, or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting any other nation;
(j) threatens public health or safety; promotion of cigarettes or any other tobacco products or consumption of intoxicant including alcohol and Electronic Nicotine Delivery System (ENDS) & like products that enable nicotine delivery except for the purpose & in the manner and to the extent, as may be approved under the Drugs and Cosmetics Act, 1940 and Rules made thereunder;
(k) threatens critical information infrastructure.
The new mandates demand that platforms operating in India proactively scan all uploaded content to ensure it complies with India's laws:
The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.
This obligation is not only impossible to comply with (and prohibitively expensive for smaller platforms, sites, and online forums without access to AI tools), it opens platforms up to prosecution simply for being unable to do the impossible. And complying with it undercuts the Safe Harbour protections granted to intermediaries by the Indian government.
If you're moderating all content before it goes "live," you can no longer claim you're not acting as an editor or curator. The Indian government grants Safe Harbour only to "passive" conduits of information. The new rules pretty much abolish those protections, because compliance turns intermediaries from "passive" into "active."
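To make the gap concrete, here's a minimal sketch, in Python, of the one kind of "automated tool" that's actually tractable: matching uploads against fingerprints of content already identified as unlawful. Everything here (the blocklist, the handler, the function names) is invented for illustration, not taken from the rules or any platform's real pipeline:

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of files already judged unlawful.
# (Deployed systems use perceptual hashes so re-encoding a file doesn't
# dodge the match; exact hashing just keeps this sketch self-contained.)
BLOCKLIST = {
    # SHA-256 of b"test", listed so the demo below trips the filter.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_unlawful(upload: bytes) -> bool:
    """True if the upload's fingerprint appears on the blocklist."""
    return hashlib.sha256(upload).hexdigest() in BLOCKLIST

def handle_upload(upload: bytes) -> str:
    # Pre-publication gate: nothing goes live until it clears the filter.
    # This is exactly the "active" editorial step that forfeits passive-
    # conduit status -- and it only catches content someone already flagged.
    return "rejected" if matches_known_unlawful(upload) else "published"

print(handle_upload(b"test"))        # rejected: fingerprint is on the list
print(handle_upload(b"new speech"))  # published: novel content sails through
```

Note what the sketch can't do: nothing in it can decide whether a brand-new post is "blasphemous," "disparaging," or "grossly offensive" under clauses (b), (f), and (i) above. Those are judgment calls, and no upload filter makes them.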
Broader and broader it gets, with the Indian government rewriting its "national security only" demands to cover "investigation or detection or prosecution or prevention of offence(s)." In other words, the Indian government can force platforms and services to provide information and assistance to almost any government agency, for almost any reason, within 72 hours of notification.
This assistance includes "tracing the origin" of illegal content -- a demand some platforms may find impossible to meet, since they don't collect enough personal information to make identification possible. Any information dug up by intermediaries in support of government action must be retained for 180 days, whether or not the government ever makes use of it.
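The traceability demand is easiest to see as a data problem. Here's a minimal, hypothetical sketch (the message store, field names, and dates are all invented) of what "tracing the origin" looks like on a platform that keeps only a random session token per message, plus the 180-day retention check layered on top:

```python
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=180)  # the rules' mandated retention window

# Hypothetical message store for a privacy-minimal platform: a random
# per-session token and a timestamp, but no name, phone number, or IP.
MESSAGES = {
    "msg-123": {
        "session_token": "a8f1c2d9",
        "posted_at": datetime(2019, 2, 1, tzinfo=timezone.utc),
    },
}

def trace_origin(message_id: str):
    """Hand over whatever metadata exists for a traced message.

    If identifying data was never collected, this is all there is: a
    token that maps to no real-world identity. The order is demanding
    records that were never created.
    """
    return MESSAGES.get(message_id)

def must_still_retain(collected_at: datetime, now: datetime) -> bool:
    """True while traced records sit inside the 180-day window, whether
    or not the government ever uses them."""
    return now - collected_at < RETENTION_PERIOD

collected_at = datetime(2019, 2, 26, tzinfo=timezone.utc)  # when the trace ran
print(trace_origin("msg-123"))  # a token and a timestamp -- no identity to report
print(must_still_retain(collected_at, datetime(2019, 6, 1, tzinfo=timezone.utc)))  # True
```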
More burdens: any intermediary with more than 5 million users must establish a permanent physical presence in India and provide on-call service 24/7. The takedown window has also been tightened, from 36 hours after notification to 24.
Very few companies will be able to comply with most of these directives. No company will be able to comply with them completely. And with the government insisting on adding more "eye of the beholder" content to the illegal list, the law encourages pre-censorship of any questionable content and invites regulators and other government agencies to get into the moderation business.
Filed Under: censorship, content moderation, filters, free speech, general monitoring, india, intermediary liability, social media