Intermediary Liability And Responsibilities Post-Brexit
from the the-entire-game-has-changed dept
This is a peculiar time to be an English lawyer. The UK has one foot outside the EU, and (on present intentions) the other foot will join it when the current transitional period expires at the end of 2020.
It is unclear how closely tied the UK will be to future EU law developments following any trade deal negotiated with the EU. As things stand, the UK will not have to implement future EU legislation, and is likely to have considerable freedom in many areas to depart from existing EU legislation.
The UK government has said that it has no plans to implement the EU Copyright Directive adopted in April 2019. Nor does it seem likely that it would have to follow whatever legislation may result from the European Commission's proposals for an EU Digital Services Act. Conversely, the government has also said that it has no current plans to change the existing intermediary liability provisions of the EU Electronic Commerce Directive, or the Directive's approach to prohibition of general monitoring obligations.
Looking across the Atlantic, there is the prospect of a future trade agreement between the UK and the USA. That has set off alarm bells in some quarters that the US government will want the UK to adopt an intermediary liability shield modeled on S.230 of the Communications Decency Act.
Domestically, the UK government is developing its Online Harms plans. The proposed legislation would impose a legal duty on search engines and on intermediaries that share user-generated content (UGC), requiring them to prevent or inhibit many varieties of illegal or harmful UGC. Although branded a duty of care, the proposal is more akin to a broadcast-style content regulatory regime than to a duty of care as a tort lawyer would understand it. The regime would most likely be managed and enforced by the current broadcast regulator, Ofcom. As matters stand the legislation would not define harm, leaving Ofcom to decide (subject to some specific carve-outs) what should be regarded as harmful.
All this is taking place against the background of the techlash. This is not the place to get into the merits and demerits of that debate. The aim of this piece is to take an educational ramble around the UK and EU legal landscape, pausing en route to inspect and illuminate some significant features.
Liability Versus Responsibilities
The tour begins by drawing a distinction between liability and responsibilities.
In the mid-1990s the focus was mostly on liability: the extent to which an intermediary can be held liable for unlawful activities and content of its users. The US and EU landmarks were S.230 CDA 1996 and S.512 DMCA 1998 (USA), and Articles 12 to 14 of the Electronic Commerce Directive 2000 (EU).
Liability presupposes the user doing something unlawful on the intermediary's platform. (Otherwise, there is nothing for the intermediary to be liable for.) The question is then whether the platform, as well as the user, should be made liable for the user's unlawful activity – and if so, in what circumstances. The risk (or otherwise) of potential liability may encourage the intermediary to act in certain ways. Liability regimes incentivise, but do not mandate.
Over time, the policy focus has expanded to take in responsibilities: putting an intermediary under a positive obligation to take action in relation to user content or activity.
A mandatory obligation to prevent users behaving in particular ways is different from being made liable for their unlawful activity. Liability arises from a degree of involvement in the primary unlawful activity of the user. Imposed responsibility does not necessarily rest on a user's unlawful behavior. The intermediary is placed under an independent, self-standing obligation – one that it alone can breach.
Responsibilities Imposed By Court Orders
Responsibilities first manifested themselves as mandatory obligations imposed on intermediaries by specific court orders, but still predicated on the existence of unlawful third party activities.
In the US this development withered on the vine with SOPA/PIPA in 2012. Not so in the EU, where copyright site blocking injunctions can be (and have often been) granted against internet service providers under Article 8(3) of the InfoSoc Directive. The Intellectual Property Enforcement Directive requires similar injunctions to be available for other IP rights. In the UK it is established that a site blocking injunction can be granted based on registered trade marks, and potentially in respect of other kinds of unlawful activity.
Limits to the actions that court orders can oblige intermediaries to take in respect of third party activities have been explored in numerous cases: amongst them, at EU Court of Justice level, detection and filtering of copyright infringing files in SABAM v Scarlet and SABAM v Netlog; detection and filtering of equivalent defamatory content in Glawischnig-Piesczek v Facebook; and worldwide delisting in Google v CNIL.
Such court orders tend not to be conceptualized in terms of remedying a breach by the intermediary. Rather, they are based on efficiency: the intermediary, as a choke point, should be co-opted as being in the best position to reduce unlawful activity by third parties. In UK law at least, the intermediary has no prior legal duty to assist – only to comply with an injunction if the court sees fit to grant one.
Responsibilities Imposed By Duties Of Care
Most recently the focus on intermediary responsibilities has broadened beyond specific court orders. It now includes the idea of a prior positive obligation, imposed on an intermediary by the general law, to take steps to reduce risks arising from user activities on the platform.
This kind of obligation, frequently labelled a duty of care, is contemplated by the UK Online Harms proposals and may form part of a future EU Digital Services Act.
In the form in which it has been adapted for the online sphere, a duty of care would impose positive obligations on the intermediary to prevent users from harming other users (and perhaps non-users). Putting aside the vexed question of what constitutes harm in the context of online speech, a legal responsibility to prevent activities of third parties is far from the norm. A typical duty of care is owed in respect of someone's own acts, not to prevent acts of third parties.
Although conceptually distinct from liability, an intermediary duty of care can interact and overlap with it. For example, a damages claim framed as breach of a duty of care may in some circumstances be barred by the ECD liability shields. In McFadden the rightsowner sought to hold a Wi-Fi operator liable for damages in respect of copyright infringement by users, founded on an allegation that the operator had breached a duty to secure its network. The CJEU found that the claim for damages was precluded by the Article 12 conduit shield, even though the claim was framed as breach of a duty rather than as liability for the users' copyright infringement as such.
At the other end of the spectrum, the English courts have held that if a regulatory sanction is sufficiently remote from specific user infringements as not to be in respect of those infringements, the sanction is not precluded by the ECD liability shields. The UK Online Harms proposals suggest that sanctions would be for breach of systemic duties, rather than penalties tied to failure to remove specific items of content.
Beyond Unlawfulness
Although intermediary liability is restricted to unlawfulness on the part of the user, responsibility is not. A self-standing duty of care is concerned with risk of harm. Harm may include unlawfulness, but is not limited to that.
The scope of such a duty of care depends critically on what is meant by harm. In English law, comparable offline duties of care are limited to objectively ascertainable physical injury and damage to physical property. The UK Online Harms proposals jettison that limitation in favor of undefined harm. Applied to lawful online speech, that is a subjective concept. As matters stand Ofcom, as the likely regulator, would in effect decide what does and does not constitute harm.
Article 15 ECommerce Directive
A preventative duty of care takes us into the territory of proactive monitoring and filtering. Article 15 ECD, which sits alongside the liability scheme enacted in Articles 12 to 14, prohibits Member States from imposing two kinds of obligation on conduits, caches or hosts: a general obligation to monitor information transmitted or stored, and a general obligation actively to seek facts or circumstances indicating illegal activity.
Article 15 does not on its face prohibit an obligation to seek out lawful but harmful activity, unless it constitutes a general obligation to monitor information. But in any event, for an EU Member State the EU Charter of Fundamental Rights would be engaged. The CJEU found the filtering obligations in Scarlet and Netlog to be not only in breach of Article 15, but also contrary to the EU Charter of Fundamental Rights. For a non-EU state such as the UK, the European Convention on Human Rights would be relevant.
So far, the scope of Article 15 has been tested in the context of court orders. The principles established are nevertheless applicable to duties of care imposed by the general law, with the caveat that Recital (48) permits hosts to be made subject to "duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities." What those "certain types" might be is not stated. In any event the recital does not on the face of it apply to lawful activities deemed to be harmful.
The Future Post-Brexit
Both the UK and the EU are currently heading down the road of imposing responsibilities on intermediaries, while professing to leave the liability provisions of the ECD untouched. That is conceptually possible for some kinds of responsibilities, but difficult to navigate in practice. Add the prohibition on general monitoring obligations and the task becomes harder, especially if the prohibition stems not just from the ECD (which could be diluted in future legislation) but from the EU Charter of Fundamental Rights and the ECHR.
The French Loi Avia, very much concerned with imposing responsibilities, was recently partially struck down by the French Constitutional Council. Whilst no doubt it will return in a modified form, it is nevertheless a salutary reminder of the relevance of fundamental rights.
As for UK-US trade discussions, Article 19.17 of the US-Mexico-Canada Agreement has set a precedent for the inclusion of intermediary liability provisions in trade agreements. Whether the wording of Article 19.17 really does mandate full S.230 immunity, as some have suggested, is another matter. Damian Collins MP, asking a Parliamentary Question on 2 March 2020, said:
"the US-Mexico-Canada trade agreement required the insertion of the section 230 provisions of the United States' Communications Decency Act, which give immunity from liability to the big social media companies."
The Trade Secretary replied:
"I can confirm that we stand by our online harms commitment, and nothing in the US trade deal will affect that."
Although the USMCA agreement uses language that tracks S.230, it does not fully replicate it. Notably, it does not use the magic word 'publisher' that appears in S.230 and which Zeran v America Online interpreted in 1997 as embracing both strict primary publisher liability and knowledge-based secondary publisher (a.k.a. distributor) liability.
Instead, Article 19.17 precludes holding an intermediary liable as an "information content provider," defined as "a person or entity that creates or develops, in whole or in part, information provided through the Internet or another interactive computer service." That aptly describes a primary publisher. But if that language does not cover secondary publishers, then its appearance in a UK-US trade agreement would seem not to preclude a hosting liability regime akin to the existing ECD Article 14.
Graham Smith is Of Counsel at Bird & Bird LLP, London, England. He is the editor and main author of the English law textbook Internet Law and Regulation (5th ed 2020, Sweet & Maxwell). The views expressed in this article are the personal views of the author.
Filed Under: brexit, content moderation, digital services act, duty of care, eu, eu copyright directive, liability, online harms, responsibilities
Reader Comments
So you cannot monitor everything your users do, but you must be watching when they do something illegal. Pity that working crystal balls are rarer than hens' teeth.
The shape of the "duty of care".
From my experience in other regulated industries, it will play out like this.
Ofcom will hold a consultation exercise in which they talk to the major companies that they plan to regulate (who may well form a lobby organisation for this purpose). Out of that will come an official "guidance" document. I put the term "guidance" in quotes because, while it won't be mandatory to follow this document, the regulator will be on record as saying that, if you do so, you have jumped high enough. "How high?" is the fundamental question that any regulated company wants answered, so in practice following the guidance will become official policy at all the regulated companies.
Hence the "guidance" is where the rubber actually meets the road. The industry will have two concerns: 1: make sure the guidance clearly specifies exactly how high they must jump, and 2: make sure the height is optimal for their businesses. This is not necessarily "as low as possible" because this height is a barrier to competition, which is always nice for an incumbent to have.
The civil servants on the other side of the table will want to make the process effective at preventing online harms, so it becomes a matter of horse-trading over costs and perceived benefits. The actual end users who will suffer the harms aren't at the table, and neither are other users who will suffer from being over-moderated. Hence the result is likely to reflect industry concerns more than anything else. In theory the civil servants should be protecting the users, but they have to do so through the lens of government policy, and government policy on this issue is primarily aimed at staying out of the news.
Given the vagueness of the top-level requirement of "prevent online harms", the guidance document will probably opt for a set of measurable goals, such as 95% of user flags checked by a human within 1 hour, defined levels of keyword scanning, use of image signatures etc. What it won't do is require anything impossible like preventing 100% of "harmful" content being posted ever. It probably won't even put accuracy requirements on the review process, because how do you measure accuracy?
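To make that sort of measurable goal concrete, here is a minimal sketch, in Python, of how a platform might check itself against a hypothetical "95% of flags reviewed within 1 hour" target. The data structure and function name are illustrative assumptions on my part, not anything drawn from actual or proposed guidance:

from datetime import datetime, timedelta

def sla_compliance(flags, window=timedelta(hours=1)):
    """Return the fraction of user flags reviewed by a human within `window`.

    `flags` is an iterable of (flagged_at, reviewed_at) datetime pairs;
    reviewed_at is None for flags that no human has looked at yet.
    """
    flags = list(flags)
    if not flags:
        return 1.0  # no flags raised, so nothing to miss
    on_time = sum(
        1 for flagged_at, reviewed_at in flags
        if reviewed_at is not None and reviewed_at - flagged_at <= window
    )
    return on_time / len(flags)

# Toy example: compare against the hypothetical 95% target.
flag_log = [
    (datetime(2020, 7, 1, 9, 0), datetime(2020, 7, 1, 9, 20)),  # on time
    (datetime(2020, 7, 1, 9, 5), datetime(2020, 7, 1, 11, 0)),  # too slow
    (datetime(2020, 7, 1, 9, 10), None),                        # unreviewed
]
print(sla_compliance(flag_log) >= 0.95)  # False for this toy log

Note that a target like this measures speed, not correctness: it says nothing about whether the human reviewer got the decision right, which is exactly the accuracy gap mentioned above.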
You can expect an appeals process to make an appearance here, but with much lower requirements on its effectiveness and timeliness. Nobody wants to deal with appeals, so in practice it's going to be nigh on impossible to get anything reversed.
There is of course no democracy involved here. Parliament is 2 or 3 levels away from this level of detail. The guidance probably won't even be a government publication: it's more likely to be published by that lobby group I mentioned for £200 per copy. That way ordinary members of the public can't get hold of it and start arguing that it wasn't followed in their case.
A year or so after this system comes into force there will probably be an exposé on Panorama about how this regulatory system is failing to prevent some online harms. It will feature silhouettes of frightened or abused women with traumatic stories set against bland official statements about the commitment of government and industry working together to stop this sort of thing.