from the oh-come-on dept
The idea of an open "global" internet keeps taking a beating -- and the worst offender is not, say, China or Russia, but rather the EU. We've already discussed things like the EU Copyright Directive and the Terrorist Content Regulation, but it seems like every day there's something new and more ridiculous -- and the latest may be coming from the Court of Justice of the EU (CJEU). The CJEU is frequently a bulwark against overreaching internet laws, but sometimes (too frequently, unfortunately) it gets things really, really wrong -- saying the "Right to be Forgotten" applied to search engines was one terrible example.
And now, the CJEU's Advocate General has issued a recommendation in a new case that would be hugely problematic for the idea of a global open internet that isn't weighted down with censorship filters. The Advocate General's recommendations are just that: recommendations for the CJEU to consider before making a final ruling. However, as we've noted in the past, the CJEU frequently accepts the AG's recommendations. Not always. But frequently.
The case here involves an attempt to get Facebook to delete content critical of a politician in Austria under Austrian law. In the US, of course, social media companies are not required to delete such content. The content itself is usually protected by the 1st Amendment, and the platforms are then protected by Section 230 of the Communications Decency Act, which prevents them from being held liable even if the content in question does violate the law (though, importantly, most platforms will still remove such content if a court has determined that it violates the law).
In the EU, the intermediary liability scheme is significantly weaker. Under the E-Commerce Directive's rules, there is an exemption from liability, but it's much closer to the DMCA's safe harbors for copyright-infringing material in the US. That is, the liability exemption only applies if the platform doesn't have knowledge of the "illegal activity," and if it does gain such knowledge, it needs to remove the content. There is also a prohibition on any "general monitoring" requirement (i.e., filters).
The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva Glawischnig-Piesczek, and adding some comments along with the link. Specifically:
That user also published, in connection with that article, an accompanying disparaging comment about the applicant accusing her of being a ‘lousy traitor of the people’, a ‘corrupt oaf’ and a member of a ‘fascist party’.
In the US -- some silly lawsuits notwithstanding -- such statements would be clearly protected by the 1st Amendment. Apparently not so much in Austria. But then there's the question of Facebook's responsibility.
An Austrian court ordered Facebook to remove the content, which Facebook did by blocking access to it for anyone in Austria. The original demand would also have required Facebook to prevent "equivalent content" from appearing as well. On appeal, a court rejected Facebook's argument that it only had to comply in Austria, but also said that the "equivalent content" obligation should be limited to cases where someone alerted Facebook to the "equivalent content" being posted (and, thus, would not be a general monitoring requirement).
From there, the case went to the CJEU, which was asked to determine whether such blocking needs to be global and how the "equivalent content" question should be handled.
And then, basically, everything goes off the rails. First up, the Advocate General seems to think -- much like many misguided folks argue about CDA 230 -- that there's some sort of "neutrality" requirement for internet platforms, and that doing any sort of monitoring might cost them their safe harbors for no longer being neutral. This is mind-blowingly stupid.
It should be observed that Article 15(1) of Directive 2000/31 prohibits Member States from imposing a general obligation on, among others, providers of services whose activity consists in storing information to monitor the information which they store or a general obligation actively to seek facts or circumstances indicating illegal activity. Furthermore, it is apparent from the case-law that that provision precludes, in particular, a host provider whose conduct is limited to that of an intermediary service provider from being ordered to monitor all (9) or virtually all (10) of the data of all users of its service in order to prevent any future infringement.
If, contrary to that provision, a Member State were able, in the context of an injunction, to impose a general monitoring obligation on a host provider, it cannot be precluded that the latter might well lose the status of intermediary service provider and the immunity that goes with it. In fact, the role of a host provider carrying out general monitoring would no longer be neutral. The activity of that host provider would not retain its technical, automatic and passive nature, which would imply that that host provider would be aware of the information stored and would monitor it.
Say what now? It's true that the law does not require general monitoring (and explicitly rejects it), but the corollary -- that deciding to do general monitoring wipes out your safe harbors -- is... crazy. Here, the AG is basically saying we can't have a general monitoring obligation (good) because that would overturn the requirement that platforms be neutral (crazy):
Admittedly, Article 14(1)(a) of Directive 2000/31 makes the liability of an intermediary service provider subject to actual knowledge of the illegal activity or information. However, having regard to a general monitoring obligation, the illegal nature of any activity or information might be considered to be automatically brought to the knowledge of that intermediary service provider and the latter would have to remove the information or disable access to it without having been aware of its illegal content. (11) Consequently, the logic of relative immunity from liability for the information stored by an intermediary service provider would be systematically overturned, which would undermine the practical effect of Article 14(1) of Directive 2000/31.
In short, the role of a host provider carrying out such general monitoring would no longer be neutral, since the activity of that host provider would no longer retain its technical, automatic and passive nature, which would imply that the host provider would be aware of the information stored and would monitor that information. Consequently, the implementation of a general monitoring obligation, imposed on a host provider in the context of an injunction authorised, prima facie, under Article 14(3) of Directive 2000/31, could render Article 14 of that directive inapplicable to that host provider.
I thus infer from a reading of Article 14(3) in conjunction with Article 15(1) of Directive 2000/31 that an obligation imposed on an intermediary service provider in the context of an injunction cannot have the consequence that, by reference to all or virtually all of the information stored, the role of that intermediary service provider is no longer neutral in the sense described in the preceding point.
So the AG comes to a good result through horrifically bad reasoning.
However, while rejecting general monitoring, the AG then goes on to explain why more specific monitoring and censorship is probably just fine and dandy, with a somewhat odd aside about how the "duration" of the monitoring can make it okay. The key point, however, is that the AG has no problem saying that, once something is deemed "infringing," the internet platform can be required to remove new instances of the same content:
In fact, as is clear from my analysis, a host provider may be ordered to prevent any further infringement of the same type and by the same recipient of an information society service. (24) Such a situation does indeed represent a specific case of an infringement that has actually been identified, so that the obligation to identify, among the information originating from a single user, the information identical to that characterised as illegal does not constitute a general monitoring obligation.
To my mind, the same applies with regard to information identical to the information characterised as illegal which is disseminated by other users. I am aware of the fact that this reasoning has the effect that the personal scope of a monitoring obligation encompasses every user and, accordingly, all the information disseminated via a platform.
Nonetheless, an obligation to seek and identify information identical to the information that has been characterised as illegal by the court seised is always targeted at the specific case of an infringement. In addition, the present case relates to an obligation imposed in the context of an interlocutory order, which is effective until the proceedings are definitively closed. Thus, such an obligation imposed on a host provider is, by the nature of things, limited in time.
And then, based on nothing at all, the AG pulls out the "magic software will make this work" reasoning, insisting that software tools will make sure that the right content is properly censored:
Furthermore, the reproduction of the same content by any user of a social network platform seems to me, as a general rule, to be capable of being detected with the help of software tools, without the host provider being obliged to employ active non-automatic filtering of all the information disseminated via its platform.
This statement... is just wrong? First off, it acts as if using software to scan for the same content is somehow not a filter. But it is. It also shows a real misunderstanding of how effective filters actually are (and how easily some people can trick them). And there's no mention of false positives. I mean, in this case, a politician was called a corrupt oaf. How is Facebook supposed to block that? Is any use of the phrase "corrupt oaf" now blocked? Perhaps the phrase "corrupt oaf" and the politician's name, Eva Glawischnig-Piesczek, would need to appear together to be blocked. But, in that case, does it mean that this article itself cannot be posted on Facebook? So many questions...
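To make the problem concrete, here is a minimal sketch of the kind of "identical content" matching the AG seems to envision -- a purely hypothetical example, not anything Facebook actually runs:

# Minimal sketch of a naive "identical content" filter (hypothetical example).
# It blocks a post if the offending phrase and the politician's name co-occur.

BLOCKED_PHRASES = [
    ("corrupt oaf", "glawischnig-piesczek"),  # phrase + subject must appear together
]

def should_block(post: str) -> bool:
    """Naive filter: flag a post that repeats the phrase about the politician."""
    text = post.lower()
    return any(phrase in text and name in text for phrase, name in BLOCKED_PHRASES)

# The original defamatory comment is caught...
print(should_block("Eva Glawischnig-Piesczek is a corrupt oaf"))  # True

# ...but so is a news article merely reporting on the case (a false positive):
print(should_block(
    "The CJEU case began after a user called Eva Glawischnig-Piesczek "
    "a 'corrupt oaf' on Facebook."))  # True

# ...while a trivially reworded repeat of the insult sails right through:
print(should_block("Eva Glawischnig-Piesczek is a c0rrupt oaf"))  # False

Even this toy version flags reporting that quotes the insult while missing a trivially tweaked repeat of it, which is exactly the false-positive/false-negative tradeoff the opinion never grapples with.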
The AG then insists that somehow this isn't too burdensome (based on what, exactly?) and seems to make the mistake of many non-technical people, who think that filters are (a) much better than they are, and (b) not dealing with significant gray areas all the time.
First of all, seeking and identifying information identical to that which has been characterised as illegal by a court seised does not require sophisticated techniques that might represent an extraordinary burden.
And, I mean, perhaps that's true for Facebook -- but it certainly could represent a much bigger burden for lots of other, smaller providers. Like us, for example.
Hilariously, as soon as the AG is done saying the filtering is easy, the recommendation notes that (oh right!) context may be important:
Last, such an obligation respects internet users’ fundamental right to freedom of expression and information, guaranteed in Article 11 of the Charter, in so far as the protection of that freedom need not necessarily be ensured absolutely, but must be weighed against the protection of other fundamental rights. As regards the information identical to the information that was characterised as illegal, it consists, prima facie and as a general rule, in repetitions of an infringement actually characterised as illegal. Those repetitions should be characterised in the same way, although such characterisation may be nuanced by reference, in particular, to the context of what is alleged to be an illegal statement.
Next up is the question of blocking "equivalent content." The AG properly notes that determining what is, and what is not, "equivalent" represents quite a challenge -- and at least seeks to limit what may be ordered blocked, saying that it should only apply to content from the same user, and that any injunction must be quite specific about what needs to be blocked:
I propose that the answer to the first and second questions, in so far as they relate to the personal scope and the material scope of a monitoring obligation, should be that Article 15(1) of Directive 2000/31 must be interpreted as meaning that it does not preclude a host provider operating a social network platform from being ordered, in the context of an injunction, to seek and identify, among all the information disseminated by users of that platform, the information identical to the information that was characterised as illegal by a court that has issued that injunction. In the context of such an injunction, a host provider may be ordered to seek and identify the information equivalent to that characterised as illegal only among the information disseminated by the user who disseminated that illegal information. A court adjudicating on the removal of such equivalent information must ensure that the effects of its injunction are clear, precise and foreseeable. In doing so, it must weigh up the fundamental rights involved and take account of the principle of proportionality.
Then, finally, it gets to the question of global blocking -- and basically says that nothing in EU law prevents a member state, such as Austria, from ordering global blocking, and therefore that it can do so -- but that national courts should consider the consequences of ordering such global takedowns.
... as regards the territorial scope of a removal obligation imposed on a host provider in the context of an injunction, it should be considered that that obligation is not regulated either by Article 15(1) of Directive 2000/31 or by any other provision of that directive and that that provision therefore does not preclude that host provider from being ordered to remove worldwide information disseminated via a social network platform. Nor is that territorial scope regulated by EU law, since in the present case the applicant’s action is not based on EU law.
Regarding the consequences:
To conclude, it follows from the foregoing considerations that the court of a Member State may, in theory, adjudicate on the removal worldwide of information disseminated via the internet. However, owing to the differences between, on the one hand, national laws and, on the other, the protection of the private life and personality rights provided for in those laws, and in order to respect the widely recognised fundamental rights, such a court must, rather, adopt an approach of self-limitation. Therefore, in the interest of international comity, (51) to which the Portuguese Government refers, that court should, as far as possible, limit the extraterritorial effects of its injunctions concerning harm to private life and personality rights. (52) The implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person. Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking.
That is a wholly unsatisfying answer, given that we all know how little many governments think about "self-limitation" when it comes to censoring critics globally.
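For what it's worth, the geo-blocking the AG offers as the gentler alternative is itself a fairly blunt instrument. Here is a purely illustrative sketch of what it amounts to (the IP-to-country lookup and the post IDs are hypothetical, not anything Facebook actually does):

# Illustrative sketch of geo-blocking an enjoined post (hypothetical example).

ENJOINED_POSTS = {"post-123"}  # posts a court has ordered hidden
BLOCKED_IN = {"AT"}            # ...but only for viewers in Austria

def country_for_ip(ip: str) -> str:
    """Stand-in for a real GeoIP lookup that maps IP ranges to country codes."""
    return "AT" if ip.startswith("193.170.") else "US"

def can_view(post_id: str, viewer_ip: str) -> bool:
    """Hide an enjoined post only from viewers whose IP geolocates to a blocked country."""
    if post_id not in ENJOINED_POSTS:
        return True
    return country_for_ip(viewer_ip) not in BLOCKED_IN

print(can_view("post-123", "193.170.1.1"))  # False: Austrian IP, post hidden
print(can_view("post-123", "8.8.8.8"))      # True: same post, viewed from elsewhere
print(can_view("post-456", "193.170.1.1"))  # True: post not covered by the injunction

The obvious catch is that anyone in Austria using a VPN, or whose IP geolocates incorrectly, falls right through it -- which is presumably part of why plaintiffs keep pushing for global removal instead.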
And now we have to wait to see what the court says. Hopefully it does not follow these recommendations. As intermediary liability expert Daphne Keller from Stanford notes, there are some serious procedural problems with how all of this shakes out. In particular, because of the nature of the CJEU, it will only hear from some of the parties whose rights are at stake (a lightly edited version of her tweetstorm):
The process problems are: (1) National courts don’t have to develop a strong factual record before referring the case to the CJEU, and (2) Once cases get to the CJEU, experts and public interest advocates can’t intervene to explain the missing info. That’s doubly problematic when – as in every intermediary liability case – the court hears only from (1) the person harmed by online expression and (2) the platform but NOT (3) the users whose rights to seek and impart information are at stake. That's an imbalanced set of inputs. On the massively important question of how filters work, the AG is left to triangulate between what plaintiff says, what Facebook says, and what some government briefs say. He uses those sources to make assumptions about everything from technical feasibility to costs.
And, in this case in particular, that leads to some bizarre results -- including quoting a fictional movie as evidence.
In the absence of other factual sources, he also just gives up and quotes from a fictional movie -- The Social Network -- about the permanence of online info.
That, in particular, is most problematic here. It is literally the first line of the AG's opinion:
The internet’s not written in pencil, it’s written in ink, says a character in an American film released in 2010. I am referring here, and it is no coincidence, to the film The Social Network.
But a quote from a film -- one that is arguably not even true -- seems like an incredibly weak basis for a ruling that could fundamentally lead to massive global censorship filters across the internet. Again, one hopes that the CJEU goes in a different direction, but I wouldn't hold my breath.
Filed Under: advocate general, cjeu, corrupt oaf, defamation, e-commerce directive, eu, eva glawischnig-piesczek, filters, global censorship, intermediary liability, jurisdiction, monitoring
Companies: facebook