from the so-this-is-really-bad dept
Yes, it's time for this week's edition of "how is the EU fucking up the internet." Over the summer we wrote about an important case in front of the Court of Justice of the European Union (CJEU), warning that the Advocate General's recommendations would lead to mass filtering and censorship of the internet, seemingly going against existing law and precedent that support freedom of expression and hold that automated filtering violates human rights. Welp. So much for that.
The CJEU's ruling in Glawischnig-Piesczek v. Facebook is really, really bad. If you don't recall, the case dealt with an Austrian politician, Eva Glawischnig-Piesczek, who got really upset about comments on Facebook that called her a "lousy traitor of the people," a "corrupt oaf," and a member of a "fascist party." While one may regret this level of political discourse, it is pretty typical. I mean, that last one is basically a part of Godwin's Law, which shows just how common it is for such an accusation to be thrown around. However, Glawischnig-Piesczek sued Facebook. In the US, thanks to Section 230, Facebook clearly wouldn't be liable for such statements. In Europe, under the much weaker protections of the E-Commerce Directive, an Austrian court found not only that Facebook must remove such content, but that the removal should be global and that "equivalent content" must be blocked as well.
As we noted back in June, the Advocate General's thoughts on the case were muddled by technical illiteracy (and even some basic legal confusion). The opinion even suggested that monitoring your own content should wipe out your liability safe harbors (which is totally backwards), and then made a weird comment about how moderation efforts need to be "neutral."
Tragically, rather than fix things, the CJEU decided to just come up with the worst possible outcome. It says that it's perfectly fine and dandy under EU law for a national court to order any website provider to install filters:
Given that a social network facilitates the swift flow of information stored by the host provider between its different users, there is a genuine risk that information which was held to be illegal is subsequently reproduced and shared by another user of that network.
In those circumstances, in order to ensure that the host provider at issue prevents any further impairment of the interests involved, it is legitimate for the court having jurisdiction to be able to require that host provider to block access to the information stored, the content of which is identical to the content previously declared to be illegal, or to remove that information, irrespective of who requested the storage of that information. In particular, in view of the identical content of the information concerned, the injunction granted for that purpose cannot be regarded as imposing on the host provider an obligation to monitor generally the information which it stores, or a general obligation actively to seek facts or circumstances indicating illegal activity, as provided for in Article 15(1) of Directive 2000/31.
That's basically the entire discussion -- which is troubling -- because it doesn't recognize (at all) how bad such filters might be. First of all, for smaller sites, such filters may be prohibitively expensive. Second, such filters are notoriously unreliable. They produce both false positives and false negatives, and the court doesn't even seem to contemplate how this might lead to overcensorship, or what should happen when the filters fail to catch a reposting of the content.
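To see why, it helps to look at what an "identical content" filter actually is. Here's a minimal sketch in Python (the post text is made up, and real platform systems are far more elaborate, but the failure mode is the same):

```python
import hashlib

# Hypothetical text of the post a court has declared illegal.
ADJUDICATED = "She is a lousy traitor of the people"
ADJUDICATED_HASH = hashlib.sha256(ADJUDICATED.encode()).hexdigest()

def is_identical(post: str) -> bool:
    """Block only byte-for-byte identical reposts of the adjudicated text."""
    return hashlib.sha256(post.encode()).hexdigest() == ADJUDICATED_HASH

print(is_identical("She is a lousy traitor of the people"))   # True: blocked
print(is_identical("She is a lousy traitor of the people!"))  # False: one added
# character and the repost sails through -- a false negative
```

Every attempt to paper over that gap -- lowercasing, stripping punctuation, fuzzy matching -- widens the net, and a wider net is precisely what starts sweeping in lawful posts.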
But it gets worse from there. Even if you argue that content identical to what a court has already adjudicated illegal shouldn't be hard to filter, how do you deal with "equivalent" content? The CJEU basically says "meh, I'm sure it can be done."
It is apparent from the information set out in the order for reference that, in using the words ‘information with an equivalent meaning’, the referring court intends to refer to information conveying a message the content of which remains essentially unchanged and therefore diverges very little from the content which gave rise to the finding of illegality.
In that regard, it should be made clear that the illegality of the content of information does not in itself stem from the use of certain terms combined in a certain way, but from the fact that the message conveyed by that content is held to be illegal, when, as in the present case, it concerns defamatory statements made against a specific person.
It follows therefore that, in order for an injunction which is intended to bring an end to an illegal act and to prevent it being repeated, in addition to any further impairment of the interests involved, to be capable of achieving those objectives effectively, that injunction must be able to extend to information, the content of which, whilst essentially conveying the same message, is worded slightly differently, because of the words used or their combination, compared with the information whose content was declared to be illegal. Otherwise, as the referring court made clear, the effects of such an injunction could easily be circumvented by the storing of messages which are scarcely different from those which were previously declared to be illegal, which could result in the person concerned having to initiate multiple proceedings in order to bring an end to the conduct of which he is a victim.
Conceptually, you can see how the court would think this. Obviously, it's no good if the court has declared that a certain statement is illegal, and someone tries to get around that with a tiny tweak of a letter or something. But, how the hell are companies supposed to filter for that? Here, the CJEU tries to split the baby by saying that any injunction should spell out what counts as equivalent content:
In light of the foregoing, it is important that the equivalent information referred to in paragraph 41 above contains specific elements which are properly identified in the injunction, such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. Differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content.
In those circumstances, an obligation such as the one described in paragraphs 41 and 45 above, on the one hand — in so far as it also extends to information with equivalent content — appears to be sufficiently effective for ensuring that the person targeted by the defamatory statements is protected. On the other hand, that protection is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies.
In other words, it won't be up to the platform to determine what's equivalent (mostly), and it won't have to "generally monitor"... except that it does need to monitor generally for whatever words and phrases might be in the injunction.
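If you're wondering what those "automated search tools and technologies" actually look like in practice, the honest answer is: a similarity score and a threshold. Here's a rough sketch (the phrase, the threshold, and the example posts are all invented; real systems are fancier, but the dilemma is identical):

```python
from difflib import SequenceMatcher

# Hypothetical phrase specified in the injunction.
ADJUDICATED = "she is a lousy traitor of the people"

def is_equivalent(post: str, threshold: float = 0.8) -> bool:
    """Flag posts whose text scores 'close enough' to the adjudicated phrase."""
    score = SequenceMatcher(None, ADJUDICATED, post.lower()).ratio()
    return score >= threshold

# A trivially reworded repost scores below the threshold (false negative)...
print(is_equivalent("What a lousy, people-betraying traitor she is"))  # False

# ...while a post *denying* the statement scores above it (false positive).
print(is_equivalent("She is NOT a lousy traitor of the people"))       # True
```

Raise the threshold to spare the denial and every reworded repost escapes; lower it to catch the rewording and you start blocking unrelated speech. There is no value at which a string-similarity score understands meaning -- which is exactly the "independent assessment" the court insists platforms won't have to make.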
Again, all of this is done without even the slightest thought about how it's likely to be abused or (more likely) to produce tons of false positives and false negatives that could create massive censorship.
Oh, and finally, the court says that such blocking orders can be global:
In order to answer that question, it must be observed that, as is apparent, notably from Article 18(1), Directive 2000/31 does not make provision in that regard for any limitation, including a territorial limitation, on the scope of the measures which Member States are entitled to adopt in accordance with that directive.
Consequently, and also with reference to paragraphs 29 and 30 above, Directive 2000/31 does not preclude those injunction measures from producing effects worldwide.
The only limitation on that is based on what each member state in the EU decides for its own laws implementing the E-Commerce Directive.
As Stanford Law's Daphne Keller explains in this excellent thread, a huge part of the problem here is that the users of Facebook are not represented in court -- so the court was only determining whether mandated filters would harm Facebook's rights, and it decided they wouldn't. But that ignores the fact that these decisions will impact the rights of basically all of Facebook's users.
Under this ruling, it seems likely that Facebook users in Austria won't even be able to see this article, because it mentions the "banned words" that were used to describe the politician -- and those have been declared illegal. That's a key issue here: declaring "words" illegal without regard to context is a huge problem, and filtering removes even the possibility of examining the context in which those words are used.
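To make that concrete: a filter keyed to the injunction's phrases can't tell an attack from a report about the attack. A sketch (with invented article text standing in for coverage like this post):

```python
# Hypothetical phrases listed in the injunction.
BANNED_PHRASES = ["lousy traitor of the people", "corrupt oaf"]

def must_block(post: str) -> bool:
    """Block any post containing an injunction-listed phrase."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# A news story *reporting on the case* trips the filter just as surely as
# the original insult would -- the string scan has no notion of context.
article = ('An Austrian court held that calling the politician a '
           '"lousy traitor of the people" and a "corrupt oaf" was defamatory.')
print(must_block(article))  # True: the report itself gets filtered
```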
Separately, if you're thinking that it sounds like this ruling conflicts with the big CJEU ruling from last month denying a French attempt to have the Right to be Forgotten apply globally, that's not quite true. The French/Google ruling was more limited in scope, and basically left as an open question whether member states could craft a right to be forgotten law that might apply globally. In this case, it found that Austria's defamation law could.
Either way, this is going to create quite a mess. Once again, it will make it nearly impossible for small internet platforms to feel comfortable operating in Europe. At any point, a court might order them to filter out certain content, to filter "equivalent" content as well, and to do so globally. That's absolutely true in Austria, and could be true under other countries' laws as well. That's a recipe for censorship and widespread abuse. It will also make it difficult for smaller internet platforms in the EU to ever become larger internet platforms.
Filed Under: austria, blocking, cjeu, defamation, equivalent content, eu, eva glawischnig-piesczek, filtering, jurisdiction, monitoring
Companies: facebook