how will they deal with sites like reddit and 4chan.
They won't. This ruling was very clear that sites like reddit and 4chan were a different matter. It is a very narrow ruling. Secondly, this ruling doesn't make anything illegal, and doesn't make anyone liable for anything.
All the ECtHR did (or can do) was say that it wasn't a disproportionate interference in this specific case for this site to be found liable for these comments in the way it was. And it came to this conclusion based on a whole host of specific factors - including some assumed from the Estonian Supreme Court's decision.
The ruling explicitly excludes forums and social media, and most comment sites.
Secondly, nothing was banned. The ECtHR didn't say that sites were liable for comments. They said that it wasn't a disproportionate interference with a site's freedom of expression in this specific case for this specific site to be liable for these specific comments to the extent they were found liable.
They still relied on a lot of the Estonian Supreme Court's findings of law, including that the EU's limitations on liability didn't apply (if they had, it would be a different story).
This ruling isn't the end of the world. If it had gone the other way it would have been a great boost to Internet comments etc., but all this ruling does is maintain the status quo, giving national governments the option, in extreme circumstances, of imposing liability on sites for user comments.
The actual text of the ruling is here, for anyone interested.
From what I can tell this only covered sharing between the NSA and GCHQ of stuff from Prism or Upstream. Because the US Government has admitted that these programmes exist, Liberty etc. were able to bring a case in reference to them - unlike the UK's programmes (e.g. Tempora).
GCHQ admitted to having the legal power to access information collected through Prism/Upstream, and it was the limits on this power that weren't sufficiently clear. It took this case for them to admit what the limits were and what the legal position was - although they maintain that they haven't ever actually collected data from these programmes.
This is interesting as it means that GCHQ had powers that were unlawful (even if they never used them), despite repeated reassurances from all over that they didn't. And it was only because of the information provided by Edward Snowden that those powers have now become lawful.
By seeking to reveal unlawful surveillance and data sharing, Snowden has managed to legalise some of it.
Surely he CAN take a better photo of a monkey than a monkey can, can't he?
Probably, but the value in the photo is the idea that it was taken by the monkey. There are any number of good photos of monkeys, but there aren't as many supposedly taken by a monkey. That's what made it famous.
It may depend on the jurisdiction but in some places (e.g. the UK afaik) there isn't really such a thing as the "public domain" in copyright law. It is a term used to describe things that are no longer covered by copyright.
Put simply, copyright property rights are a creation of the law, so they only exist if the law says that they exist. If there is no copyright there isn't anything for anyone - the public, the author, whoever - to own.
The image is registered with the USCO and is a part of a registered Image Rights under Guernsey Ordinance 2012.
This stood out for me, as this is one of the first uses I've seen of Guernsey's Image or Personality Rights. A couple of years ago Guernsey (a small island off the coast of Normandy, population around 65,000) made a big deal about being the first place in the world to have a specific "intellectual property right" covering images and personality.
The idea was that lots of famous people would pay to get their images registered so they could try to enforce these rights everywhere else. A quick check in the Image Rights Register (you have to register to see it) shows that they have managed to achieve an impressive ... 51 registrations, including the monkey one.
It seems the monkey image is one of three images registered in the name of Wildlife Personalities Limited (the company David Slater is the director of). One of them has "Wildlife Personalities" and the monkey photo, another is just the monkey photo, and the third is a second monkey photo.
What I find interesting is that based on a quick check of the relevant law I'm not even sure if the latter two are valid registrations, or that use of the image (in this or other articles) would be an infringement.
Specifically, I don't think the images are distinctive under 28(2) (i.e. widely associated with the company), nor do I think that the monkey photos are actually "images" within the definition of 3(1), as they show a picture of the monkey, not the company - and it is the company that is a 'personnage' and has potential image rights, not the monkey. Plus there's a specific "fair dealing for the purposes of news reporting" exception, and even a general "fair dealing" one.
So while he may be correct that the image is registered, that registration may be invalid, and the use of the image may not be infringing.
But I'm not a Guernsey Image Rights lawyer...
For anyone interested, the full judgment can be found here. As with most legal issues, the situation is slightly more complicated than it may appear.
The right that the child is relying on (technically it is the child, not the ex-wife) is a tort that involves intentionally causing someone psychiatric harm. Intentionally causing someone physical harm has been illegal for a long time; the Victorian-era case referred to made psychiatric harm actionable as well (and has been followed and used since then).
As this was a pre-trial injunction the Court had limited evidence to go on, and had to decide whether the child was likely to succeed at trial - and it found he was, given evidence that the child was likely to read the material (the book is dedicated to him, contains parts addressed to him, is being serialised in a national newspaper, will probably be online, will be referred to in Wikipedia articles etc.), that the material was likely to cause him harm (not the stuff about sexual abuse specifically, but a load of stuff about self-harm), and that the father knew this (there was a clause in the parents' divorce about avoiding harm by disclosing information).
It's a messy case and situation, but the English legal position is generally to stop publication if there's a good chance it will be halted after a trial. The only thing the father (and publisher) lose is time and money - and the child has agreed to compensate them for any financial losses.
Firstly, I'm not sure CoLP is actually trying to build a case against the DNS provider (although they probably think they could get an extradition request if needed); I suspect they think they are being helpful - that DNS providers like EasyDNS don't want to host these sorts of sites, and that CoLP is doing them a favour by politely letting them know their customers are evil criminals.
I imagine CoLP are pretty firm in their belief that the people who run the sites are "evil criminal scum" and therefore no one would want to do business with them.
The other possibility relates to the inclusion of the Serious Crime Act in that list of scary laws. I'm not sure I've seen that one included before, but it covers things like "encouraging or assisting an offence believing it will be committed" - which requires that belief, and the friendly CoLP email may go some way to demonstrating that EasyDNS believed offences would be committed.
Again, assuming any offences are actually occurring. So far Fact Ltd is something like 1 for 4 in prosecutions against website operators.
The UK Government isn't censoring the story part of this (and it is just the Met police claiming that watching the video is illegal; I'm not sure anyone should trust them to determine what the law is) - footage from this video was all over the front pages today, and it was the top story for most of the newspapers.
Although that could just be a case of one law for the newspapers, one law for everyone else...
The articles aren't going anywhere - the search engines are only required to disconnect the article from search terms that count as a person's personal information (in practice, the complainant's name). In theory, anything that is of interest to a historian shouldn't end up being de-linked.
Also, can we stop blaming the CJEU for this? The ruling is perfectly well-reasoned and it is a little difficult to imagine them ruling the other way without ignoring the law. The problem (to the extent that there is one) is with the underlying law (from the 90s) and how search engines either weren't thought of when it was drafted, or how modern search engines never thought they would have to comply with it when they set up.
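As a rough illustration of what that de-linking means in practice (a minimal sketch with made-up URLs and a hypothetical delisting table - not how any real search engine implements it): the article stays in the index and still comes up for other searches; it is only dropped from results when the query contains the delisted name.

```python
# Hypothetical delisting table: a (made-up) name mapped to URLs that must not
# be shown when a query mentions that name. Purely illustrative.
DELISTED = {
    "john smith": {"news.example/2007/fraud-trial-report"},
}

def filter_results(query, results):
    """Drop a result only if the query mentions a name it was delisted for."""
    q = query.lower()
    blocked = set()
    for name, urls in DELISTED.items():
        if name in q:
            blocked |= urls
    return [url for url in results if url not in blocked]

results = ["news.example/2007/fraud-trial-report",
           "news.example/2007/other-story"]

print(filter_results("john smith fraud trial", results))
# -> ['news.example/2007/other-story']   (suppressed for the name query)

print(filter_results("2007 fraud trial report", results))
# -> both articles still appear when the name isn't part of the search
```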
There are a lot of terrorism-related offences in the UK, and quite a few other offences that the police might think cover this. But I would go with ss1-2 Terrorism Act 2006: "Encouragement of terrorism" and "Dissemination of terrorist publications". I don't know if these offences have been tested in cases involving "non-terrorists" just downloading videos, but I imagine that won't stop the police from arresting people if they want to.
Section 3 also includes a notice and takedown procedure for "unlawfully terrorism-related" material.
The UK could do with removing some of its terrorism laws.
I didn't want to go into details because I admit the arguments are fairly subjective: that English and Welsh defamation law was never as bad as it was made out to be, that it didn't really put the burden of proof on a defendant any more than any other civil law does (just a very large financial burden on both sides), and that the changes brought in by the Defamation Act 2013 - like this website operators one - were pretty minor.
The big changes were the introduction of a single publication rule, and a presumption of not having a jury trial. The rest was mostly codifying the existing law (just to make things a little more confusing for defamation lawyers for a few years) and adding defences covering very narrow and rare situations - like this website operators one.
For anyone interested, the regulations the quoted article refers to are here, and there is some guidance from the Government here. I seem to remember picking up on this whole "attacking anonymity" thing back in 2012 when the Bill was being debated, pointing out how silly it is.
The entire section is pretty silly as well - the circumstances when it applies are pretty narrow (there are all sorts of other situations where a website operator would be immune), and the way the regulations are drafted, there may be situations where an operator could remove a comment but, in order to comply with the section, would have to tell the claimant that they hadn't. The regulations were really badly drafted (with only a closed, private consultation).
That said, as far as I know very few website operators knew or cared about them - most major sites have some sort of take-down system already, and defamation claims are so rare that it isn't worth the effort of setting up the automated systems required.
the UK has had terribly draconian defamation laws, that more or less put the burden on the accused to prove what they said wasn't defamation. This was incredibly plaintiff friendly and antagonistic to free speech. The situation was so bad that a whole campaign was mounted to finally update the UK's defamation laws, resulting in a big change that went into effect last year
I may be biased, but I think that almost every statement here is arguably false. But that's another story.
Mosley's really big win over the original newspaper, News of the World, was mostly over the fact that they called it a "Nazi sex party" and he insists that the party wasn't Nazi-themed.
If anything it was the other way around; the newspaper's only real defence was that it was Nazi-themed, and therefore in the public interest to report on. The court found (based on the evidence of Mosley and others involved) that it wasn't really anything to do with the Nazis, and thus there was no public interest in reporting the story (never mind running it with pictures and videos).
One of the big things that was 'interesting' about the case (which wasn't really a landmark one) was that he didn't bother suing the newspaper for libel (over the 'Nazi' part) - which would have been very expensive and time-consuming; instead he went for privacy (essentially saying 'yes it happened, but it was none of your business').
The Court considered this argument and rejected it on the basis that much of the Internet runs on search engines.
In the original Spanish case the information was on an official government (or government-required?) website. But it was one data entry in thousands (if not millions), and no one would be able to find it unless they happened to go to that page. But because the page was indexed by Google, anyone putting the applicant's name into Google would find the page straight away.
Search engines make finding obscure bits of information (and connecting them up with other data - such as a person's name) really easy; that's their point. But it also means they are particularly important when it comes to data protection.
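To make that concrete (purely a toy sketch with a hypothetical gazette site and a made-up name - real search engines are vastly more sophisticated): once the page is in an index, a query for the name jumps straight to the one record buried among thousands.

```python
from collections import defaultdict

# Toy inverted index: term -> set of pages containing that term.
index = defaultdict(set)

def index_page(url, text):
    """Record every term on a page against its URL."""
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Return the pages that contain every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index[terms[0]])
    for term in terms[1:]:
        results &= index[term]
    return results

# One obscure notice about a (made-up) person, buried among thousands of
# routine entries on the same site...
index_page("gazette.example/1998/notices/4417",
           "auction notice for debts owed by jane doe")
for i in range(10_000, 20_000):
    index_page(f"gazette.example/1998/notices/{i}", "routine auction notice")

# ...but a query for the name finds it instantly.
print(search("jane doe"))
# -> {'gazette.example/1998/notices/4417'}
```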
"the sites that actually contain this "privacy-invading" data ... are apparently immune from the very same existing laws"
Nope. The sites have to follow the law as well. The difference is that in some cases the sites' processing of the data (it is about processing, not containing - search engines do process personal data) may fall within an exception to the rules, which may not apply to the search engine.
But going after Google - in a case where they've provided a handy form - is far easier.
Bing was not a party to the proceedings. Can you show the mechanism? ... I also don't understand what statute could possibly be interpreted this way,... I can't see it applying to Bing or Yahoo...
There's a thing called the Data Protection Directive, which requires all EU Governments to introduce a law implementing its provisions across their country - you can read more about the Data Protection Directive, along with some stuff about this new ruling, here.
Article 12(b) of the Directive contains a sort of "right to be forgotten"; that a person can ask anyone covered by the Directive to stop processing their personal data if that processing falls outside the rules in the Directive.
This recent CJEU ruling (which is a reference interpreting the law) said - among other things - that the data processing search engines do is covered by the Directive.
The judges in this case knew exactly what they were doing, what the consequences would be, and how the Internet works. But they can't make up or change the law. Which is why the Commission and Parliament are in the process of coming up with a new Data Protection law - to fix this problem, and many other issues that have arisen with the law since it was drafted in the 90s.
tl;dr: the court case just says that search engines have to follow the law. So Bing and Yahoo, to the extent that they are search engines, will be covered by it.
CJEU rulings are references; the domestic court asks the CJEU some questions as to interpreting EU law. So while Google was one of the parties to the case, their ruling is about the law, specifically that search engines process personal data, so have to abide by the Data Protection rules.
All search engines are covered by the ruling. But we're only hearing about Google because... well, a cynic would say because what's happened is all PR, with no substance.
To correct you a bit, the BBC articles weren't removed from Google search; they were only removed from results for searches on the name of the person who had complained (which we think was one of the commenters). If Google did remove the articles completely, they went way beyond what the law requires of them.
Secondly, it wasn't a UK court ruling but an EU one; and depending on how you define censorship it was arguably pro-censorship, but it was certainly pro-privacy. Although all the court really did was say that search engines weren't immune from the existing laws.
The CJEU ruling says that search engines process data, so have to comply with EU data protection rules.
The specific ruling was in reference to a case against Google, which is why the press have focused on them, but it covers any search engine.