It's always impressive when you see lawyers file lawsuits which suggest they're not entirely familiar with the law. Eric Goldman points us to a lawsuit filed by a Nevada lawyer, Jonathan Goldsmith, suing Facebook and two Facebook users for defamation, due to some mean Facebook comments.
Of course, Section 230 clearly shields Facebook from the actions of its users in defamation cases. As a lawyer, aren't you supposed to be aware of the law? There are some other oddities in the lawsuit as well, such as claiming both slander and libel for the Facebook comments. Normally, slander applies to spoken comments, libel to written. Now, I recall at least one case (in the UK, not the US), where the court suggested online forums were more like slander than libel, but it still seems a bit odd to see it in this case. And, either way, he claims both slander and libel. There are a few other oddities as well, including a claim that the mean comments on Facebook caused him to "seize" advertising on the site. I'm assuming he meant "cease"?
Yet again, it seems that even when people say mean stuff about you online, which might not be true, it's better to just shrug it off than to file a lawsuit, drawing a lot more attention to the mean things... and anything else that might get attention.
Rose M. Welch points us to a report from Poynter about the site Bedbugregistry.com. As you might have heard, there's been a lot of attention paid to bedbug infestations lately, and that site is often cited by journalists covering the story. According to the Poynter report, hotel owners have often threatened to sue the site because they don't like seeing their hotels on the list. I'm sure they don't, but it's disappointing that the Poynter article doesn't point out that the operator of the site, Maciej Ceglowski, is protected by Section 230. That seems like an important element of the story. As it is, it kind of suggests that a hotel owner could have a lawsuit. While they could file one, it's unlikely that it would get very far, due to the clear safe harbors in Section 230.
In the US, for the most part, we have legal safe harbors that codify (usually) common-sense principles about liability. The basic idea is that a third-party service provider or tool should not be held liable for what users do with those tools. Unfortunately, there are some problems with how this has been interpreted in the US at times, but at least the basics of reasonable safe harbors are there. And this is important. It makes no sense to blame a service provider for the actions of users, but oftentimes those third parties are the primary targets of legal action, because they have more money and are an "easier" target. Over in Russia, apparently, there are no such legal safe harbors in place, and the biggest Russian internet companies, along with Google, all put out an open letter to the entertainment industry saying that it's time to stop blaming the internet companies for the actions of their users. It seems there have been some lawsuits in Russia against these sites, and they're not happy about it.
Part of the open letter appears to be an attempt to set up, without a legal basis, an agreement for a notice-and-takedown process of sorts. This is a bit disappointing, of course. We've had such a notice-and-takedown process in the US for a while, and we've seen that it's quite frequently abused to take down free speech and fair use content for reasons that have nothing to do with actual copyright issues. At the very least, it would have been better for these service providers to offer a notice-and-notice provision, where the users are notified that a copyright holder believes their use is infringing, letting them respond before the content is just taken down.
Back at the G20 meetings in July in Toronto, there were numerous stories of police overreacting and arresting protesters with little reason whatsoever. Perhaps the most noteworthy story that got attention was the story of "Officer Bubbles," the name given to a police officer, named Adam Josephs, who threatened to arrest a woman for assault if the bubbles she was blowing landed on him. You can see the video here:
Officer Bubbles became a bit of an internet phenomenon, and others built on it, as normally happens with internet memes. Apparently, one person made cartoon versions of Officer Bubbles arresting various famous people, such as President Obama and Santa Claus. Because of that, Officer Adam Josephs has now filed a $1.2 million defamation lawsuit. The press reports were a bit unclear, with some saying he was actually suing YouTube, and others saying he was just asking YouTube to hand over the names. However, Howard Knopf links to what certainly appears to be the legal filing in question, and Josephs is suing YouTube and claiming that it's responsible for publishing the videos and comments:
While Canada (for whatever reason) does not have a Section 230-type safe harbor, protecting service providers from liability for the actions of their users, this still seems misguided. It's pretty ridiculous to claim that YouTube is somehow responsible for these videos or comments. However, as you can see in the filing above, in every instance, Officer Josephs appears to accuse YouTube of "publishing or republishing" the works, thus making it liable. One would hope that Google would fight strongly against such ridiculous claims.
As you read through the lawsuit, some of the YouTube comments Josephs is suing over are pretty silly, and it's difficult to see how they're worth a lawsuit. I mean, here's one of the comments:
"true -- probably wears the sunglasses while looking at himself in the mirror!!!"
Now, that may be a false statement (though, can he really say he never looked at himself in the mirror with sunglasses?), but does it really rise to the level of defamation? Similarly, another of the comments he's suing over reads:
"officer bubbles probably looks at himself in the mirror a lot."
Again, is that really defamatory?
Other comments certainly appear to be mostly opinion, rather than any sort of statement of fact:
"It's a shame that the police are becoming uniformed bullies. It's bad when the local people tell them to leave their community."
and
"Nice going Officer Josephs, you are a real hero and a true testament to the sorry state of law enforcemtn here in Canada, and a fine example of the kind of policing peaceful people had to endure during the G20 farce."
Even in cases where the comments were a bit more strident, it's hard to see how they could be seen as anything more than angry venting. The Toronto Star spoke to one of the (still anonymous) commenters, who said he doesn't even remember what he wrote, but he was just angry about what he had seen. According to the lawsuit, his comment was:
"If this steroid addicted Nazi has children, they must be sooooo embarrassed."
In other words, your typical YouTube-style comment. Sure, you could argue that claiming he was "steroid-addicted" and a "Nazi" might qualify as defamation, but taken in context, would anyone reading that comment really believe that the commenter knew Officer Josephs and was actually alleging he was addicted to steroids and a Nazi, or would they assume that it was just someone upset by the way Officer Josephs acted?
In the meantime, by filing this lawsuit, about the only thing that Officer Bubbles has done is call a lot more attention to his initial actions and reinforce the idea that he seems to totally overreact to rather benign situations. But, I guess, if you're going to arrest a girl for blowing bubbles in your direction, suing YouTube (for being a 3rd party platform) and suing people for mocking comments that no one actually believes probably seems equally intelligent.
btr1701 passes on the news of how Italy's tourism minister apparently has absolutely no sense of humor. There's an app in the Apple iTunes store for iPhones and iPads called "What Country," which summarizes every country in quick stereotypical snippets. It's meant to be amusing. For example:
Britain is characterised by "tea, weird sense of humour, football hooligans and rain", while Germany is summed up with "beer, discipline and autobahns". China is reduced to "overpopulation, kung fu, Great Wall, Tibet and tea ceremony", while the most defining characteristics of the US are "melting pot, hamburger and the American dream".
As for Italy, well, it's summarized as "pizza, the Mafia and scooters." And, apparently, Italy's tourism minister, Michela Vittoria Brambilla, has such a lack of humor that she declared the app "offensive and unacceptable," demanded that Apple remove it from the store and (most ridiculous of all) is asking the state's attorney to take legal action against the author. Apparently, someone thinks it's illegal in Italy to make a joke about Italy.
We recently wrote about yet another (the third one we know of) ruling in France that found Google liable for what "Google Suggest" suggested. Google Suggest, of course, is the autocomplete function that tries to guess what you're searching on, based on what other people searched on after typing the same letters. Of course, more recently, that's been expanded to Google's Instant Search, where it actually shows full results as you type. We suggested that the problem here was that French courts did not understand the technology.
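Since the whole dispute is over whether courts grasp how mechanical this feature is, it may help to see just how little "editorial" judgment such a system needs. Here's a minimal sketch in Python of a popularity-based autocomplete -- the query log and function names are invented for illustration, and Google's actual system is obviously far more sophisticated, but the basic shape is the same: count past queries, then surface the most common ones matching what the user has typed so far.

```python
from collections import Counter

# Hypothetical aggregate query log -- in a real system this would be
# millions of past user searches, not a hand-written list.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web design", "weather radar", "web design",
]

# Tally how often each query was searched.
counts = Counter(query_log)

def suggest(prefix, k=3):
    """Return the k most popular past queries starting with `prefix`."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    # Sort by descending popularity, breaking ties alphabetically.
    matches.sort(key=lambda item: (-item[1], item[0]))
    return [q for q, _ in matches[:k]]

print(suggest("we"))   # → ['weather today', 'web design', 'weather radar']
```

The point of the sketch is that the suggestions fall out of aggregate user behavior alone: no human at the search engine ever decides that a particular name should appear next to a particular word, which is exactly the argument the French courts rejected.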
Journalist Mitch Wagner, who I tend to agree with more often than not, claims that we got it wrong, and that the French courts do understand the technology perfectly fine -- and they still decided to side against Google (and, separately, we should mention, against Google's CEO, as if he had anything to do with the suggestions in question):
But actually the French court understands what's going on. Google raised just those issues in its defense, and the court disagreed. "The court ruling took issue with that line of argument, stating that 'algorithms or software begin in the human mind before they are implemented,' and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention," according to Computerworld.
He goes on to suggest that it can (sort of) be compared to a product liability case, where if you make a product that does something "bad" (such as suggest libelous search results) that it should be your responsibility:
Is it appropriate for Google to build a search engine that automatically generates results with no intervention to be sure those results aren't libelous, defamatory, or otherwise harmful?
This is a problem that goes beyond people accused of crimes. Many companies are unhappy with the results that come up when you search on industry terms. If you make hats, and you're not on the first page of results that come up when searching the word "hats," then you're dissatisfied with Google. Does that make Google wrong? Does it matter if your hats are, in fact, better and more popular than those of companies with search terms ranked higher?
I'm sorry, but I don't buy it. I understand Wagner's point, but I think the French courts still don't really understand the issues. It's not a question of whether or not it's appropriate, it's a question of whether or not it's even possible. How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous? As for the different rankings, those are opinions, which should be protected speech (last we checked). If Google's results aren't good, that's an opening for another search engine. Blaming Google because you don't like how the algorithm works is still a mistake, and I don't think the French courts really recognize this at all, no matter what they say.
Not again. Earlier this year, we wrote about a couple of lawsuits in France involving companies that wanted to blame Google for the results that pop up via Google Suggest (the feature that simply looks at what you're typing, and suggests the most popular searches based on what you've typed). Tragically, we noted that the courts wanted to find Google liable for these suggestions, and it looks like yet another French court has made that same mistake. This is, quite clearly, blaming Google for what its users are doing. The algorithm for Google Suggest is just taking the aggregate results of what people are actually searching on, and using that to make suggestions. There's no editorial control by Google, and yet this court not only found Google liable for those suggestions, but ordered the removal of all the "offending" query suggestions. In this case, it involved a guy who had been convicted for "corruption of a minor," who got upset "because the plaintiff's name was returned in response to queries on search terms such as rape, satan worshipper, and other things." At some point, you hope that courts and politicians will understand the basics of how technology works, but it seems like we may be waiting a long, long time.
Just a few weeks after a German court ruled that YouTube was somehow responsible for copyright infringement done by users, a Spanish court has ruled in the exact opposite manner. Basically, the court properly recognized that Google is the tool that is used, and that it should not be responsible for the infringing behavior of its users. The court also properly notes that YouTube makes it easy (I'd argue, perhaps too easy) to remove content that a copyright holder believes is infringing. This is, of course, similar to the Viacom ruling here in the US.
It's also no surprise that a Spanish court has ruled this way. Spanish courts have ruled over and over and over and over again that liability should be applied towards the actual infringer, rather than the third party tool provider. This is basic common sense, but it's resulted in a misleading media campaign by the entertainment industry falsely claiming that Spain is somehow weak on copyright.
Properly applying liability to the party actually responsible is not being "weak," it's being accurate and fair. It's nice to see Spain recognize this. Hopefully, Germany figures this out at some point as well.
Last year, we wrote about a court ruling in Argentina that found Google and Yahoo liable for defamation claims, after a celebrity was upset that searches on her name had results that pointed to pornographic websites. There had actually been a similar decision in Argentina the year before as well. It seems silly to blame search engines if people don't like the search results on their name, but that's what happened. Thankfully, however, in an appeal of the first case we linked to above, involving Virginia Da Cunha, the court found that the sites could only be held liable if they were made aware of the "illegal content," and then failed to remove it. In other words, the court is effectively using a notice-and-takedown safe harbor setup. There are still problems with that, but it's a hell of a lot better than automatically fining Google and Yahoo even if those companies had no idea that someone was upset with the search results. Still, it's not great. As another article notes, without an official safe harbor, the only effective way to win cases like this is to have the money to go to court. Even Google and Yahoo are still fighting a bunch of similar lawsuits and will have to keep going through the process, until there's a real safe harbor in place.
We spend a lot of time discussing bad legislation around here, so every so often it's nice to hear of some good legislation. Last month, we noted that an anti-libel tourism bill was making its way through Congress, which would protect US citizens from foreign libel judgments based on laws that went against the First Amendment. Thankfully, that bill has now been signed into law -- and it may be even better than we initially expected. That's because, at the urging of folks such as Public Citizen, Congress inserted a bit into the law that also extends the important Section 230 safe harbors to this bill.
As you hopefully know, Section 230 safe harbors make sure that liability is properly applied. That is, it says you can't blame an online service provider for actions by its users. This is incredibly important if you believe that liability should be applied to the appropriate parties. However, very few other countries have such safe harbors, leading to regular lawsuits against service providers (often US service providers) for the actions of their users. Now, this law protects US service providers from such judgments.
Perhaps just as important as having this extension in the law is the floor discussion about including Section 230 safe harbors, because that's now a part of the Congressional record, where various elected officials make explicit the reasons why Section 230 protections make sense:
The purpose of this provision is to ensure that libel tourists do not attempt to chill speech by suing a third-party interactive computer service, rather than the actual author of the offending statement. In such circumstances, the service provider would likely take down the allegedly offending material rather than face a lawsuit. Providing immunity removes this unhealthy incentive to take down material under improper pressure.
This is important, especially at a time when some have been attempting to seriously cripple Section 230 safe harbors by pretending they serve some other purpose outside of the proper application of liability.