Was A French Court Correct In Blaming Google For Its Google Suggest Suggestions?
from the still-not-convinced dept
We recently wrote about yet another ruling in France (the third one we know of) that found Google liable for what "Google Suggest" suggested. Google Suggest, of course, is the autocomplete function that tries to guess what you're searching for, based on what other people searched for after typing the same letters. More recently, that's been expanded into Google Instant, which actually shows full results as you type. We suggested that the problem here was that French courts did not understand the technology. Journalist Mitch Wagner, who I tend to agree with more often than not, claims that we got it wrong and that the French courts understand the technology perfectly well: they still decided to side against Google (and, separately, we should mention, against Google's CEO, as if he had anything to do with the suggestions in question):
But actually the French court understands what's going on. Google raised just those issues in its defense, and the court disagreed. "The court ruling took issue with that line of argument, stating that 'algorithms or software begin in the human mind before they are implemented,' and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention," according to Computerworld.
He goes on to suggest that it can (sort of) be compared to a product liability case: if you make a product that does something "bad" (such as suggesting libelous search results), it should be your responsibility:
Is it appropriate for Google to build a search engine that automatically generates results with no intervention to be sure those results aren't libelous, defamatory, or otherwise harmful?
I'm sorry, but I don't buy it. I understand Wagner's point, but I think the French courts still don't really understand the issues. It's not a question of whether or not it's appropriate; it's a question of whether or not it's even possible. How does Google build a search engine that simply knows whether a suggestion might be considered libelous by a court of law? As for the different rankings, those are opinions, which should be protected speech (last we checked). If Google's results aren't good, that's an opening for another search engine. Blaming Google because you don't like how the algorithm works is still a mistake, and I don't think the French courts really recognize this at all, no matter what they say.
This is a problem that goes beyond people accused of crimes. Many companies are unhappy with the results that come up when you search on industry terms. If you make hats and you're not on the first page of results for the word "hats," then you're dissatisfied with Google. Does that make Google wrong? Does it matter if your hats are, in fact, better and more popular than those of companies ranked higher for that search term?
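For readers unfamiliar with how suggestion features of this kind typically work, here's a minimal sketch of a suggester that ranks completions purely by how often other users have searched for them. This is my own illustration with an invented query log, not Google's actual implementation:

```python
from collections import Counter, defaultdict

# Hypothetical query log; in reality this would be millions of searches.
query_log = [
    "hats for sale", "hats for sale", "hats wholesale",
    "harry potter", "hats for sale",
]

# Build a prefix -> query-frequency index from past searches.
prefix_index = defaultdict(Counter)
for query in query_log:
    for i in range(1, len(query) + 1):
        prefix_index[query[:i]][query] += 1

def suggest(prefix, k=3):
    """Return the k most frequent past queries starting with prefix."""
    return [q for q, _ in prefix_index[prefix].most_common(k)]

print(suggest("hat"))  # → ['hats for sale', 'hats wholesale']
```

Note that nothing in this sketch "knows" anything about the meaning of a query: the suggestions are a pure popularity count over what users previously typed, which is exactly the point at issue in the case.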
Filed Under: defamation, france, liability
Companies: google
Reader Comments
protected speech
Maybe in the US, but not in France, where some opinions can't legally be expressed. Racist speech, for instance.
France
I love how software is purely an idea... until it's patent time; then it's an "invention".
Product liability
Such a liability, if taken seriously, would shut down the whole of computing. All software has bugs and therefore can produce undesired results. Most software vendors have a pretty all-embracing liability disclaimer in their license agreements - and for good reason. Only a small subset of safety-critical software is tested to a high enough standard to allow liability to be accepted - and even then there are occasional problems (the RAF Chinook crash that killed Northern Ireland security personnel, for example).
This strand of computing could not survive on its own. Do we really want to go back to 1939?
http://edition.cnn.com/2010/TECH/web/09/29/google.instant.blacklist.mashable/index.html?hpt=Sbin
There is no possible way to know what is libelous or not at the moment without human intervention, and even then it is not possible to do with 100% certainty. The French apparently just don't want the feature. There is no viable solution to this problem that doesn't involve labor-intensive, cost-intensive, and inevitably unreliable measures, so the only sensible thing to do is to remove the feature for French users.
The ancestor of current French law is the Napoleonic Code.
In the Napoleonic Code there was a de facto presumption of guilt.
Psychologically, if the law does not explicitly permit something, it is forbidden.
US law is based on English law.
In English law there was a de facto presumption of innocence.
Philosophically, if something is not forbidden in law, it is permitted.
What ?!
The nerve of those people - I'll sue them for that !
"In the Napoleonic Code there was a de facto presumption of guilt."
This is totally false!
In France there is also the "présomption d'innocence":
http://en.wikipedia.org/wiki/Presumption_of_innocence
"Psychologically, if the law does not explicitly permit something, it is forbidden."
Hahaha, can you really imagine that?
In France there is also the "présomption d'innocence"
http://en.wikipedia.org/wiki/Presumption_of_innocence
False or not, are you seriously citing Wikipedia as a credible source of information?!
French?
A) Completely Ignorant of Computer Matters
B) Completely Clueless in General
C) Own stock in the companies bringing the lawsuits
D) Drunk
E) Have a croissant up their butts
F) All of the Above
Re: Off topic
Ummmm. I hear a lot of people put down Wikipedia as not credible. Usually those people have done no research and have nothing to dispute the information they are presented from Wikipedia.
I'm just bringing this up because I generally use Wikipedia and then cross-reference with other sources, depending on the topic. I've found it to be very accurate. So I'm wondering why so many people with no alternatives feel otherwise.
Re:
Are you seriously that out of touch?
Re: French?
G) Enraged at being called, "surrender monkeys"
H) Secretly ashamed at really being surrender monkeys
I) Jealous at the influence of the USA and the comparative insignificance of France
J) Mad as hell that English is more important than French
K) Frustrated that they are too stupid to invent their own search engine that is anywhere near as good as Google
L) Generally vindictive and spiteful
M) Full of themselves
N) Suffering from numerous other personality defects
Google is making the claim that this could be what you want to search for; they are not making the statement that Bill Clinton is a moron at all.
How could their claim that "this could be what you want to search for" be untrue?
The "Automated" Defense
Say, for example, that I rig up a shotgun with a tripwire on my property to keep "bad guys" out. If it then winds up killing neighborhood children who get on my lawn, should I be able to just say, "Hey, it's not my fault. It's an automated device! There's no way I can make it actually know who's a bad guy and who isn't!"? Or should I be held responsible anyway, on the grounds that I shouldn't have implemented such a device in that case? But that might discourage my "innovation".
So, should "automation" be a defense, as Mike contends, or not?
Re: The "Automated" Defense
The google example isn't really automated either, it's triggered by people searching for text.
The question is really if the tool maker should be held responsible for the actions of the tool's users. The search tool doesn't do anything automatically.
Re: The "Automated" Defense
Even if you want to equate the two, with the booby trap you are actively setting up a system that can do physical harm. What's happening with Google is that users (not Google itself) are searching for "XX is YY" (where XX is a name and YY is a negative adjective). The algorithm then notes that so many thousands or millions of people have searched for "XX is YY", and if I start typing "XX is" it will add in "YY", because more than likely, that's what I'm searching for.
Here's an example I thought up. Say a library hosts books and has a fancy robotic mechanism that picks up and deposits books in front of me based on what I search for. Say I type "The Holocaust Didn't Happen" or "Politician X is a Hypocrite" into the computer, and it dumps books that are about what I searched for, and it gets more accurate as users tell the computer "Yes, this book is pertinent to the topic". Is the library at fault? They didn't write the books; they merely have them on a shelf. The computer doesn't know it's libelous. Should the books be consigned to obscurity because it's against the law to search for something libelous?
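That hypothetical library robot can be sketched in a few lines (purely illustrative; the book titles and queries are invented). Note that the system only counts past users' relevance confirmations and has no notion of whether a topic is libelous:

```python
from collections import defaultdict

# (query, book) -> number of times users confirmed the book as relevant.
relevance = defaultdict(int)

def confirm(query, book):
    """A user tells the system this book was pertinent to the query."""
    relevance[(query, book)] += 1

def fetch(query, catalog):
    """Return the catalog ranked by past relevance confirmations."""
    return sorted(catalog, key=lambda book: -relevance[(query, book)])

catalog = ["Book A", "Book B"]
confirm("some topic", "Book B")
print(fetch("some topic", catalog))  # → ['Book B', 'Book A']
```

The ranking improves purely from user feedback; there is nowhere in this loop where the machine could assess truth or defamation.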
Re: The "Automated" Defense
You didn't really mean to make that argument, did you?
We're not saying it's okay because it's "automated," but because it's a function of what the overall users do. Users searched on those terms, it's accurate.
Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act. Reporting what people are searching for is not.
Re: The "Automated" Defense
> that does bad things, should I not then be
> responsible for what it does?
Your argument assumes that factually showing what people around the world are searching for is a "bad thing".
Here's a scenario the French court didn't think about
For example, take the David Beckham case du jour, where a prostitute has claimed he paid her to sleep with him. What if, in twenty years, I write a biography of David Beckham and I want to cover this episode in his life? So I type "David Beckham prostitute" into Google, and it spits out links to her blog or something, which will be kept in some archive.
Doesn't this come a little too close to questioning safe harbour provisions? I can't address the second part of this statement (though it would surprise me greatly if more than a handful of possible suggestions had been subject to human intervention), but this blog's comment system, for example, was conceived in the human mind too. Yet if I were to write something libellous here, Techdirt would rightly not be held liable despite republishing my comments to the world.
Google already screens Google Instant results to keep them inoffensive, so screening results is not ridiculous.
I'm not saying I agree with the French courts here. I'm concerned that we're creating a future where public perception trumps reality, and if most of the Google-using population believes a thing to be true, Google will spit it back, even if that thing is actually false.
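A term blacklist of the sort reportedly used for Google Instant is indeed trivial to implement; what it cannot do is determine whether a statement is true or false, which is what libel actually turns on. A sketch (the blacklist contents and suggestions here are invented examples):

```python
# Hypothetical suggestion screen: drop any suggestion containing a
# blacklisted term. This filters *words*, but cannot tell whether a
# statement like "X is a crook" is true or false -- and falsity is
# what makes a statement libelous.
BLACKLISTED_TERMS = {"scammer", "crook"}  # invented example terms

def screen(suggestions):
    """Keep only suggestions free of blacklisted terms."""
    return [s for s in suggestions
            if not any(term in s.lower() for term in BLACKLISTED_TERMS)]

print(screen(["acme hats reviews", "acme hats scammer"]))  # → ['acme hats reviews']
```

So word-level screening is cheap; fact-level screening of the kind the court seems to expect is not something a filter like this can provide.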
Re: Re: The "Automated" Defense
It's a question, not an argument. Please learn the difference. In fact, it's actually questioning *your* argument. Sorry.
We're not saying it's okay because it's "automated,"...
"How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous?" seems to be asking that question. Likewise, how does one rig up a shotgun booby-trap that "simply knows" when it is firing in a way that might be considered by a court of law to be a reasonable level of force in a particular situation? Could it be that if one can't then maybe, just maybe, they shouldn't be rigging up such a thing?
Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act.
So you're telling me that it is illegal to shoot people, in any situation, where you are out there in California? I'm not familiar with California law so I'll just have to take your word on that, but I would ask you: if it is illegal to shoot people in any situation in California, then why is it that every California cop I've seen has a gun? Decoration? Interesting.
Now, where I live it is not illegal to shoot people in certain situations, so as far as I'm concerned your argument to the contrary fails on factual grounds. But the courts here have ruled that setting up shotgun booby traps is illegal because they may fire even when they shouldn't. In other words, automation is no excuse around here for doing something that would otherwise be illegal.
Reporting what people are searching for is not.
Apparently a French court disagrees with you. Somehow, I have a feeling that they're not letting you dictate otherwise to them, either.
Re: Re: The "Automated" Defense
No, it doesn't. However, that does seem to be the determination that was made by the court. If you have a problem with that, then perhaps you should address your concerns to the court.
Re: Re: The "Automated" Defense
Straw man alert: nobody was saying it was.
Even if you want to equate the two...
Umm, no, that was *your* strawman.
Re: Re: Re: The "Automated" Defense
> > the world are searching for is a "bad thing".
> No, it doesn't. However, that does seem to be the determination that
> was made by the court.
And such a ruling is logically and philosophically incompatible with a free society, so I guess the French judiciary has tacitly admitted that they're no longer living in one.
> If you have a problem with that, then perhaps you should address your
> concerns to the court.
Or I could just address them here, like I did. How's that?
Re: Re: Re: Re: The "Automated" Defense
I think the word is "ineffectual".