Families Of Orlando Shooting Victims Sue Twitter, Facebook, And Google For 'Supporting Terrorism'
from the worst-attempt-yet dept
Remember that time when Google, Twitter, and Facebook helped shoot up a nightclub in Orlando, Florida? Me neither. But attorney Keith Altman does. He's representing the families of three of the victims of the Pulse nightclub shooting in a lawsuit alleging [sigh] that these tech companies are somehow responsible for this act of terrorism.
The lawsuit, first reported by Fox News, was filed Monday in federal court in the Eastern District of Michigan on behalf of the families of Tevin Crosby, Javier Jorge-Reyes and Juan Ramon Guerrero.
The lawsuit is the latest to target popular Internet services for making it too easy for the Islamic State to spread its message.
Like many similar lawsuits, this one is doomed to fail. First off, Section 230 immunizes these companies from being held responsible for third-party content. As this is certainly the first obstacle standing in the way of the suit's success, Altman has presented a very novel argument in hopes of avoiding it: ad placement is first party content, so immunity should be removed even when ads are attached to third-party content. From the filing [PDF]:
By specifically targeting advertisements based on viewers and content, Defendants are no longer simply passing through the content of third parties. Defendants are themselves creating content because Defendants exercise control over what advertisement to match with an ISIS posting. Furthermore, Defendants’ profits are enhanced by charging advertisers extra for targeting advertisements at viewers based upon knowledge of the viewer and the content being viewed.
[...]
Given that ad placement on videos requires Google’s specific approval of the video according to Google’s terms and conditions, any video which is associated with advertising has been approved by Google.
Because ads appear on the above video posted by ISIS, this means that Google specifically approved the video for monetization, Google earned revenue from each view of this video, and Google shared the revenue with ISIS. As a result, Google provides material support to ISIS.
That's the 230 dodge presented in this lawsuit. The same goes for Twitter and Facebook, which also place ads into users' streams -- although any sort of "attachment" is a matter of perception (directly preceding/following "terrorist" third-party content, but not placed on the content). YouTube ads are pre-roll and are part of an automated process. The lawsuit claims ISIS is profiting from ad revenue, but that remains to be seen. Collecting ad revenue involves a verification process in which actual terrorists may not be willing to participate.
Going beyond this, the accusations are even more nebulous. The filing asserts that each of the named companies could "easily" do more to prevent use of their platforms by terrorists. To back up this assertion, the plaintiffs quote two tech experts (while portraying their thoughts as representative of "most" experts) who say shutting down terrorist communications would be easy.
Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network. “When Twitter says, ‘We can’t do this,’ I don’t believe that,” said Hany Farid, chairman of the computer science department at Dartmouth College. Mr. Farid, who co-developed a child pornography tracking system with Microsoft, says that the same technology could be applied to terror content, so long as companies were motivated to do so. “There’s no fundamental technology or engineering limitation,” he said. “This is a business or policy decision. Unless the companies have decided that they just can’t be bothered.”
According to Rita Katz, the director of SITE Intelligence Group, “Twitter is not doing enough. With the technology Twitter has, they can immediately stop these accounts, but they have done nothing to stop the dissemination and recruitment of lone wolf terrorists.”
Neither expert explains how speech can so easily be determined to be terrorism, or how blanket filtering/account blocking wouldn't result in a sizable amount of collateral damage to innocent users. Mr. Farid, in particular, seems to believe sussing out terrorist-supporting speech should be as easy as flagging known child porn with distinct hashes. A tweet isn't a JPEG, and speech can't be as easily determined to be harmful. It's easier said than done, but the argument here is the same as the FBI's argument with respect to "solving" the encryption "problem": the smart people could figure this out. They're just not trying.
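To make the distinction concrete, here's a minimal sketch of the difference -- purely illustrative, with hypothetical function names and a made-up blocklist, and using SHA-256 where a real system like PhotoDNA would use perceptual hashing. The point is that hash matching only recognizes content a human has already identified and flagged; there is no equivalent lookup table for speech.

```python
import hashlib

# Hypothetical blocklist: digests of images that a human has *already*
# reviewed and flagged. (PhotoDNA uses perceptual hashes that survive
# resizing and recompression; plain SHA-256 keeps this sketch simple,
# but the principle is the same: it's a lookup, not a judgment.)
KNOWN_BAD_HASHES = {
    "placeholder-digest-of-a-previously-flagged-image",
}

def is_known_bad_image(image_bytes: bytes) -> bool:
    """Flags a file only if it matches content that was previously flagged."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

def is_terrorist_speech(tweet_text: str) -> bool:
    """No such lookup exists for speech: the same words can be reporting,
    criticism, satire, or advocacy depending on context."""
    raise NotImplementedError("a contextual judgment, not a hash lookup")
```

The first function works only because the hard part -- deciding that a specific image is illegal -- was done by a person before its hash went on the list. There is no pre-built list to consult for the second.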
Altman's suggestion is even worse: just prevent Twitter accounts from being created that use any part of the handle of a previously-blocked account.
When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.
It's all so easy when you're in the business of holding US-based tech companies responsible for acts of worldwide terrorism. First, this solves nothing. If the incremental option goes away, new accounts will be created with other names. Pretty soon, a great many innocuous handles will be auto-flagged by the system, preventing users from creating accounts with the handle they'd prefer -- including users who've never had anything to do with terrorism. It's a seriously stupid idea, especially since the Twitter handle used in the example is "DriftOne" -- a completely innocuous handle the plaintiffs would like to see treated as inherently suspicious.
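For illustration, here's roughly what the filing's "trivial" rule looks like if implemented literally. This is a hypothetical sketch with made-up handle lists, not anything Twitter actually runs, but it shows both problems at once: the rule is evaded by anything other than a numeric suffix, and it locks out unrelated users whose names happen to share a "root" with a suspended account.

```python
import re

# Hypothetical set of handles belonging to previously suspended accounts.
BLOCKED_HANDLES = {"DriftOne", "JohnSmith"}

def name_root(handle: str) -> str:
    """The filing's 'name root': the handle with any trailing digits stripped."""
    return re.sub(r"\d+$", "", handle)

BLOCKED_ROOTS = {name_root(h) for h in BLOCKED_HANDLES}

def should_block(new_handle: str) -> bool:
    """The proposed rule: refuse any new handle whose root matches a
    previously suspended account's root."""
    return name_root(new_handle) in BLOCKED_ROOTS

print(should_block("DriftOne2"))     # True  -- the intended target
print(should_block("DriftOne_new"))  # False -- trivially evaded with a non-numeric suffix
print(should_block("JohnSmith42"))   # True  -- an unrelated John Smith is locked out
```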
And thank your various gods this attorney isn't an elected official, a law enforcement officer, or a supervisor at an intelligence agency. Because this assertion would be less ridiculous and more frightening if delivered by any of the above:
Sending out large numbers of requests to connect with friends/followers from a newly created account is also suspicious activity. As shown in the “DriftOne” example above, it is clear that this individual must be keeping track of those previously connected. When an account is taken down and then re-established, the individual then uses an automated method to send out requests to all those members previously connected. Thus, accounts for ISIS and others can quickly reconstitute after being deleted. Such activity is suspicious on its face.
We've seen a lot of ridiculous lawsuits fired off in the wake of tragedies, but this one appears to be the worst one yet.
The lawsuit asks the court to sidestep Section 230 and order private companies to start restricting speech on their platforms. That's censorship, and that's basically what the plaintiffs want -- along with fees, damages, etc. The lawsuit asks for an order finding that the named companies are violating the Anti-Terrorism Act, along with a request that the court "grant other and further relief as justice requires."
The lawsuit's allegations are no more sound than the assertion of the Congressman quoted in support of the plaintiffs' extremely novel legal theories:
“Terrorists are using Twitter,” Rep. Poe added, and “[i]t seems like it’s a violation of the law.”
This basically sums up the lawsuit's allegations: this all "seems" wrong and the court needs to fix it. The shooting in Orlando was horrific and tragic. But this effort doesn't fix anything and asks for the government to step in and hold companies accountable for third-party postings under terrorism laws. Not only that, but it encourages the government to pressure these companies into proactive censorship based on little more than some half-baked assumptions about how the platforms work and what tech fixes they could conceivably apply with minimal collateral damage.
Filed Under: cda 230, isis, keith altman, material support for terrorism, omar mateen, orlando, pulse nightclub, section 230, terrorism, victims
Companies: facebook, google, twitter
Reader Comments
Most technology experts
Even Wikipedia would flag this as being "Weasel Words" and a lawyer is trying to build a case around this premise?
Target speech with ads, not censorship?
Clock Zero
If I just wanted to watch the world burn, I'd email Rep. Poe and demand to know why the already existing and perfectly good No Fly and Terrorist Watch lists aren't being supplied to Twitter, Facebook and Google and all the ISPs to keep those on the lists from using their services.
I'd cc Trump, McCain and others, of course.
According to Wikipedia there were 10,000 names on the No Fly list in 2011, 21,000 in 2012, and 47,000 in 2013. (How long can they double it every year?) The separate Terrorist Watch List is already up to two million people.
Re:
Families: "We lost loved ones. But some cash could make us forget all about it."
Numerical suffix
A JohnSmith gets the first allowed instance, so another John Smith can't create an account, 'cause JohnSmith2 isn't allowed.
My ID is an example: afn29129 -- the 29129th account created on Alachua Freenet.
Third party liability : gun manufacturers
They are supporting terrorism, but the lawyers don't know it
The same group of Jared Cohen, Yasmin Dolatabati, Sasha Havlicek, Vidhya Ramalingham, and Quintan Witkorwitz runs the European Commission's national security program. This program was developed in response to the Anders Breivik shooting and is centered on the policy of increasing Muslim immigration to Europe to give the public more experience with diversity and to create a large voting bloc that will prevent the resurgence of white supremacy. They are using the power of the deep state to suppress any political opposition as if it were the second coming of Hitler. This is all justified as "countering violent extremism" because Breivik was a violent extremist.
They in turn are run by the Safa Group, an al-Qaeda financier funded by drug money and protected by Saudi intelligence operatives high up in the IC. The Safa Group was given authority over them by a person in the White House whose college expenses were paid by Saudi Arabia. Hillary Clinton authorized the placement of two Safa Group operatives inside the Berkman Center for Internet and Society where they educated key persons of influence like Larry Lessig and Jimbo Wales about Islamophobia. The rumor mill says the DIA twisted Comey's nuts to reopen the email investigation after learning about this.
Twitter is controlled by the Saudis through direct ownership of a large number of shares, through Saudi operative JP Mahew, and through the Islamist-controlled United Nations Alliance of Civilizations and its Soliya project. If you follow the unexplainable bans, most of the targets are either critics of censorship or critics of Islam.
Facebook is under pressure from Germany with threats to arrest Facebook employees and staged riots at Facebook's offices. Facebook has Muslim Brothers (extreme fundies) on its moderation staff and allows them to run rampant censoring information because they would put up a huge fuss otherwise. Zuckerberg has someone telling him what a good Jew he is for working against Islamophobia.
The purpose of all of these new codes of conduct is to sneak in a subsection that will justify banning people who criticize Islam or its adherents. You don't like throwing people off rooftops for being gay? You get banned and blacklisted. What do the gays say about this? People close to the top of GLAAD and GLSEN made some new and wealthy friends running arms through Benghazi so they're all for it. The rest of us are fatigued from seeing people getting banned for trivial reasons everywhere for years, so we won't be outraged when it happens.
Google recruited the PR officers for most of Silicon Valley. The executives only know what Google wants them to know.
Terrorists
Yes, Congressman; yes they are, and good on you for calling them out! Wait - you weren't talking about Altman and Katz? Never mind.
Missing the point
Re: Target speech with ads, not censorship?
Re:
I imagine up to around 111,900,000 when they run out of non-whites to add to it.
Re: Missing the point
Most of their arguments are bunk BUT
But this one:
Given that ad placement on videos requires Google’s specific approval of the video according to Google’s terms and conditions, any video which is associated with advertising has been approved by Google.
Because ads appear on the above video posted by ISIS, this means that Google specifically approved the video for monetization, Google earned revenue from each view of this video, and Google shared the revenue with ISIS. As a result, Google provides material support to ISIS.
might have some traction because it appears that Google have exercised direct editorial control in this case - at least according to what their t's and c's say.
The fact that in practice they don't review this material doesn't get them off the hook.
Of course it would be trivial for Google to change their terms to get off this particular hook - but I guess they have some other reason why they don't want to.
Ban Breathing
“Terrorists are using Twitter,” Rep. Poe added, and “[i]t seems like it’s a violation of the law.”
Terrorists are breathing air! This seems wrong, we should ban air!
Re: Re: Target speech with ads, not censorship?
Shooting the messenger is all the rage these days.
The key word there is *known*. These systems don't automate the determination of whether content is illegal in the first place; they merely use hashes to detect when the same images turn up again after being reported. On top of that, child porn is de facto illegal -- there's no context in which it can be acceptable. Words, however, can mean very different things depending on context, where they may be innocent speech or "terrorist words". You can't therefore filter the words without filtering legitimate speech, and then, if some previously undetected "terrorist" speech makes it through (and it will), the company is suddenly liable.
That's the problem with these kinds of arguments. They understand superficially what can be done, but they don't understand the fundamental differences in practice.
Re: Re: Re: Target speech with ads, not censorship?
Re:
No Tweet List
No FaceBook List
No Google List
This is gonna be awesome. I want a list of lists, and make it meta.
Re: Terrorists
Re: They are supporting terrorism, but the lawyers don't know it
1. The Saudis have made substantial investments in social media.
https://www.almasdarnews.com/article/saudi-arabia-might-using-money-control-social-media/
2. Facebook has been known to ban anti-islam speech quite readily.
http://www.faithfreedom.org/facebook-is-enforcing-islamic-blasphemy-laws/
3. Zuckerberg was overheard discussing with Merkel how to silence critics of her immigration policy.
Re:
In practice the tech companies do censor a lot of stuff when it suits them -- and of course people on this forum say that this is fine because they are private companies and can do what they want -- but apart from the obvious implications for free speech when a major platform does this, it will also encourage the technically illiterate to believe that it can be done in a universally watertight way.
Down with Elvis's pelvis! It must be CENSORED!
Re: Re: They are supporting terrorism, but the lawyers don't know it
But, for the record:
"The Saudis have made substantial investments in social media"
...and numerous other things, such as the "traditional" media (Fox springs to mind, though the level of investment is often somewhat exaggerated). Unless you can show that a) the investment overshadows other forms of media & communications and b) they have some actual control over content, that's interesting trivia but not directly relevant. We're talking common sense reasons why they would not censor content, no conspiracies required.
"Facebook has been known to ban anti-islam speech quite readily"
I don't think that anyone is saying they haven't. The problem is, they can't possibly be expected to police all of it before it's posted, nor should they be held directly responsible for the content or effect of any that slips through the net. Willingness to censor in extreme circumstances but unable to censor everything is not a contradiction.
It's also worth noting that any overzealous censorship of such things always leads to further conspiracy theories (e.g. if they remove some right-wing US organisation's similar post while cracking down on ISIS posts, they're accused of removing it because they support ISIS).
"Zuckerberg was overheard discussing with Merkel how to silence critics of her immigration policy."
I notice this is the one you didn't provide a citation for. It's a larger conspiracy theory that reaches far beyond what's being discussed here, however. Even if true, was there any actual action taken, or was it just a private conversation that sounds scary when taken out of context? Was it recorded, or just overheard, meaning we're depending on the account of a third party -- and if so, what are their biases? Etc.
Even if it happened, what do they mean by "critics"? Anyone who dares question them, or organised groups deliberately spreading inflammatory lies about said immigrants? There's a difference.
Re: Shooting the messenger is all the rage these days.
Most people agree this is a dumb case
Anyway, can lawyers like this get disbarred for taking advantage of tragedies?
The families of the Pulse shooting have gone through enough, and they don't need a scammy lawyer giving them false hope that somehow Twitter and Facebook "will be held responsible"... meaning "I can file a lawsuit and hopefully get a lot of money for you".
Re: Most people agree this is a dumb case
Re: Third party liability : gun manufacturers
Nope, I've got nothing. Maybe my fellow readers from across the Pond can help me out: this country's government makes it very hard for citizens to obtain guns. Requirements: https://www.gov.uk/.../Guidance_on_Firearms_Licensing_Law_April_2016_v20.pdf
Now I know from my experience of reading TD comments that cause =/= correlation, and vice versa. That being the case, how come a) the fact that criminals can and do get hold of guns hasn't resulted in a mad spree shooting within the last ten years (correct me if I'm wrong) and b) how come we citizens aren't subject to the parade of horribles our gun-loving American cousins are forever warning us will happen if the citizens are disarmed?