Families Of Orlando Shooting Victims Sue Twitter, Facebook, And Google For 'Supporting Terrorism'
from the worst-attempt-yet dept
Remember that time when Google, Twitter, and Facebook helped shoot up a nightclub in Orlando, Florida? Me neither. But attorney Keith Altman does. He's representing the families of three of the victims of the Pulse nightclub shooting in a lawsuit alleging [sigh] that these tech companies are somehow responsible for this act of terrorism.
The lawsuit, first reported by Fox News, was filed Monday in federal court in the eastern district of Michigan on behalf of the families of Tevin Crosby, Javier Jorge-Reyes and Juan Ramon Guerrero.
The lawsuit is the latest to target popular Internet services for making it too easy for the Islamic State to spread its message.
Like many similar lawsuits, this one is doomed to fail. First off, Section 230 immunizes these companies against being held responsible for third-party content. Since that's the first obstacle standing in the way of the suit's success, Altman has presented a novel argument in hopes of avoiding it: ad placement is first-party content, so immunity should be stripped whenever ads are attached to third-party content. From the filing [PDF]:
By specifically targeting advertisements based on viewers and content, Defendants are no longer simply passing through the content of third parties. Defendants are themselves creating content because Defendants exercise control over what advertisement to match with an ISIS posting. Furthermore, Defendants’ profits are enhanced by charging advertisers extra for targeting advertisements at viewers based upon knowledge of the viewer and the content being viewed.
[...]
Given that ad placement on videos requires Google’s specific approval of the video according to Google’s terms and conditions, any video which is associated with advertising has been approved by Google.
Because ads appear on the above video posted by ISIS, this means that Google specifically approved the video for monetization, Google earned revenue from each view of this video, and Google shared the revenue with ISIS. As a result, Google provides material support to ISIS.
That's the 230 dodge presented in this lawsuit. The same goes for Twitter and Facebook, which also place ads into users' streams -- although any sort of "attachment" there is a matter of perception (the ads directly precede or follow "terrorist" third-party content, but aren't placed on the content itself). YouTube ads are pre-roll and placed by an automated process. The lawsuit claims ISIS is profiting from ad revenue, but that remains to be seen: collecting ad revenue involves a verification process that actual terrorists may be unwilling to complete.
Going beyond this, the accusations are even more nebulous. The filing asserts that each of the named companies could "easily" do more to prevent terrorists from using their platforms. To back up this assertion, the plaintiffs quote two tech experts (while portraying their views as representative of "most" experts) who say shutting down terrorist communications would be easy.
Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network. “When Twitter says, ‘We can’t do this,’ I don’t believe that,” said Hany Farid, chairman of the computer science department at Dartmouth College. Mr. Farid, who co-developed a child pornography tracking system with Microsoft, says that the same technology could be applied to terror content, so long as companies were motivated to do so. “There’s no fundamental technology or engineering limitation,” he said. “This is a business or policy decision. Unless the companies have decided that they just can’t be bothered.”
According to Rita Katz, the director of SITE Intelligence Group, “Twitter is not doing enough. With the technology Twitter has, they can immediately stop these accounts, but they have done nothing to stop the dissemination and recruitment of lone wolf terrorists.”
Neither expert explains how speech can so easily be determined to be terrorism, or how blanket filtering and account blocking wouldn't result in a sizable amount of collateral damage to innocent users. Mr. Farid, in particular, seems to believe sussing out terrorist-supporting speech should be as easy as flagging known child porn with distinct hashes. But a tweet isn't a JPEG, and speech can't be so easily determined to be harmful. It's easier said than done, yet the argument here is the same as the FBI's argument with respect to "solving" the encryption "problem": the smart people could figure this out. They're just not trying.
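For context, the system Farid co-developed (PhotoDNA) works by fingerprinting known illegal images and flagging re-uploads that match. Here's a minimal sketch of that matching model -- the fingerprint database and function names are hypothetical, and a plain cryptographic hash stands in for Microsoft's actual perceptual hash:

```python
import hashlib

# Hypothetical database of fingerprints for known illegal images.
# (PhotoDNA uses a perceptual hash so resized or re-encoded copies
# still match; plain SHA-256 is used here only to show the matching
# model, and would miss even trivially altered copies.)
KNOWN_BAD_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def flag_known_image(image_bytes: bytes) -> bool:
    # A pure lookup: this works because the target is a fixed,
    # already-catalogued file.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_FINGERPRINTS

def flag_tweet(text: str) -> bool:
    # Applied to speech, the same trick only catches byte-identical
    # re-posts of sentences someone has already blacklisted. It says
    # nothing about whether a *new* sentence is recruitment, news
    # reporting, or a joke.
    return hashlib.sha256(text.encode("utf-8")).hexdigest() in KNOWN_BAD_FINGERPRINTS
```

Matching a catalog of known files is a dictionary lookup. Deciding whether novel speech "supports terrorism" is a judgment call, and hash tables don't make judgment calls.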
Altman's suggestion is even worse: just block the creation of any new Twitter account whose handle reuses any part of a previously blocked account's handle.
When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.
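To be fair, the check the filing describes really is trivial to write -- which is exactly the problem. Here's a hypothetical sketch of the proposed "root plus incremented suffix" detection (the regex, blocklist, and example handles are illustrative, not from the filing):

```python
import re

# Hypothetical roots of handles belonging to previously suspended accounts.
BLOCKED_ROOTS = {"driftone"}

# Split a handle into a root and a trailing numerical suffix.
SUFFIX_RE = re.compile(r"^(.*?)(\d+)$")

def is_suspicious_handle(handle: str) -> bool:
    match = SUFFIX_RE.match(handle.lower())
    root = match.group(1) if match else handle.lower()
    return root in BLOCKED_ROOTS

print(is_suspicious_handle("DriftOne45"))    # True: the filing's scenario
print(is_suspicious_handle("driftone2024"))  # True: an unrelated user's gamer tag, also blocked
print(is_suspicious_handle("Dr1ftOne"))      # False: one character swap defeats the check
```

Note how it misfires in both directions: an innocent user's preferred handle gets blocked while a one-character variation sails right through.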
It's all so easy when you're in the business of holding US-based tech companies responsible for acts of worldwide terrorism. First, this solves nothing. If the incremental option goes away, new accounts will simply be created under other names. Meanwhile, a great many innocuous handles will be auto-flagged by the system, preventing users from creating accounts with the handle they'd prefer -- including users who've never had anything to do with terrorism. It's a seriously stupid idea, especially since the Twitter handle used in the filing's example is "DriftOne" -- a completely innocuous handle the plaintiffs would like to see treated as inherently suspicious.
And thank your various gods this attorney isn't an elected official, a law enforcement officer, or a supervisor at an intelligence agency. Because this assertion would be less ridiculous and more frightening if delivered by any of the above:
Sending out large numbers of requests to connect with friends/followers from a newly created account is also suspicious activity. As shown in the “DriftOne” example above, it is clear that this individual must be keeping track of those previously connected. When an account is taken down and then re-established, the individual then uses an automated method to send out requests to all those members previously connected. Thus, accounts for ISIS and others can quickly reconstitute after being deleted. Such activity is suspicious on its face.
We've seen a lot of ridiculous lawsuits fired off in the wake of tragedies, but this one appears to be the worst one yet.
The lawsuit asks the court to sidestep Section 230 and order private companies to start restricting speech on their platforms. That's censorship, and that's basically what the plaintiffs want -- along with fees, damages, etc. The lawsuit asks for an order finding that the named companies are violating the Anti-Terrorism Act, and for the court to "grant other and further relief as justice requires."
The lawsuit's allegations are no more sound than the assertion of the Congressman quoted in support of the plaintiffs' extremely novel legal theories:
“Terrorists are using Twitter,” Rep. Poe added, and “[i]t seems like it’s a violation of the law.”
This basically sums up the lawsuit's allegations: this all "seems" wrong and the court needs to fix it. The shooting in Orlando was horrific and tragic. But this effort doesn't fix anything. It asks the government to step in and hold companies accountable for third-party postings under terrorism laws. Worse, it encourages the government to pressure these companies into proactive censorship based on little more than half-baked assumptions about how the platforms work and what tech fixes they could conceivably apply with minimal collateral damage.
Filed Under: cda 230, isis, keith altman, material support for terrorism, omar mateen, orlando, pulse nightclub, section 230, terrorism, victims
Companies: facebook, google, twitter