Another Ridiculous Lawsuit Hopes To Hold Social Media Companies Responsible For Terrorist Attacks
from the from-an-alternate-reality-where-Section-230-doesn't-exist dept
Yet another lawsuit has been filed against social media companies in hopes of holding them responsible for terrorist acts. The family of an American victim of a terrorist attack in Europe is suing Twitter, Facebook, and Google for providing material support to terrorists. [h/t Eric Goldman]
The lawsuit [PDF] is long and detailed, describing the rise of ISIS and the terrorist group's use of social media. It may be an interesting history lesson, but it's all meant to steer judges towards finding violations of anti-terrorism laws rather than recognizing the obvious immunity given to third-party platforms by Section 230.
When it does finally get around to discussing the issue, the complaint from 1-800-LAW-FIRM (not its first Twitter terrorism rodeo…) attacks immunity from an unsurprising angle. The suit attempts to portray the placement of ads on alleged terrorist content as somehow being equivalent to Google, Twitter, et al creating the terrorist content themselves.
When individuals look at a page on one of Defendants’ sites that contains postings and advertisements, that configuration has been created by Defendants. In other words, a viewer does not simply see a posting; nor does the viewer see just an advertisement. Defendants create a composite page of content from multiple sources.
Defendants create this page by selecting which advertisement to match with the content on the page. This selection is done by Defendants’ proprietary algorithms that select the advertisement based on information about the viewer and the content being [viewed]. Thus there is a content triangle matching the postings, advertisements, and viewers.
Although Defendants have not created the posting, nor have they created the advertisement, Defendants have created new unique content by choosing which advertisement to combine with the posting with knowledge about the viewer.
Thus, Defendants’ active involvement in combining certain advertisements with certain postings for specific viewers means that Defendants are not simply passing along content created by third parties; rather, Defendants have incorporated ISIS postings along with advertisements matched to the viewer to create new content for which Defendants earn revenue, and thus providing material support to ISIS.
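Strip away the "content triangle" framing and what's being described is garden-variety contextual ad matching. Here's a minimal sketch of the idea -- everything in it (the function, the scoring, the sample data) is invented for illustration and is not the defendants' actual code:

    # Purely illustrative: pick the ad whose keywords best overlap the post
    # text and the viewer's interests. Names, scoring, and data are made up.
    def select_ad(post_text, viewer_interests, ads):
        post_words = set(post_text.lower().split())

        def score(ad):
            keywords = set(ad["keywords"])
            return len(keywords & post_words) + len(keywords & viewer_interests)

        return max(ads, key=score)

    ads = [
        {"id": "sneakers-01", "keywords": ["running", "shoes"]},
        {"id": "travel-17", "keywords": ["flights", "travel", "hotels"]},
    ]
    post = "cheap flights and hotels for your next trip"
    # The "composite page" is just the third-party post plus the best-scoring ad.
    print(select_ad(post, {"travel"}, ads)["id"])  # travel-17

The platform's only contribution in that sketch is the matching; the underlying posting is still someone else's speech, which is exactly the distinction the complaint is trying to argue around.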
This argument isn't going to be enough to bypass Section 230 immunity. According to the law, the only thing social media companies are responsible for is the content of the ads they place. That they're placed next to alleged terrorist content may be unseemly, but it's not enough to hurdle Section 230 protections. Whatever moderation these companies engage in does not undercut these protections, even when their moderation efforts fail to weed out all terrorist content.
The lawsuit then moves on to making conclusory statements about these companies' efforts to moderate content, starting with an assertion not backed by the text of the filing.
Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network.
Following this sweeping assertion, two (2) tech experts are cited, both of whom appear to be speaking only for themselves. More assertions follow, with 1-800-LAW-FIRM drawing its own conclusions about how "easy" it would be for social media companies with millions of users to block the creation of terrorism-linked accounts [but how, if nothing is known of the content of posts until after the account is created?] and to eliminate terrorist content as soon as it goes live.
The complaint then provides an apparently infallible plan for preventing the creation of "terrorist" accounts. Noting the incremental numbering used by accounts repeatedly banned/deleted by Twitter, the complaint offers this "solution."
What the above example clearly demonstrates is that there is a pattern that is easily detectable without reference to the content. As such, a content-neutral algorithm could be easily developed that would prohibit the above behavior. First, there is a text prefix to the username that contains a numerical suffix. When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.
Prohibiting this conduct would be simple for Defendants to implement and not impinge upon the utility of Defendants sites. There is no legitimate purpose for allowing the use of fixed prefix/incremental numerical suffix name.
Take a long, hard look at that last sentence. This is the sort of assertion someone makes when they clearly don't understand the subject matter. There are plenty of "legitimate purposes" for appending incremental numerical suffixes to social media handles. By doing this, multiple users can have the same preferred handle while allowing the system (and the users' friends/followers) to differentiate between similarly-named accounts. Everyone who isn't the first person to claim a certain handle knows the pain of being second... third… one-thousand-three-hundred-sixty-seventh in line. While this nomenclature process may allow terrorists to easily reclaim followers after account deletion, there are plenty of non-ominous reasons for allowing incremental suffixes.
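To be fair, the complaint is right that the narrow check it describes would be trivial to write. A hypothetical sketch -- the function name and sample handles are invented for illustration, and this is not anything the platforms actually run -- might look like:

    import re

    # Flag a new handle that looks like a previously banned handle's root
    # with a bumped numeric suffix (the pattern the complaint describes).
    SUFFIXED = re.compile(r"^(?P<root>.+?)(?P<num>\d+)$")

    def looks_like_incremented_rebrand(new_handle, banned_handles):
        match = SUFFIXED.match(new_handle)
        if not match:
            return False
        root, num = match.group("root"), int(match.group("num"))
        for banned in banned_handles:
            banned_match = SUFFIXED.match(banned)
            if (banned_match and banned_match.group("root") == root
                    and num > int(banned_match.group("num"))):
                return True
        return False

    print(looks_like_incremented_rebrand("newsfeed58", {"newsfeed57"}))   # True
    print(looks_like_incremented_rebrand("totally_new", {"newsfeed57"}))  # False

But that's the problem: a filter this naive catches exactly one renaming pattern and nothing else, while also tripping over any legitimate user who happens to pick a banned handle's root with a higher number -- which is why "easily detectable without reference to the content" proves far less than the complaint thinks it does.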
That's indicative of the lawsuit's mindset: terrorist attacks are the fault of social media platforms because they've "allowed" terrorists to communicate. But that's completely the wrong party to hold responsible. Terrorist attacks are performed by terrorists, not social media companies, no matter how many ads have been placed around content litigants view as promoting terrorism.
Finally, the lawsuit sums it all up thusly: Monitoring content is easy -- therefore, any perceived lack of moderation is tantamount to direct support of terrorist activity.
Because the suspicious activity used by ISIS and other nefarious organizations engaged in illegal activities is easily detectable and preventable and that Defendants are fully aware that these organizations are using their networks to engage in illegal activity demonstrates that Defendants are acting knowingly and recklessly allowing such illegal conduct.
Unbelievably, the lawsuit continues from there, going past its "material support" Section 230 dodge to add claims of wrongful death it tries to directly link to Twitter, et al's allegedly inadequate content moderation.
The conduct of each Defendant was a direct, foreseeable and proximate cause of the wrongful deaths of Plaintiffs’ Decedent and therefore the Defendants’ are liable to Plaintiffs for their wrongful deaths.
This is probably the worst "Twitter terrorism" lawsuit filed yet, but quite possibly exactly what you would expect from a law firm with a history of stupid social media lawsuits and a phone number for a name.
Filed Under: cda 230, isis, material support, section 230, social media, terrorism
Companies: 1-800-law-firm, facebook, google, twitter
Reader Comments
Are they responsible for even that much?
If so, do I have a legal cause of action against the maintainers of a website if I go there and an ad tries to send malware to my computer?
Exactly! Use of an outdated identifier scheme like phone numbers shows just how far behind the times they are. They really need to rebrand to name themselves after a URL, the modern way to uniquely identify oneself.
Things that are obviously impossible:
people who are not terrorists using numbered prefixes
terrorists using letters for prefixes
terrorists attaching numbers in the middle of the username
terrorists attaching letters in the middle of the username
terrorists using numbered suffixes
terrorists using lettered suffixes
terrorists changing some of the existing letters in the username
...
Ads are not helping the terrorists.
Adding ads to terrorist content is an effective way to reduce its reach!
Also: Banks, landlords, shops... and the governments claiming to protect us pave roads for them. Deliver mail!
Sue them all.
Re:
(Shit, I used terro*** too. Sue me.)
Re:
Don't give them ideas; with their track record I would not be surprised if they actually tried to sue those others.
"could and should"
Really? Give real, workable examples of HOW to do this, please?
Re: "could and should"
'Nerd harder', clearly if they do that then they can do it, as all things are possible if one merely nerds harder.
Re: "could and should"
Then you could set a bit in the stream ... call it the terrorist bit, and then all browsers could detect said terrorist bit and not display the terrorist content.
You would then need to generate a lot of fake traffic to the site; don't want them getting suspicious. Should be real easy to create some bots that act like real people ... no one will notice or suspect a thing; they could even use the phone book to generate names.
Re: Ads are not helping the terrorists.
(Oops. Begin lawsuits aimed at Adblock Plus, Ghostery, Disconnect, etc. for remaining functional for terrorists.)
I think the telephone network should ban the ability to get sequential numbers. Think of the children!
Re:
Ban them already.