from the and-for-fun,-the-cfaa-and-scraping dept
Last month, at the COMO Content Moderation Summit in Washington DC, I co-ran a "You Make the Call" session with Emma Llanso from CDT. The idea was to turn the audience into a content moderation/trust & safety team of a fictionalized social media platform. We showed numerous examples of content or accounts that were "flagged" and then showed the associated terms of service, and had the entire audience vote on what to do. One of the fictional examples involved someone posting a link to a third-party website "contactinfo.com" claiming to have the personal phone and email contact info of Harvey Weinstein and urging people "you know what to do!" with a hashtag. The relevant terms of service included this: "You may not post personal information about others without their consent."
The audience voting was pretty mixed on this. 47% of the audience punted on the question, choosing to escalate it to a supervisor as they felt they couldn't decide whether to leave the content up or take it down. 32% felt it should just be taken down. 10% said to just leave it up and another 10% said to put a content warning flag on the content. We joked a bit during the session that some of these examples were "ripped from the headlines" but apparently we predicted the headlines in this case, because there are two stories this week that touch on exactly this kind of thing.
Example one is the story that came out yesterday, in which Twitter chose to start locking the accounts of users who were either tweeting Trump senior advisor Stephen Miller's cell phone number, or merely linking to a Splinternews article that published his cell phone number (which I'm guessing has since been changed...).
Splinternews decided to publish Miller's phone number after multiple news reports attributed to Miller the inhumane* decision to separate the children of asylum seekers from their parents, a plan he has defended. Other reports noted that Miller is enjoying all of the controversy over the policy. Splinternews, citing Donald Trump's own history of giving out the phone numbers of people who anger him, thought it was only fair that people be able to reach out to Miller.
This is -- for fairly obvious reasons -- a controversial decision. I think most news organizations would never do such a thing. Not surprisingly, the number spread rapidly on Twitter, and Twitter started locking all of those accounts until the tweets were removed. That seems at least well within reason under Twitter's rules that explicitly state:
You may not publish or post other people's private information without their express authorization and permission.
But that question gets a lot sketchier when it comes to locking the accounts of people who merely linked to the Splinternews article. As in our fictionalized example, those people are not actually publishing or posting anyone's private info. They are posting a link to a third party that purports to have that information. And, of course, in this case the situation is complicated even further than in our fictionalized example, because Splinternews is a news organization (owned by Univision), and Twitter has also said that it has a "newsworthy" exception to its rules.
Personally, I think it's the wrong call to lock the accounts of those linking to the news story, but... as we discovered in our own sample version, it's not an easy call, and lots of people have strong opinions one way or the other. Indeed, part of the reason Twitter may have decided to do this was that supporters of Trump/Miller started calling out the article as an example of doxxing, and claiming that leaving it up showed that Twitter was unfairly biased against them. It is a no-win situation.
And, of course, it didn't take long before people started coming up with clever workarounds. Parker Higgins, for one, cited the infamous 09F9 controversy -- in which the MPAA tried to censor the revelation of a cryptographic key that broke its preferred DRM, and people responded by posting variations on the key, including a color chart in which the hex codes of the colors spelled out the key -- and posted the following:
Would Twitter lock his account for posting a two-color image? At some point, the whole thing gets... crazy. That's not to argue that revealing someone's private cell phone number is a good thing -- no matter how you feel about Miller or the border policy. But just on the content moderation side, it puts Twitter in a no-win situation in which people are going to be pissed off no matter what it does. Oh, and of course, it also helped create something of a Streisand Effect. I certainly hadn't heard about the Splinternews article or that people were passing around Miller's phone number until the story broke about Twitter whacking moles on its site.
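To see why that question is more than rhetorical, consider how trivial the 09F9-style trick is to pull off. Here's a minimal Python sketch -- my own illustration, not Higgins' actual image, and the digits below are a made-up placeholder, not anyone's real number -- that round-trips arbitrary bytes through innocuous-looking hex color codes:

```python
# Pack arbitrary bytes into CSS-style #RRGGBB hex colors, three bytes per
# color. The "secret" below is a made-up placeholder, not a real number.

def bytes_to_colors(data: bytes) -> list[str]:
    """Split raw bytes into #RRGGBB hex color strings."""
    padded = data + b"\x00" * (-len(data) % 3)  # pad to a multiple of 3 bytes
    return ["#" + padded[i:i + 3].hex().upper() for i in range(0, len(padded), 3)]

def colors_to_bytes(colors: list[str]) -> bytes:
    """Invert bytes_to_colors (caller strips any trailing zero padding)."""
    return b"".join(bytes.fromhex(c.lstrip("#")) for c in colors)

if __name__ == "__main__":
    secret = b"555-0199"  # placeholder digits
    colors = bytes_to_colors(secret)
    print(colors)  # ['#353535', '#2D3031', '#393900']
    assert colors_to_bytes(colors).rstrip(b"\x00") == secret
```

The decoding step is trivial, and that's exactly the moderation problem: the number is "in" the image in the same sense the 09F9 key was "in" the color chart, but no keyword filter will ever match it.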
And that takes us to the second example, which happened a day earlier -- and was also in response to people's quite reasonable* anger about the border policy. Sam Lavigne decided to make something of a public statement about how he felt about ICE by scraping** LinkedIn for profile information on everyone who works at ICE (and who has a LinkedIn public profile). His database included 1595 ICE employees. He wrote a Medium blog post about this, posted the repository to Github and another user, Russel Neiss, created a Twitter account (@iceHRgov) that tweeted out info about each of those employees from that database. Notice that none of those are linked. That's because all three companies took them down (though you can still find archives of the Medium post). There was also an archive of the Github repository, but it has since been memory-holed as well.
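For those wondering what "scraping" actually involves here, it's nothing more exotic than downloading pages any browser could load and saving the visible fields. A minimal, hypothetical Python sketch -- using the requests and BeautifulSoup libraries against a placeholder site, emphatically not LinkedIn, and not Lavigne's actual code, which isn't reproduced here -- might look like this:

```python
# A generic sketch of the technique described above: fetch public profile
# pages and collect a few visible fields into a CSV. The base URL and CSS
# selectors are hypothetical placeholders; scraping LinkedIn itself
# violates its terms of service (see the second footnote below).
import csv

import requests
from bs4 import BeautifulSoup

def scrape_profile(url: str) -> dict:
    """Fetch one public profile page and pull out visible name/title fields."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    def text(selector: str) -> str:
        node = soup.select_one(selector)  # selectors are made up for this sketch
        return node.get_text(strip=True) if node else ""

    return {"name": text(".profile-name"), "title": text(".profile-title"), "url": url}

def scrape_all(urls: list[str], out_path: str = "profiles.csv") -> None:
    """Scrape each profile URL and write the collected rows to a CSV file."""
    rows = [scrape_profile(u) for u in urls]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "title", "url"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    base = "https://example.com/staff"  # hypothetical public directory
    scrape_all([f"{base}/{i}" for i in range(1, 4)])
```

That's the whole trick: fetch public pages, pull out fields anyone can already see, and write them to a file -- which is part of why the takedowns struck so many people as odd.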
Again... this raises a lot of questions. Github claimed that it removed the page for "violating community guidelines" -- specifically around "doxxing and harassment, and violating a third party's privacy." Medium claimed that the post violated rules against "doxxing" and specifically the "aggregation of publicly available information to target, shame, blackmail, harass, intimidate, threaten or endanger." Twitter, in Twitter's usual way, is not commenting. LinkedIn put out a statement saying: "We do not support or condone what immigration authorities are doing at the border, but we can’t allow the illegal use of our member data. We will take appropriate action to ensure our members’ data is protected and used properly."
Many people point out that all of this feels kind of ridiculous, seeing as it's all public info that the individuals chose to reveal about themselves on a public website. While Medium's expansive definition of doxxing makes things interesting by including an intent standard for releasing the info, even when it's publicly available, the whole thing, again, demonstrates how complex this is. I know that some people will claim that these are easy calls -- but, just for fun, try flipping the equation a bit. If you're anti-Trump, how would you feel if a prominent alt-right person compiled and posted your info -- even if publicly available -- on a site where alt-right folks gather, with the clear intent of having hordes of Trump trolls harass you? Be careful about the precedent you set.
If it were up to me, I think I would have come down differently than Medium, Github and Twitter in this case. My rationale: (1) all of this info was public information; (2) those individuals chose to place it on a public website, knowing it was public; (3) they are all employed by the federal government, meaning they are public servants; and (4) while the compilation was done by someone who is clearly against the border policy, Lavigne never encouraged or suggested harassment of ICE agents. Instead, he wrote: "While I don’t have a precise idea of what should be done with this data set, I leave it here with the hope that researchers, journalists and activists will find it useful." He separately noted that he believed "it's important to document what's happening, and by whom." That seems to actually make a strong point in favor of leaving the data up, as there is value in documenting what's happening.
That said, reasonable people can disagree (even if there should be no disagreement about how inhumane the border policy has been*) about the appropriate way for different platforms to handle these situations -- taking into account that this situation could play out with very different players in the future, and that there is value in being consistent.
This is the very point that we were demonstrating with that game that we ran at COMO. Many people seem to think that content moderation decisions are easy: you just take down the content that is bad, and leave up the content that is good. But it's pretty rare that the content is easily classified in one of those categories. There is an enormous gray area -- and much of it involves nuance and context, which is not always easy to come by -- and which may look incredibly different depending on where you sit and what kind of world you think we live in. I still think there are strong arguments that the platforms should have left much of the content discussed in this post up, but I'm not the one making that call.
When we ran that game in DC last month, it was notable that on every single example we used -- even the ones we thought were "easy calls" -- every one of the options was selected by at least some audience members. That is, there was not a single situation in our examples in which everyone agreed on what should be done. Since there were four options, and all four were chosen by at least one person in every single example, it shows just how difficult it really is to make these calls. They are subjective. And what plays into that subjective decision making includes your own views, your own perspective, your own reading of the content and the rules -- and sometimes third-party factors, including how people are reacting and what public pressure you're getting (in both directions). It is an impossible situation.
This is also why the various calls to mandate that platforms do this or face legal liability are even more ridiculous and dangerous. There are no "right" answers to these decisions. There are solutions that seem better to lots of people, but plenty of others will disagree. If you think you know the "right" way that all of these questions should be handled, I guarantee you're wrong, and if you were in charge of these platforms, you'd end up feeling just as conflicted.
This is why it's really time to start thinking about and talking about better solutions. Simply calling on platforms to be the final arbiters of what goes online and what stays offline is not a workable solution.
* Just a side note: if you are among the small minority of ethically-challenged individuals who get upset that I describe the policy as inhumane: fuck off. The policy is inhumane, and if you're defending it, you should seriously take time to re-evaluate your ethics and your life choices. On a separate note, if you are among the people who are going to try to justify this policy as "but Obama/others did it too," the same applies. Whataboutism is no argument here. The policy is inhumane no matter who did it, and pointing out that others did it too doesn't change that. And, as inhumane as it may have been in the past, it has been severely ramped up. There is no defense for it. Attempting to defend it only serves to out yourself as a horrible person who has issues. Seriously: get help.
** This doesn't even really fit in with this story, but scraping LinkedIn is (stupidly) incredibly dangerous. LinkedIn has a history of suing people for scraping public info off of its site. And even though it's lost some of those cases, the company appears to take a pretty aggressive stance towards scrapers. We can argue about how ridiculous this is, but, dammit, this post is already too long talking about other stuff, so we'll discuss it separately.
Filed Under: activism, content moderation, doxing, harassment, ice, internet platforms, phone numbers, stephen miller, takedowns
Companies: github, linkedin, medium, twitter