Facebook, Twitter Consistently Fail At Distinguishing Abuse From Calling Out Abuse

from the the-wrong-approach dept

Time and time again, we see that everyone who doesn't work in the field of trust and safety for an internet platform seems to think that it's somehow "easy" to filter out "bad" content and leave up "good" content. It's not. This doesn't mean that platforms shouldn't try to deal with the issue. They have perfectly good business reasons to want to limit people using their systems to abuse and harass and threaten other users. But when you demand that they be legally responsible -- as Germany (and then Russia) recently did -- bad things happen, and quite frequently those bad things happen to the victims of abuse or harassment or threats.

We just wrote about Twitter's big failure in temporarily suspending Popehat's account after he posted a screenshot of a threat he'd received from a lawyer who's been acting like an internet tough guy for a few years now. In that case, the person who reviewed the tweet keyed in on the fact that Ken White had failed to redact the contact information of the guy threatening him -- which at the very least raises the question of whether Twitter considers threats to destroy someone's life less of an issue than revealing that guy's contact information, which was already publicly available from a variety of sources.

But, it's important to note that this is not an isolated case. In just the past few days, we've seen two other major examples of social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators. The first is the story of Francie Latour, as told in a recent Washington Post article, where she explains how she went on Facebook to vent about a man in a Boston grocery store loudly using the n-word to describe her and her two children, and Facebook's response was to ban her from Facebook.

But within 20 minutes, Facebook deleted her post, sending Latour a cursory message that her content had violated company standards. Only two friends had gotten the chance to voice their disbelief and outrage.

The second story comes from Ijeoma Oluo, who posted to Medium about a strikingly similar situation. In this case, she made what seems to me to be a perfectly innocuous joke about feeling nervous for her safety as a black woman in a place with many white people. But a bunch of rabid, angry people online got mad at her about it and started sending all sorts of abusive tweets and hateful messages to her on Facebook. She actually says that Twitter was pretty good at responding to reports of abusive content. But, as in the Latour story, Facebook responded by banning Oluo for talking about the harassment she was receiving.

And finally, facebook decided to take action. What did they do? Did they suspend any of the people who threatened me? No. Did they take down Twitchy’s post that was sending hundreds of hate-filled commenters my way? No.

They suspended me for three days for posting screenshots of the abuse they have refused to do anything about.

That, of course, is a ridiculous response by Facebook. And Oluo is right to call them out on it, just as Latour and White were right to point out the absurdity of their situations.

But, unfortunately, the response of many people to this kind of thing is just "do better, Facebook" or "do better, Twitter." Or, in some cases, they even go so far as to argue that these companies should be legally mandated to take down some of the content. But this will backfire for the exact same reason that these ridiculous situations happened in the first place. When you run a platform and you need to make thousands or hundreds of thousands or millions of these kinds of decisions a day, you're going to make mistakes. And that's not because platforms are "bad" at this; it's just the nature of the beast. With that many decisions -- many of which involve people demanding immediate action -- there's no easy way to have someone drop in and figure out all of the context in the short period of time they have to make a decision.

On top of that, because this has to be done at scale, you can't have a team in which everyone is skilled in understanding context, nuance, and culture. Nor can you have people who spend the necessary time to dig deeper and figure out the context. Instead, you end up with a ruleset, and it has to be standardized so that non-experts can make judgments on this stuff in a relatively quick timeframe. That's why, about a month ago, there was a kerfuffle when Facebook's "hate speech rule book" was leaked and showed how it could lead to situations where "white men" were going to be protected.
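
To make that concrete, here's a deliberately oversimplified sketch, in Python, of what a standardized, context-free ruleset boils down to. The rules and phrases are invented for illustration -- nothing here reflects how Facebook's or Twitter's actual systems work -- but it shows why a snap, pattern-matching decision treats a victim's screenshot of abuse exactly the same as the abuse itself:

    # Deliberately naive sketch of a standardized, context-free moderation ruleset.
    # The banned phrases are made up for illustration; this is not any platform's
    # real system.
    BANNED_PHRASES = {
        "you deserve to die",      # stand-in for a threat
        "i will ruin your life",   # stand-in for another threat
    }

    def snap_review(post_text: str) -> str:
        """Return the verdict a rulebook-driven reviewer reaches in seconds:
        pattern matching only, with no way to ask *why* the words appear."""
        text = post_text.lower()
        if any(phrase in text for phrase in BANNED_PHRASES):
            return "remove"
        return "keep"

    # The rule fires identically on the abuser's message and on the victim's
    # screenshot/quote of that same message:
    original_abuse = "I will ruin your life."
    victims_post = 'Look what this guy just sent me: "I will ruin your life."'
    print(snap_review(original_abuse))   # remove
    print(snap_review(victims_post))     # remove -- the context is invisible to the rule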

And when you throw into this equation the potential of legal liability, a la Germany (and what a large group of people are pushing for in the US), things will get much, much worse. That's because when there's legal liability on the line, companies will be much faster to delete/suspend/ban, just to avoid the liability. And many people calling for such things will be impacted themselves. None of the people in the stories above could have reasonably expected to get banned by these platforms. But, when people demand that platforms "take responsibility" that's what's going to happen.

Again, this is not in any way to suggest that online platforms should be a free-for-all. That would be ridiculous and counterproductive. It would lead to everything being overrun by spam, in addition to abusive/harassing behavior. Instead, I think the real answer is that we need to stop putting the burden on platforms to make all the decisions and figure out alternative approaches. I've suggested in the past that one possible solution is turning the tools around: give end users much more granular control over how they ban, block, or silence content they don't want to see, rather than leaving it up to a crew of people who have to make snap decisions about who's at fault when people get angry online.
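
To be clear about what "turning the tools around" might look like, here's a rough, hypothetical sketch -- not any existing platform's API -- of user-side filtering, where each person's rules affect only what that person sees:

    # Hypothetical sketch of user-controlled filtering. The field names and the
    # post format are invented for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class UserFilters:
        muted_words: set = field(default_factory=set)
        blocked_accounts: set = field(default_factory=set)
        hide_new_accounts: bool = False   # e.g., hide accounts less than a week old

        def wants_to_see(self, post: dict) -> bool:
            if post["author"] in self.blocked_accounts:
                return False
            if self.hide_new_accounts and post.get("author_age_days", 9999) < 7:
                return False
            text = post["text"].lower()
            return not any(word in text for word in self.muted_words)

    # One user's choices hide posts from their own timeline without removing
    # anything for anyone else -- and without a reviewer making a snap call.
    my_filters = UserFilters(muted_words={"phrase i never want to see"},
                             blocked_accounts={"@serial_harasser"},
                             hide_new_accounts=True)
    timeline = [
        {"author": "@friend", "text": "nice weather today", "author_age_days": 2000},
        {"author": "@serial_harasser", "text": "more of the usual", "author_age_days": 3},
    ]
    visible = [p for p in timeline if my_filters.wants_to_see(p)]

The point isn't this particular design; it's that the filtering decision moves to the person affected, who has the context, instead of a stranger with a stopwatch.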

Of course, there are problems with my suggestion as well -- it could certainly accelerate the problem of self-contained bubbles of thought, and it could also result in plenty of incorrect blocking. But the larger point is that this isn't easy, and every single magic-bullet solution has serious consequences -- and often those consequences fall on the people who are facing the most abuse and harassment, rather than on those doing the abusing and harassing. So, yes, platforms need to do better. The three stories above are all ridiculous, and they ended up harming people who were highlighting harassing behavior. But continuing to rely on platforms and teams of people to weed out content someone deems "bad" is not a workable solution, and it's one that will only lead to more of these kinds of stories.

And, worst of all, the abusers and harassers know this and thrive on it. The guy who got Ken White's account banned gloated about it on Twitter. I'm sure the same was true of the folks who went after Oluo and likely "reported" her to Facebook. Any time you rely on the platform to be the arbiter, remember that the people who want to harass others quickly learn that they can use that as a tool for further harassment themselves.



Filed Under: abuse, free speech, harassment, intermediary liability, moderation, platforms, policing
Companies: facebook, twitter


Reader Comments



  1. Anonymous Coward, 9 Aug 2017 @ 10:48am

    From what I can tell, it's not even people who are doing the blocking/banning. It's a bot. Send enough reports from enough sources and you can get almost any account shut down for a little while.


  2. DanK (profile), 9 Aug 2017 @ 11:37am

    It's an incredibly complex problem

    I was discussing the Oluo situation with friends, and it is incredibly difficult to figure out how Facebook could have handled this properly. Put yourself in the shoes of a person tasked with reviewing content, and given only a few seconds per item to review. You are spending hours per day deciding "racist", "sex", "offensive", "acceptable"... Then Oluo's posts come up.

    You see the blatant racism. You see the death threats. You see the rape threats. Of course you'd mark it as offensive! The reviewers don't have the time to get the context that Oluo is posting to shame the original posters (exactly the same as the PopeHat situation). They just see the hateful messages and mark them bad.


  3. K`Tetch (profile), 9 Aug 2017 @ 11:45am

    Re: It's an incredibly complex problem

    Perhaps a tiered system then?

    First rank weeds out the obvious yes/no.
    then a second rank with more time to consider things better. Maybe a 3rd?

    Scan/look/consider in effect.


  4. K`Tetch (profile), 9 Aug 2017 @ 11:47am

    Re: Re: It's an incredibly complex problem

    and you can't always say everything's going quickly.

    I turned in one report July 27th; I finally got a response August 7th.


  5. Anonymous Coward, 9 Aug 2017 @ 11:54am

    What you understand doubleplusungood is that popehat is guilty of wrongthink and is thus an unprotected person. Facebook and twitter protections only apply to people, i.e. those who have goodthink. When your think is wrongthink, you're an unprotected, and if it's doublepluswrongthink, you can even be an unperson. Trust Big Brother.


  6. Michael Chermside (profile), 9 Aug 2017 @ 11:59am

    Legal System

    The legal systems of the world have evolved slowly over literally thousands of years -- with a great deal of cultural inertia but also managing to borrow ideas from each other and improve over time. Nearly all of them (from the Catholic Church's Canon law to the legal system in the US) incorporate a basic approach that works roughly like this:

    (1) both sides present their case

    (2) someone decides

    (3) there is an "appeals" process where one (or more) layers can review the decisions for fairness

    Maybe those companies (Facebook, Twitter, etc) trying to set up a review system should take inspiration from this deep historical source.


  7. Anonymous Coward, 9 Aug 2017 @ 12:02pm
    This comment has been flagged by the community.

    "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    HEY, MA SNICK: anyone gets buckets of crap for complaining about the harassment by fanboys here!

    **Just read through TODAY'S comments a couple pieces ago!**
    https://www.techdirt.com/articles/20170808/15450037961/techdirt-now-with-more-free-speech-reporting.shtml

    **I say ME and the others who complained there EXACTLY fit the topic of this piece.** You have taken no action in the 8 years I've been complaining here, that those who use words like THIS are the problem, NOT those of us on-topic and civil:

    "There are white people, and then there are ignorant motherfuckers like you...."

    http://www.techdirt.com/articles/20110621/16071614792/misconceptions-free-abound-why-do-brains-stop-zero.shtml#c1869

    But of course YOU, Michael Ma snick, HIRE that person to re-write here! Explain that in light of this piece.

    So, Techdirt: the "community standard" that I always exceed is to NOT make completely unprovoked, racist-tinged, vile, insulting, vulgar, off-topic one-liners. -- Oh, and Geigner never apologized, but instead tried to dodge with classic abuser tactic of making a deal: he'll stop if I don't raise the topic again. Just read a couple after that link, then try to tell ME I'm a "troll". Phooey on you kids. You're uncivil, indecent, and liars.

    It's NOT how said, it's WHAT. YOU are banning viewpoints.

    ---
    13th attempt starting from 11 Pacific! This topic seems locked down with each comment approved, another hidden censorship tactic here.


  8. Anonymous Coward, 9 Aug 2017 @ 12:05pm
    This comment has been flagged by the community.

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    Let in on stale topic!

    Made me notice this: "remember that he people who want to harass others quickly learn that they can use that as a tool for further harassment themselves." -- They can ALSO use a "report" or "flag" to harass. There's only ONE reason that's done here, and it's to reduce impact of some comments. When a site continually colludes with a faction and never punishes comments such as the one I link to, it's not due any favorable regard, to say the least.


  9. Thad, 9 Aug 2017 @ 12:08pm

    Re: Legal System

    And our legal system already lacks the staff to evaluate allegations quickly and fairly. That's *without* scaling it up to "anyone can accuse anyone of a violation, at no expense or personal risk."


  10. Anonymous Coward, 9 Aug 2017 @ 12:10pm

    Re: Legal System

    the legal system in the US…

    The legal system in the U.S. has a basic approach that starts like this:


  11. Anonymous Coward, 9 Aug 2017 @ 12:15pm

    Re:

    I too remember when I read 1984 in middle school.


  12. Anonymous Coward, 9 Aug 2017 @ 12:16pm

    Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    SovCit says what?


  13. Anonymous Coward, 9 Aug 2017 @ 12:47pm

    "That's why about a month ago, there was a kerfuffle when Facebook's "hate speech rule book" was leaked, and it showed how it could lead to situations where "white men" were going to be protected."

    White men *are* protected by virtue of the laws preventing discrimination against anyone for their sex or race, amongst other attributes. Fuck anyone who thinks discrimination is hateful but it's ok so long as the victim is white and male.


  14. Anonymous Coward, 9 Aug 2017 @ 12:49pm

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    "13th attempt starting from 11 Pacific! This topic seems locked down with each comment approved, another hidden censorship tactic here."

    lolwut? Paranoid much?


  15. Anonymous Coward, 9 Aug 2017 @ 12:53pm

    Re: Re: Legal System

    Pay a filing fee…

    Is everyone here still really, firmly opposed to “business method” patents? Yeah, I kinda vaguely comprehend that there's arguably a millennium and more of soi-disant prior art here… But that's just arguable…   …right?

    Look, even after eBay, business method patents are still the law.

    And this one is ON A COMPUTER. WITH A SOCIAL NETWORK. FOR DISPUTE RESOLUTION.


  16. Anonymous Coward, 9 Aug 2017 @ 1:00pm

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    jesus, you are unhinged or on some seriously potent drugs

    or maybe they're not potent enough based on your totally incomprehensible ranting


  17. AricTheRed, 9 Aug 2017 @ 1:05pm

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    Out Of The Blue?

    Is that You?


  18. Anonymous Coward, 9 Aug 2017 @ 1:56pm

    >Others have started using alternate spellings for “white people,” such as “wypipo,” “Y.P. Pull,” or “yt folkx” to evade being flagged by the platform activists have nicknamed “Racebook.”


  19. Stephen T. Stone (profile), 9 Aug 2017 @ 2:10pm

    Re: Re:

    Anything is possible.


  20. deadspatula (profile), 9 Aug 2017 @ 2:18pm

    Response to: White man are protected by Anonymous Coward on Aug 9th,2017 @ 12:47pm

    Interesting standpoint, but it misses the context. Facebook prioritized (and may still prioritize) protecting white males over any other gender or ethnic group. If there was a question of who was wrong, the white guy was right. Always. That's the problem. They chose a discriminatory policy that prioritized white males in an effort to speed up the process. It's not that white men were protected; it's that protecting white males was prioritized above protecting other groups.


  21. CharlesGrossman (profile), 9 Aug 2017 @ 2:49pm

    My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    Because I write under a pseudonym, my proposed solution is entirely hypocritical -- but I believe that non-anonymous accounts, which are verified to use the correct name of the real person, would solve 99% of this problem.


  22. Will, 9 Aug 2017 @ 3:17pm

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    Everyone, please don't feed the trolls.


  23. Thad, 9 Aug 2017 @ 3:23pm

    Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    So would magical unicorns, and a reliable system that can accurately determine whether or not someone is using a pseudonym, without significant false positives or false negatives or a staff the size of a mid-sized nation, is just as realistic.


  24. Scardinius erythrophthalmus electron, 9 Aug 2017 @ 3:29pm

    Have they tried just using the necessary hashtags yet?????


  25. stderric (profile), 9 Aug 2017 @ 4:24pm

    Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    So would magical unicorns

    There used to be plenty, but they've all been grabbed up by the golden-crypto-key press gangs.


  26. btr1701 (profile), 9 Aug 2017 @ 4:26pm

    Protection

    > Facebook's "hate speech rule book" was leaked, and it showed how it
    > could lead to situations where "white men" were going to be protected.

    You say that as if white men don't deserve the same protection as everyone else.


  27. btr1701 (profile), 9 Aug 2017 @ 4:33pm

    Re:

    > Others have started using alternate spellings for “white people,” such as
    > “wypipo,” to evade being flagged by the platform

    In my experience, that isn't why that term is used. Every time I've seen it used on Twitter, it's been in the context of a racial slur for whites.


  28. JJ, 9 Aug 2017 @ 4:39pm

    some "joke"

    Comments like that should be reserved for when there is a real threat. A huge number of people, me included, have lost all patience for racial "humor" targeting our identity.


  29. Anonymous Coward, 9 Aug 2017 @ 5:42pm

    Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    It is. He's discarded his old pseudonym and rebranded it as a sort of martyr label after spending years trolling its reputation away.

    And now he thinks no one knows who he is despite the same troll tactics.


  30. Anonymous Coward, 9 Aug 2017 @ 5:52pm

    Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    I don't see why. We haven't solved that problem irl either, and it's pretty much impossible to use a pseudonym there. Honestly, there are much more likely to be consequences doing it in person than doing it with your real name online, but that hasn't stopped anyone irl.

    Oh, a few might get tracked down, but anyone worried about that can just get an account under their real name just to post dumb stuff on and not include location information on it. After all, there are likely thousands of people with your name so it's not like the average racist(sexist, etc.) git could actually be tracked down by anyone.


  31. Anonymous Troll, 9 Aug 2017 @ 7:05pm

    Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    This is a great idea - as a prolific troll, finding people's personal information online when I want to ruin their life can be a lot of work. By forcing everyone to make that information public already, you're dramatically reducing my workload and massively increasing not just the number of people I can harass, but the ease with which I can do it.


  32. Toom1275 (profile), 9 Aug 2017 @ 7:10pm

    Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    Try resubmitting after an hour, not after less than a minute. Worked for me.

    Repeatedly posting the same comment over again is the hallmark of either a spambot, or somebody with the patience of one.


  33. Mike Masnick (profile), 9 Aug 2017 @ 11:39pm

    Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    Because I write under a pseudonym, my proposed solution is entirely hypocritical -- but I believe that non-anonymous accounts, which are verified to use the correct name of the real person, would solve 99% of this problem.

    We've actually discussed this before, and it's not true for a variety of reasons. First, Facebook already requires real names and there's a ton of abuse there. Second, multiple studies on the topic have shown that the "abuse" levels between anonymous and real-name accounts are really no different. Third, being anonymous has tremendous benefits that shouldn't be tossed out just because some people abuse it.


  34. PaulT (profile), 10 Aug 2017 @ 12:58am

    Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    "First, Facebook already requires real names and there's a ton of abuse there"

    Well, they *say* they do, but I don't think it's ever been enforced except in cases where they're using it as a reason to kick people off after abuse has happened. Unless something's changed recently, I don't believe they've ever pre-vetted anyone.


  35. Anonymous Coward, 10 Aug 2017 @ 2:11am

    Re: Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    Pre-vetting seems like it's awfully time-consuming. As far as I remember they do run an automated plausibility check on the given user name when you register. Don't really know, or maybe they have changed it?


  36. PaulT (profile), 10 Aug 2017 @ 2:28am

    Re: Re: Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only

    Well, that's kind of my point.

    In my Facebook feed I have as "friends" 3 dogs, one building, a number of completely made-up people (test accounts from when I used to work for a company that produced Facebook games) and a couple of businesses (from before Facebook introduced pages). There are also a few people I know using very obvious pseudonyms, and they've never had any issues despite being regular users. In fact, all of these accounts are still active despite them obviously not relating to a real name.

    So, since Facebook really don't do any active vetting of whether people are using their real names, it doesn't seem right to say that Facebook are already forcing people not to be anonymous when using it.


  37. Ninja (profile), 10 Aug 2017 @ 5:14am

    Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."

    "or somebody with the patience of one."

    lol at that.


  38. Bergman (profile), 10 Aug 2017 @ 6:25am

    Re: It's an incredibly complex problem

    There's a flaw in your reasoning, Dan.

    All of the posts that Oluo's account was banned for are posts that were reported to Facebook as being abusive...and Facebook declined to take action because they didn't violate any rules.

    So Oluo posted the abusive comments that, according to Facebook, didn't break any rules. And was banned for breaking the rules against threatening and abusing people.

    Do you see the problem yet?


  39. Bergman (profile), 10 Aug 2017 @ 6:29am

    Re: Response to: White man are protected by Anonymous Coward on Aug 9th,2017 @ 12:47pm

    No, the problem is that you don't get it.

    Facebook didn't prioritize protecting white men. They prioritized protecting those who were targeted for two or more categories that Facebook was watching for.

    Black men get equal protection with white men or asian women under that system. White drivers get no protection, the same way black drivers don't and women drivers don't -- but black women drivers do get protected.

    White men got protected because gender and race are two protected categories under their system. Quit being so racist, it's not about white people.


  40. Bergman (profile), 10 Aug 2017 @ 6:37am

    Re: Protection

    For thousands of years, pretty much all of human history, there have been categories of people who it is okay to discriminate against. It's socially acceptable to hate them and even seen as right-thinking and moral to do so.

    Pick any race you care to name, any religion, any wealth level, any gender, and you will find that they have been the target of bigotry, etc. Jews, blacks, asians, native Americans, none of them are unique in this.

    White people have never been exempt, you can find quite a few places around the world where being white gets you abused and discriminated against.

    And in our Western societies, the Social Justice Warriors have decided that being white makes you exempt from having human rights, or deserving to be treated fairly. If any other race is proud of their heritage, it's good and pure -- but heaven help the white kid who is proud of his heritage, because he will be told that being proud of his heritage makes him evil.


  41. PaulT (profile), 10 Aug 2017 @ 7:00am

    Re: Re: Protection

    I was with you until your last paragraph, largely because it's full of crap. I've never felt discriminated against, but I sure as hell share a race and gender with some ranting assholes who can't accept that they can't treat everyone else as inferior any more and have to hold back on their abuse of them. Nobody's ever removed a human right from me because I'm white, no matter what you claim.

    "he will be told that being proud of his heritage makes him evil."

    Define "proud of his heritage". There might be something in the definition which gives you a clue. Generally speaking, there's nothing wrong with being proud of your heritage, but there does seem to be a correlation between certain type of "pride" and white nationalism - that correlation might be something you're inadvertently referencing.

    For example - being an Englishman, there's generally nothing wrong with people being proud to be English. However, the white nationalists have tended to throw around the St George flag as a symbol of their violent racial hatred, and this has led to it being tarnished somewhat as a symbol. I've never seen anyone being told that they can't be proud to be English/British, but it does tend to send a certain type of message if a person chooses the St George flag instead of the Union Flag to broadcast that.

    It's a shame, but the reason it's objectionable to some is not because people are being told they can't be "proud of their heritage". It's because people flying that flag have beaten and murdered people in its name.



Follow Techdirt
Essential Reading
Techdirt Deals
Report this ad  |  Hide Techdirt ads
Techdirt Insider Discord

The latest chatter on the Techdirt Insider Discord channel...

Loading...
Recent Stories

This site, like most other sites on the web, uses cookies. For more information, see our privacy policy. Got it
Close

Email This

This feature is only available to registered users. Register or sign in to use it.