Now It's Harvard Business Review Getting Section 230 Very, Very Wrong

from the c'mon-guys dept

It would be nice if we could have just a single week in which some major "respected" publication did the slightest bit of fact checking on its wacky articles about Section 230. It turns out that's not happening this week. Harvard Business Review has now posted an article titled "It's Time to Update Section 230," written by two professors -- Michael Smith of Carnegie Mellon and Marshall Van Alstyne of Boston University. For what it's worth, I've actually been impressed with the work and research of both of these professors in the past -- even though Smith runs an MPAA-funded program that publishes studies about the internet and piracy, his work has usually been careful and thorough. Van Alstyne, meanwhile, has published some great work on problems with intellectual property, and kindly came and spoke at an event we helped run.

Unfortunately, this piece for HBR does not do either Smith or Van Alstyne any favors -- mainly because it gets so much wrong. It starts out, like so many of these pieces, with some mythmaking: the claim that Section 230 was passed out of "naive" techno-optimism. That is simply wrong, even if it sounds like a good story. To its credit, the piece then highlights some of the good that social media has created (the Arab Spring, #MeToo, #BlackLivesMatter, and the ice bucket challenge). But then, of course, it pivots to all the "bad" stuff on the internet, and says that "Section 230 didn't anticipate" how to deal with that.

So, let's cut in and point out that this is wrong. Section 230's authors have made it abundantly clear, over and over again, that they absolutely did anticipate this very question. Indeed, the very history of Section 230 is the history of web platforms trying to figure out how to deal with the ever-changing, ever-evolving challenge of "bad" stuff online. And the way 230 does that is by allowing websites to constantly experiment, innovate, and adapt without fear of liability. Without that, you create a much worse situation -- one in which any false move by a website could lead to liability and ridiculously costly litigation. Section 230 has enabled a wide variety of experiments and innovations in content moderation, all aimed at keeping platforms functional for users, advertisers, and more. But this article ignores all of that and pretends otherwise. That does a total disservice to readers, and presents a false narrative.

The article goes through a basic recap of how Section 230 works -- and concludes:

These provisions are good — except for the parts that are bad.

Amusingly, that argument applies to lots of content moderation questions as well: keep all the stuff that's good, except for the parts that are bad. And it's that very point that highlights why Section 230 is actually so important. Figuring out what's "good" and what's "bad" is inherently subjective, and that's part of the genius of Section 230: it allows companies to experiment with different alternatives in figuring out how best to deal with things for their own communities, rather than trying to comply with some impossible standard.

They then admit that there are other, non-legal incentives that have helped keep websites moderating in a reasonable way, though they imply that this no longer works (they don't explain why or how):

When you grant platforms complete legal immunity for the content that their users post, you also reduce their incentives to proactively remove content causing social harm. Back in 1996, that didn’t seem to matter much: Even if social media platforms had minimal legal incentives to police their platform from harmful content, it seemed logical that they would do so out of economic self-interest, to protect their valuable brands.

Either way, from there, the article goes completely off the rails in ways that are kind of embarrassing for two widely known professors. For example, the following statement is entirely unsupported. It is disconnected from reality. Hilariously, it is the very kind of "misinformation" that these two professors seem so upset about.

We’ve also learned that platforms don’t have strong enough incentives to protect their brands by policing their platforms. Indeed, we’ve discovered that providing socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

I know that this is out there in the air as part of the common narrative, but it's bullshit. Pretty much every company of any size lives in fear of stories of "bad" content getting through on its platform and causing some real-world harm. It's why companies have invested so much in hiring thousands of moderators, and in trying to find any kind of technological solution that will help alongside the ever-growing ranks of human moderators (many of whom end up traumatized by having to view so much "bad" content). The idea that Facebook's business isn't harmed by its failures on this front, or that the "socially harmful content" is "valuable" to Facebook, is simply not supported by reality. There are huge teams of people within Facebook pushing back against that entire narrative. Facebook also didn't set up the massive (and massively expensive) Oversight Board out of the goodness of its heart.

What Smith and Van Alstyne apparently fail to consider is that this is not a problem of Facebook lacking the right incentives. It's a problem of it being impossible to do this well at scale, no matter what incentives are in place -- combined with the fact that many of the "problems" they're upset about are actually societal problems, which governments blame on social media to hide their own failings on education, social safety nets, criminal justice reform, healthcare, and more.

This paragraph just kills me:

Today there is a growing consensus that we need to update Section 230. Facebook’s Mark Zuckerberg even told Congress that it “may make sense for there to be liability for some of the content,” and that Facebook “would benefit from clearer guidance from elected officials.” Elected officials, on both sides of the aisle, seem to agree: As a candidate, Joe Biden told the New York Times that Section 230 should be “revoked, immediately,” and Senator Lindsey Graham (R-SC) has said, “Section 230 as it exists today has got to give.” In an interview with NPR, the former Congressman Christopher Cox (R-CA), a co-author of Section 230, has called for rewriting Section 230, because “the original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things.”

First off, Facebook is embracing reforms to Section 230 because it can deal with them and it knows the upstart competitors it faces cannot. This is not a reason to support 230 reform. It's a reason to be very, very worried about it. And yes, there is bipartisan anger at 230, but they leave out that it's for the exact opposite reasons. Democrats are mad that social media doesn't take down more constitutionally protected speech. Republicans are mad that websites are removing constitutionally protected conspiracy theories and nonsense. The paragraph in HBR implies, incorrectly, that there's some agreement.

As for the Cox quote, incredibly, it was taken from an interview a few years ago, in which Cox appeared to have a single reform suggestion: clarifying that the definition of an information content provider covers companies that are actively involved in unlawful activity carried out by their users. And, notably (again, skipped over by Smith and Van Alstyne), that interview occurred just after FOSTA was passed by Congress -- and it's now widely recognized that FOSTA has been a complete disaster for the internet, and has put tons of people in harm's way. That seems kinda relevant if we're talking about how to update the law again.

But Smith and Van Alstyne don't even mention it!

Instead, they fall back on tired, wrong, or debunked arguments.

Legal scholars have put forward a variety of proposals, almost all of which adopt a carrot-and-stick approach, by tying a platform’s safe-harbor protections to its use of reasonable content-moderation policies. A representative example appeared in 2017, in a Fordham Law Review article by Danielle Citron and Benjamin Wittes, who argued that Section 230 should be revised with the following (highlighted) changes: “No provider or user of an interactive computer service that takes reasonable steps to address known unlawful uses of its services that create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.”

Of course, as we've explained, this is a solution that only a law professor who has never had to run an actual website could love. The problems with the "takes reasonable steps" approach are myriad. For one, it would mean that websites would constantly need to go to court to defend their content moderation practices -- a costly and ridiculous experience, especially when you have to defend them to people who don't understand the intricacies and trade-offs of content moderation. I saw this firsthand just a couple of months ago, watching a print-on-demand website lose a court fight because the plaintiff insisted that any mistake in its content moderation practices proved its efforts weren't "reasonable."

At best, such a setup would mean that all content moderation would become standardized, following exactly whatever plan was chosen by the first few companies to win such lawsuits. You'd wipe out pretty much any attempt at creating new, better, more innovative content moderation solutions, because the only way you could do that is if you were willing to spend a million dollars defending it in court. And that would mean that the biggest companies (once again) would control everything. In such a world, Facebook could likely win such a case, screwing over tons of competitors, and then everyone else would have to adopt Facebook's model (hell, I wouldn't put it past Facebook to offer to "rent" its content moderation system out to others). The rich get richer. The powerful get more powerful. And everyone else gets screwed.

The duty-of-care standard is a good one, and the courts are moving toward it by holding social media platforms responsible for how their sites are designed and implemented. Following any reasonable duty-of-care standard, Facebook should have known it needed to take stronger steps against user-generated content advocating the violent overthrow of the government.

This is also garbage, and taken entirely out of context. It doesn't mention just how much content there is to moderate. Facebook has billions of users, posting tons of stuff online every day. This supposes that Facebook can automatically identify "content advocating the violent overthrow of the government." But it does nothing whatsoever to define what that content actually looks like, how to find it, or how to explain the rules to every content moderator around the globe in a manner that treats content fairly and equitably. It doesn't take context into account. Is it "advocating the violent overthrow of the government" when someone tells a joke hoping President Trump dies? Is it failing a duty-of-care standard for someone to suggest that... an authoritarian dictatorship should be overthrown? There are so many variables and so many issues here that simply declaring it an obvious failure of the duty of care to allow "content advocating the violent overthrow of a government" just shows how ridiculously naive and ignorant both Smith and Van Alstyne are about the actual issues, trade-offs, and challenges of content moderation.

They then try to address these kinds of arguments by setting up a very misleading strawman to knock down:

Not everybody believes in the need for reform. Some defenders of Section 230 argue that as currently written it enables innovation, because startups and other small businesses might not have sufficient resources to protect their sites with the same level of care that, say, Google can. But the duty-of-care standard would address this concern, because what is considered “reasonable” protection for a billion-dollar corporation will naturally be very different from what is considered reasonable for a small startup.

Yeah, but you only find that out after you're dead, spending a million dollars defending it in court.

And then... things go from merely bad and uninformed to actively spreading misinformation:

Another critique of Section 230 reform is that it will stifle free speech. But that’s simply not true: All of the duty-of-care proposals on the table today address content that is not protected by the First Amendment. There are no First Amendment protections for speech that induces harm (yelling “fire” in a crowded theater), encourages illegal activity (advocating for the violent overthrow of the government), or that propagates certain types of obscenity (child sex-abuse material).

Yes, that's right. They trotted out the fire-in-a-crowded-theater trope, which is already wrong, and then they apply it incorrectly. It's flat out wrong to say that there is no 1st Amendment protection for speech that induces harm. Much such content is absolutely protected under the 1st Amendment. The actual exceptions to the 1st Amendment in this area (which, you know, maybe someone at HBR should have looked up) are for "incitement to imminent lawless action" and "fighting words," both of which are very, very, very narrowly defined.

As for child sex-abuse material, that's got nothing to do with Section 230. CSAM already violates federal criminal law, and Section 230 has always exempted federal criminal law.

In other words, this paragraph is straight up misinformation. The very kind of misinformation that Smith and Van Alstyne seem to think websites should be liable for hosting.

Technology firms should embrace this change. As social and commercial interaction increasingly move online, social-media platforms’ low incentives to curb harm are reducing public trust, making it harder for society to benefit from these services, and harder for legitimate online businesses to profit from providing them.

This is, again, totally ignorant. Platforms have embraced this change, because the incentives already exist. It's why every major website has a "trust & safety" department that hires tons of people and does everything it can to properly moderate the site. Because getting it wrong leads to tons of criticism from users, from the media, and from politicians -- not to mention advertisers and customers.

Most legitimate platforms have little to fear from a restoration of the duty of care.

So long as you can afford the time, resources, and attention required to handle a massive trial to determine if you met the "duty of care." So long as you can do that. And, I mean, it's not like we don't have examples of how this plays out in other arenas. I already talked about what I saw in court this summer in the trademark field (not covered by Section 230). And we have similar examples of what happens in the copyright space as well (not covered by Section 230). Perhaps Smith and Van Alstyne should go talk to the CEO of Veoh... oh wait, they can't, because the company is dead, even though it won its lawsuit on this very issue a decade ago.

A duty of care standard only makes sense if you have no clue how any of this works in practice. It's an academic solution that has no connection to reality.

Most online businesses also act responsibly, and so long as they exercise a reasonable duty of care, they are unlikely to face a risk of litigation.

I mean, this is just completely disconnected from reality as we've seen. That trial I witnessed in June is one of multiple cases brought by the same law firm against online marketplace providers, more or less trying to set up a business suing companies for failing to moderate trademark-related content to some arbitrary standard.

What good actors have to gain is a clearer delineation between their services and those of bad actors.

They already have that.

A duty of care standard will only hold accountable those who fail to meet the duty.

Except for all the companies it kills in litigation.

This article is embarrassingly bad. HBR, at the very least, should never have allowed the blatantly false information about how the 1st Amendment works, though all that really serves to do is discredit both Smith and Van Alstyne.

I don't understand what makes otherwise reasonable people -- who clearly have zero experience with the complexities of social media content moderation -- assume they've found the magic solution. There isn't a magic solution. And your solution will make things worse. Pretty much all of them do.


Filed Under: 1st amendment, content moderation, duty of care, fire in a crowded theater, incentives, marshall van alstyne, michael smith, section 230


Reader Comments



  1. That One Guy (profile), 13 Aug 2021 @ 10:02am

    One day we shall get to two digits...

    Well, time to reset the 'Days since supposed legally experienced person makes wildly incorrect and/or dishonest statements about 230' timer back to zero I see.

    I've said it before and I'll say it again, if 230 really was this terrible law causing all this harm then you'd think it would be easy to present an honest argument against it, and yet to date none have appeared even when people who really should know better decide to jump on the 'let's attack 230' bandwagon.

  2. mickey, 13 Aug 2021 @ 10:09am

    error

    I went to the Harvard Business Review website to read the article but got a 411 error.

  3. Anonymous Coward, 13 Aug 2021 @ 10:27am [flagged by the community]

    Remember, Section 230 has given websites a vast chance to innovate and the freedom to properly moderate. This is why sites like KiwiFarms don’t exist anymore -- sites where the owner supports and participates in targeted harassment campaigns, started by its users, that have led to their victims committing suicide.

    Oh, wait…

  4. Anonymous Coward, 13 Aug 2021 @ 10:29am

    Automatic Gatekeeping

    Anyone who wants to proclaim that FaceTwitGramApp need to "do more" must go through Content Moderator training and spend two full weeks on the front line.

    Then and only then, will I listen to you blather about how obvious it is what they should be doing.

  5. Toom1275 (profile), 13 Aug 2021 @ 10:37am

    Arguments against Section 230 that don't lie about it:

    0

  6. That One Guy (profile), 13 Aug 2021 @ 10:39am

    Re:

    I love how you used an example of something that wouldn't be covered under 230 in order to attack 230, well done on shooting your own argument in the back right at the starting line.

  7. Anonymous Coward, 13 Aug 2021 @ 10:42am

    It would be interesting to see how many of these expert professors misreading the law are tenured (you know, the thing that prevents you from being fired for dragging your profession and institution into the mud) vs. non-tenured academics... One allows you to have your incorrect opinions bought off and never questioned by your employer; the other is an honest job where you're accountable for your statements.

  8. Koby (profile), 13 Aug 2021 @ 10:46am [flagged by the community]

    Additional Appreciation

    As for child sex-abuse material, that's got nothing to do with Section 230. CSAM content already violates federal criminal law and Section 230 has always exempted federal criminal law.

    I appreciate you pointing out that CSAM is not an opinion, but is a criminal activity, and isn't even something protected by Section 230 or the 1st amendment.

    -Getting censored proves that your opinion is the strongest.

  9. Anonymous Coward, 13 Aug 2021 @ 10:48am [flagged by the community]

    Re: Re:

    KiwiFarms has been around and doing shit that’s not protected under Section 230 for years. They’re a well-known site with a blood-stained reputation that precedes them. They’ve faced zero tangible consequences. This points to something about our current legal framework being vastly broken.

  10. Samuel Abram (profile), 13 Aug 2021 @ 10:53am

    Re: Re: Re:

    The Pirate Bay (and other piracy websites) still exists, yet Section 230 exempts IP law from its protections. If you're mad about something that's already illegal, don't blame Section 230 for protecting it when it clearly isn't doing so.

  11. That One Guy (profile), 13 Aug 2021 @ 10:56am

    Re: Re: Re:

    That may be but that 'something' is not 230 and it's incorrect if not outright dishonest to imply that it is.

  12. Samuel Abram (profile), 13 Aug 2021 @ 10:57am

    Re: Additional Appreciation

    Here's the difference, though: CSAM is banned by the government. Expressing your Nazi opinions is not, though I can kick you out of my house if you were to express them there.

  13. techflaws (profile), 13 Aug 2021 @ 10:57am

    Re:

    Oh, wait…

    Oh, wait indeed, genius.

  14. Anonymous Coward, 13 Aug 2021 @ 11:09am

    Re: Additional Appreciation

    Just because you believe the election was stolen doesn't make your belief true.

  15. James Burkhardt (profile), 13 Aug 2021 @ 11:38am

    Re: Additional Appreciation

    I appreciate you pointing out that CSAM is not an opinion, but is a criminal activity, and isn't even something protected by Section 230 or the 1st amendment.

    You say that as if Techdirt hasn't made the point repeatedly that federal crimes committed by the owner of a website are not protected by Section 230, and therefore Section 230 provides no protection against crimes committed by the owner of a website. What was your point?

    -Getting censored proves that your opinion is the strongest.

    By your definition of censor, people love to censor the opinion that pedophilia is fine. Guess that's the best opinion?

  16. ECA (profile), 13 Aug 2021 @ 11:42am

    Corp vs corp

    Lawyers are supposed to help those that wish to pursue legal things.
    This is more a corp vs. corp thing in the long run.
    But for some strange reason our gov. thinks (or is being paid) to fight this battle.

    Its strange that after giving corps human rights (it didn't happen that long ago), we stopped regulating them. We are letting them run wild.
    Who is trying to rescind a law that all corps already have? LLC is proof of that. Where even the owners and CEO of a corp are not responsible. But there are ways to TAKE a corp from another. You can sue the other corp and force the owners to quit. Blackmail by court.

    The internet forums and chats have been asked to curtail hate speech and a few other things, by many of the governments. But the sites also try to protect themselves from legal disputes with the OTHER CORPS. Kim Dotcom(?) got in the middle of all of this and found out the hard way: DON'T DEAL with the corps.
    With all of this, and FOSTA, 'it's for the children', and the key name-calling (communism and socialism) just to CLOUD what is happening.
    It comes down to this: the old rich want some of what the new rich have. The bill collectors want what has been built by others, only to have more bills to collect, and NEVER develop anything else.

  17. Koby (profile), 13 Aug 2021 @ 11:44am [flagged by the community]

    Re: Re: Additional Appreciation

    Fear not. Death threats from nazis are also not political opinions, and are not covered as protected speech either. I'm sure Maz will get around to explaining this in one of his articles in the near future.

    -Getting censored proves that your opinion is the strongest.

  18. sumgai (profile), 13 Aug 2021 @ 11:52am

    These provisions are good — except for the parts that are bad.

    "Amusingly, that argument applies to lots of OTHER LAWS as well."

    T,FTFY

  19. Derek Kerton (profile), 13 Aug 2021 @ 12:16pm

    Try Harder...Charles Harder

    "Yeah, but you only find that out after your dead, spending a million dollars defending it in court."

    Oh, come on, Mike. Quit being so dramatic. What do you know about a small company facing death because of some frivolous lawsuit trying to stifle the websites right to free speech by ruining it with legal costs and distraction?

    Could never happen. The Law and the Courts are perfect, and could never by abused in such a way.

    Try Harder. I'm Gawking at your Hulking Hoagie hyperlinks in Teal. This is like getting a Shiv-a prison knife- SLAPPed across your genuine articles.

  20. Stephen T. Stone (profile), 13 Aug 2021 @ 12:39pm

    Getting censored proves that your opinion is the strongest.

    Does this maxim apply to Critical Race Theory, Pride flags, and any other speech or expression conservatives have sought to ban over the years? Or does it only apply to conservative views (you know the ones)?

  21. That One Guy (profile), 13 Aug 2021 @ 12:56pm

    Keepy lying to yourself if that's what it takes

    -Getting censored proves that your opinion is the strongest.

    -Repeatedly lying about being 'censored' because people keep showing you the door of their private property proves that you're not just a person no-one wants to be around but a dishonest one who refuses to own their own words and deeds and instead blames others.

  22. That One Guy (profile), 13 Aug 2021 @ 1:00pm

    Re:

    I do so love how even after people have shoved their face in how stupid their little throwaway line is, and how it's led to them supporting terrorist organizations (among other things), they still trot it out. I guess Koby can be lumped in with Woody as someone who just loves being publicly humiliated.

  23. Anonymous Coward, 13 Aug 2021 @ 1:27pm

    This sounds like the political arguments about poverty: the poor can just work harder, because they're not being incentivized to work.

    It's an argument made by people who have never experienced it, who don't think the government should be helping out, and who insist (no matter what the existing evidence says) that all attempts to help have made the problem worse.

  24. Koby (profile), 13 Aug 2021 @ 1:29pm [flagged by the community]

    Re: Re:

    And death threats from terrorist organizations are also not political opinions, therefore making it not protected speech. The tagline stands strong for as long as it remains an accurate predictor of censorship behavior.

    -Getting censored proves that your opinion is the strongest.

  25. Anonymous Coward, 13 Aug 2021 @ 1:42pm

    Re: Re: Re:

    That would fall under what passes for harassment laws in the US, or wherever the KiwiFarms servers are located.

    Conflating the right to associate with harassment is more than just downright dishonest; it's just plain wrong.

  26. Anonymous Coward, 13 Aug 2021 @ 1:43pm

    Re: Automatic Gatekeeping

    3 months minimum, starting with moderating an IRC chatroom.

    If they still persist after that...

  27. Anonymous Coward, 13 Aug 2021 @ 1:47pm

    Re: Re: Re:

    Good thing none of us are the government then, since the 1A expressly forbids the government from censoring your vile opinions.

    Good to know that Critical Race Theory is still one of the strongest opinions, though. Your FBI handler should get a commendation for forcing you to use that tagline.

  28. Anonymous Coward, 13 Aug 2021 @ 1:50pm

    Re:

    That depends on who's actually paying them.

    If it's the university, check who's been disbursing donations to the university. If not, check who's actually paying them to write disinfo and propaganda.

  29. Anonymous Coward, 13 Aug 2021 @ 1:53pm

    Re: Try Harder...Charles Harder

    Oh, he knows plenty.

    Maybe you should check to see who funded his "COVID and Tech" Section. It's someone who fucking shares your opinions.

  30. Anonymous Coward, 13 Aug 2021 @ 1:56pm

    • "These provisions are good — except for the parts that are bad."

    Amusingly, that argument applies to lots of content moderation questions as well.*

    Amusingly, that argument applies to bloody everything.

    When you grant platforms complete legal immunity for the content that their users post,

    Yeah, about that: No, that's the First Amendment.

    We’ve also learned that platforms don’t have strong enough incentives to protect their brands by policing their platforms. Indeed, we’ve discovered that providing socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

    Lol, wait to you get the misinformation tag applied to your ranting. Maybe you should be shadowbanned or suspended?

    What Smith and Van Alstyne apparently fail to consider is...
    the world is full of people who are the same as they've always been, and they use these internet communications platforms.

    So, who's at fault for failing to moderate reality for the last 10 ky or so?

  31. Stephen T. Stone (profile), 13 Aug 2021 @ 2:02pm

    death threats from terrorist organizations are also not political opinions

    Expressions of their political ideologies are, though. And when those are censored, the logic of your little pissant maxim says their opinions immediately become the strongest.

    Why do you support terrorist ideologies, Koby?

  32. That One Guy (profile), 13 Aug 2021 @ 2:08pm

    By all means keep digging

    Cribbing from Stephen here... And if that were the only thing ISIS posted you might have a point, but it isn't, so you don't -- leaving you right back at cheering on ISIS, critical race theory, homosexual and trans rights (and, on the other side of the aisle, bigots of all stripes, though I doubt you have a problem with them), and a whole slew of other things.

    -Repeatedly lying about being 'censored' because people keep showing you the door of their private property proves that you're not just a person no-one wants to be around but a dishonest one who refuses to own their own words and deeds and instead blames others.

  33. Anonymous Coward, 13 Aug 2021 @ 2:13pm

    Re: Re: Re: Additional Appreciation

    Why do you continue to advocate for pedophiles and terrorists?

  34. Anonymous Coward, 13 Aug 2021 @ 5:35pm

    Re: Re: Re: Re:

    Your Russian handler should get a commendation for forcing you to use that tagline.

    FTFY.

  35. Anonymous Coward, 13 Aug 2021 @ 5:42pm

    Re: Additional Appreciation

    Getting censored proves that your opinion is the strongest.

    Why do you refuse to tell us what opinions are being censored? I constantly ask you to tell us what conservative opinions are being moderated on social media, but you REFUSE to answer.

    Basically, that tells me one of two things:

    You are full of shit and are getting paid to troll,

    OR - since you refuse to answer,

    The alternative, and more likely scenario, is that you will not admit that you are a Nazi, racist, homophobic, bigoted, xenophobic asshole and are constantly pissed that people keep kicking your ass out of their social media platforms and you feel that you have the strongest opinions that are being censored.

    So what is it Koby, tell us what conservative opinions are being censored, or admit that you are a Nazi racist asshole who is into kiddie porn.

  36. Anonymous Coward, 13 Aug 2021 @ 6:34pm

    Re: Re: Re:

    You keep making these attempts, John Smith, and they're just as weaksauce as your Herrick gambit all those years ago. It's just barely a step up from that press release and police investigation that you keep swearing up and down is like totes going to happen even before the pandemic.

  37. Toom1275 (profile), 13 Aug 2021 @ 8:04pm

    Re: Re: Re:

    The tagline stands strong for as long as it remains an accurate predictor of censorship behavior.

    Translation: Koby's tagline is therefore weak.

  38. Toom1275 (profile), 13 Aug 2021 @ 8:07pm

    Re: Re: Additional Appreciation

    Why do you refuse to tell us what opinions are being censored?

    Koby's the only one here demanding the censorship of speech he doesn't like ("We don't tolerate Nazis here" etc.), and his baseless arguments are the weakest in the entire thread, so...

  39. Anonymous Coward, 14 Aug 2021 @ 8:42am

    Re: Re: Re: Re: Re:

    Oh snap.

  40. Darkness Of Course (profile), 14 Aug 2021 @ 7:37pm

    Fire in the theater

    The Atlantic has a reasonable one here:

    https://www.theatlantic.com/national/archive/2012/11/its-time-to-stop-using-the-fire-in-a-crowded-theater-quote/264449/

    Which references a Popehat discourse on why the phrase is not only wrong re: 1st Amendment rights, but definitely wrong as Holmes was all about censoring, not freedom of speech:

    https://www.popehat.com/2012/09/19/three-generations-of-a-hackneyed-apologia-for-censorship-are-enough/

  41. Anonymous Coward, 15 Aug 2021 @ 6:08am

    Re: Re: Try Harder...Charles Harder

    Woosh.

  42. Anonymous Coward, 15 Aug 2021 @ 10:23am

    Re: Keepy lying to yourself if that's what it takes

    I guess repeatedly being censored, shit on, and abused for centuries makes Black and First Nations peoples the best, strongest peoples ever, in the American context at least.

    Perhaps they should be in charge of this hemisphere.

  43. That One Guy (profile), 15 Aug 2021 @ 1:39pm

    Re: Re: Keepy lying to yourself if that's what it takes

    Yeah, it's nice of Koby to, when not supporting ISIS, make clear that they wholeheartedly support and believe in the superiority of various minority groups and/or non-white races.

    Here I'd been thinking that they were cheering on the scum of the internet, the trolls and bigots of all flavors, when it turns out that in fact they were/are huge fans of the superiority of non-heterosexuality and non-white races and were just too shy to say so out loud.

  44. Anonymous Coward, 15 Aug 2021 @ 6:07pm

    Re: Every accusation is a confession

    Who did you harass into suicide Jhon?

  45. Anonymous Coward, 15 Aug 2021 @ 6:09pm

    Re: Re: Re:

    So what did you do that got you doxxed there, bro?

  46. Anonymous Coward, 15 Aug 2021 @ 6:11pm

    Re: Additional Appreciation

    So how goes the child pornographer and terrorist apology business?

  47. Anonymous Coward, 15 Aug 2021 @ 6:16pm

    Re: Re:

    It’s been obvious for a while that Kboi, Blue Balls, and Jhon boi all get off on being repeatedly publicly humiliated. Though Blue Balls does like to use stupid taglines and “Maz”, so maybe he just outed himself again.

  48. Anonymous Coward, 15 Aug 2021 @ 7:43pm

    Re: Re: Re: Re:

    John's only out these days is to try his best to find obscure places that try to out-4chan 4chan. That he has to go out of his way to locate these assholes is... rather telling why he knows these things like the back of his hand.

  49. Marshall Van Alstyne, 17 Aug 2021 @ 9:27am

    A solution with a better critique

    Greetings Mike, I’m a fan of your writings so when they include a critique, I pay attention. Thanks also for acknowledging our prior work and also even prior praise (https://www.techdirt.com/articles/20090219/0248373834.shtml).

    None of your criticisms, however, address the fundamental question: how do you hold a platform accountable for misinformation that it amplifies? The problem with S230 is that by providing (almost) absolute immunity to being an accessory to a crime, it “accessorizes” a lot more crime. The infodemic of antivaxx misinformation is a case in point. Platforms don’t produce this content but they have given it reach and influence and they have monetized the engagement that has attended it.

    Paraphrasing your conclusion, you mostly assert the downside of changing S230 outweighs the upside. Still, you don’t assert that there’s no problem.

    As tech (or econ or legal) designers, we should always ask the question “can we do better?” Is there a superior design that accomplishes these mutually conflicting goals?

    So let me poke a hole in one of your best arguments, that it’s “impossible to do this well at scale”. We agree that checking every single message just isn’t feasible. But, that doesn’t mean no better design exists. Let me propose one:

    If we recognize the “infodemic” as a pollution problem, then we take statistical samples just like we sample factory air for the presence of sulphur dioxide or water for the presence of DDT. We don’t measure every cubic centimeter of effluent as that’s just not practical. A doctor doesn’t check your cholesterol by checking all your blood, she/he takes a sample.

    The beauty here is that, by a property of the central limit theorem from statistics, we can be extremely confident about how much pollution afflicts a given platform. Do we want 90% confidence? 95% confidence? 99% confidence? We just take bigger samples to be sure. Even if folks disagree on the falseness or harm of a specific claim, people will agree on average. One study found 95% agreement among fact-checking organizations (https://science.sciencemag.org/content/359/6380/1146). In fact, in computer science, it’s possible to create highly accurate assessments with much lower agreement among deciders than this.
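
    As a rough illustration of the sampling idea, here is a minimal sketch in Python, assuming a simple random sample of posts and a binary harmful/not-harmful judgment on each; the function names and the example numbers are hypothetical, not taken from the working paper:

        import math

        # z-scores for common two-sided confidence levels
        Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

        def prevalence_interval(flagged, sampled, confidence=0.95):
            """Normal-approximation confidence interval for the share of
            harmful posts, given `flagged` harmful posts out of `sampled`."""
            p = flagged / sampled
            se = math.sqrt(p * (1 - p) / sampled)  # standard error of a proportion
            z = Z[confidence]
            return max(0.0, p - z * se), min(1.0, p + z * se)

        def sample_size_needed(margin, confidence=0.95):
            """Worst-case (p = 0.5) sample size for a target margin of error."""
            z = Z[confidence]
            return math.ceil((z / (2 * margin)) ** 2)

        # Hypothetical example: 300 of 10,000 sampled posts judged harmful
        lo, hi = prevalence_interval(300, 10_000)
        print(f"95% CI for prevalence: {lo:.2%} to {hi:.2%}")  # ~2.67% to 3.33%

        # Posts to sample to pin prevalence within +/- 0.5% at 95% confidence
        print(sample_size_needed(0.005))  # 38416

    The notable property is that the required sample size depends on the desired precision and confidence, not on the total volume of posts, which is what makes an audit-by-sampling design tractable at platform scale.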

    Under a modified S230, with a duty of care, we just hold platforms accountable for pollution levels above a reasonable threshold. Facebook, for example, already reports such things as incidence of cancer misinformation on its platform (https://www.facebook.com/AMJPublicHealth/posts/3316836535095688). Now, we just hold them publicly accountable. This isn’t impossible at all -- we just need to connect the existing dots.

    We’ve tried to think carefully about such issues and avoid polemics. I have a partial working paper “Platforms, Free Speech & The Problem of Fake News” with more nuance (https://www.dropbox.com/s/ypphlhw43efnslj/Platforms%2C%20Free%20Speech%20%26%20the%20Problem%20of%20Fake%20News%20v0.3%20-%20dist.pdf?dl=0). Honestly, I have not shared it widely yet outside friends and family as there is much more to be done, but this hue and cry prompts me to disclose it earlier than I’d planned. Your further thoughts are welcome and invited.

    To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

  50. Toom1275 (profile), 17 Aug 2021 @ 11:32am

    Re: A solution with a better critique

    Arguments against Section 230 that don't lie about it:

    Yep... still #0

  51. That One Guy (profile), 17 Aug 2021 @ 12:17pm

    What you're having a problem with is the First Amendment, not 230. 230 does not 'allow' moderation or 'allow' a platform to host 'misinformation'; the First Amendment does. All 230 does, ultimately, is make it so that platforms can afford to exercise that right and not be sued into the ground because people don't like how they're using it.

    Don't like people spreading misinformation? Then go after them, not the platform hosting them, and let a judge explain to you why you can't do that.

  52. Toom1275 (profile), 18 Aug 2021 @ 11:59pm

    Re: A solution with a better critique

    To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

    To be convinced by Mike's critique, you need to (a) understand the subject well enough to comprehend the rebuttal you've been given, and (b) be acting in good faith. With all due respect, you clearly fall short on both counts.

  53. Mike Masnick (profile), 22 Aug 2021 @ 11:59pm

    Re: A solution with a better critique

    None of your criticisms, however, address the fundamental question: how do you hold a platform accountable for misinformation that it amplifies?

    I've addressed that numerous times. The problem with YOUR piece is that it assumes, totally incorrectly, that the only way to hold a platform accountable is... by law. It's not. Users migrating away from garbage dumps and advertisers refusing to advertise next to conspiracy theories have been shown to be much more effective at pressuring companies to curb their behavior.

    Even more to the point, holding a platform accountable for misinformation is a recipe for disaster. How do you define misinformation? How do you define it in a way that doesn't make mistakes? How do you deal with the information that is inevitably not caught? All you're doing is creating a massive liability minefield. End result? LESS EXPERIMENTATION, LESS INNOVATION, and LESS ABILITY TO ADAPT TO BAD ACTORS. Why would you want to do that?!?

    The infodemic of antivaxx misinformation is a case in point. Platforms don’t produce this content but they have given it reach and influence and they have monetized the engagement that has attended it.

    Fox News, OAN, and Newsmax have given just as much air to those things. Indeed, Yochai Benkler's research shows that the info doesn't go viral on Facebook until after it airs on cable news.

    And YOU haven't answered the more pressing question: what is illegal about antivax misinfo? We agree that it's problematic. But (contrary to what you claim) it's pretty much all constitutionally protected speech. There is no underlying cause of action. "Facebook shouldn't share this" is not a legal argument.

    If we recognize the “infodemic” as a pollution problem, then we take statistical samples just like we sample factory air for the presence of sulphur dioxide or water for the presence of DDT. We don’t measure every cubic centimeter of effluent as that’s just not practical. A doctor doesn’t check your cholesterol by checking all your blood, she/he takes a sample.

    Marshall, this all sounds neat and sciency, but PROTECTED SPEECH IS NOT POLLUTION. And that's where your entire argument breaks down. You can't ignore the fact that we're talking about speech.

    Under a modified S230, with a duty of care, we just hold platforms accountable for pollution levels above a reasonable threshold.

    Hosting "too much" constitutionally protected speech, above whatever threshold you pick, cannot be made to violate the law. That fundamentally sinks your entire argument. You really ought to have spoken to at least someone who understands the 1st Amendment.

    To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

    You leave out that YOUR suggestion is easily proven as (c) a MUCH worse design with SIGNIFICANT downsides you ignore or don't understand. I made that argument, and I stand by it because it's correct. If it were only (a) and (b) as you lay out then you've done a classic "we must do something, this is something, we will do it," ignoring that your solution will make things significantly worse (as I DID show).

    I agree that there are problems, but you're barking up the wrong tree for a solution.
