House Democrats Decide To Hand Facebook The Internet By Unconstitutionally Taking Section 230 Away From Algorithms

from the this-is-not-a-good-idea dept

We've been pointing out for a while now that mucking with Section 230 as an attempt to "deal" with how much you hate Facebook is a massive mistake. It's also exactly what Facebook wants, because as it stands right now, Facebook is actually losing users on its core product, and the company has realized that burdening competitors with regulations -- regulations that Facebook can easily handle with its massive bank account -- is a great way to stop competition and lock in Facebook's dominant position.

And yet, for reasons that still make no sense, regulators (and much of the media) seem to believe that Section 230 is the only regulation to tweak to get at Facebook. This is both wrong and shortsighted, but alas, we now have a bunch of House Democrats getting behind a new bill that claims to be narrowly targeted to just remove Section 230 from algorithmically promoted content. The full bill, the "Justice Against Malicious Algorithms Act of 2021," is poorly targeted, poorly drafted, and shows a near total lack of understanding of how basically anything on the internet works. I believe that it's well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities. It's an embarrassing release from four House members of the Energy & Commerce Committee who should know better (and at least 3 of the 4 have done good work in the past on important tech-related bills): Frank Pallone, Mike Doyle, Jan Schakowsky, and Anna Eshoo.

The key part of the bill is that it removes Section 230 for "personalized recommendations." It would insert the following "exception" into 230.

(f) PERSONALIZED RECOMMENDATION OF INFORMATION PROVIDED BY ANOTHER INFORMATION CONTENT PROVIDER.—

‘‘(1) IN GENERAL.—Subsection (c)(1) does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider if—

‘(A) such provider of such service—
‘‘(i) knew or should have known such provider of such service was making a personalized recommendation of such information; or
‘‘(ii) recklessly made a personalized recommendation of such information; and
‘‘(B) such recommendation materially contributed to a physical or severe emotional injury to any person.

So, let's start with the basics. I know there's been a push lately among some -- including the whistleblower Frances Haugen -- to argue that the real problem with Facebook is "the algorithm" and how it recommends "bad stuff." The evidence to support this claim is actually incredibly thin, but we'll leave that aside for now. At its heart, "the algorithm" is simply a set of recommendations, and recommendations are opinions, and opinions are... protected expression under the 1st Amendment.

Exempting algorithms from Section 230 cannot change this underlying fact about the 1st Amendment. All it means is that rather than getting a quick dismissal of the lawsuit, you'll have a long, drawn out, expensive lawsuit on your hands before ultimately finding out that of course algorithmic recommendations are protected by the 1st Amendment. For much more on the problem of regulating "amplification," I highly, highly recommend reading Daphne Keller's essay on the challenges of regulating amplification (or listen to the podcast I did with Daphne about this topic). It's unfortunately clear that none of the drafters of this bill read Daphne's piece (or, if they did, they simply ignored it, which is worse). Supporters of this bill will argue that in simply removing 230 from amplification/algorithms, this is a "content neutral" approach. Yet as Daphne's paper detailed, that does not get you away from the serious Constitutional problems.

Another way to think about this: this is effectively telling social media companies that they can be sued for their editorial choices of which things to promote. If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal, you might quickly recognize the 1st Amendment problems here. I could easily argue that the NY Times' constant articles misrepresenting Section 230 subject me to "severe emotional injury." But of course, any such lawsuit would get tossed out as ridiculous. Does flipping through a magazine and seeing advertisements of products I can't afford subject me to severe emotional injury? How is that different than looking at Instagram and feeling bad that my life doesn't seem as cool as some lame influencer?

Furthermore, this focus on "recommendations" is... kinda weird. It ignores all the reasons why recommendations are often quite good. I know that some people have a kneejerk reaction against such recommendations but nearly every recommendation engine I use makes my life much better. Nearly every story I write on Techdirt I find via Twitter recommending tweets to me or Google News recommending stories to me -- both based on things I've clicked on in the past. And both are (at times surprisingly) good at surfacing stories I would be unlikely to find otherwise, and doing so quickly and efficiently.

Yet, under this plan, all such services would be at significant risk of incredibly expensive litigation over and over and over again. The sensible thing for most companies to do in such a situation is to make sure that only bland, uncontroversial stuff shows up in your feed. This would be a disaster for marginalized communities. Black Lives Matter? That can't be allowed as it might make people upset. Stories about bigotry, or about civil rights violations? Too "controversial" and might contribute to emotional injury.

The backers of this bill also argue that the bill is narrowly tailored and won't destroy the underlying Section 230, but that too is incorrect. As Cathy Gellis just pointed out, removing the procedural benefits of Section 230 takes away all the benefits. Section 230 helps get you out of these cases much more quickly. But under this bill, everyone will now add a claim under this clause that the "recommendation" caused "emotional injury," and you'll have to litigate whether or not you're even covered by Section 230. That means no more procedural benefit of 230.

The bill has a "carve out" for "smaller" companies, but again gets all that wrong. It seems clear that they either did not read, or did not understand, this excellent paper by Eric Goldman and Jess Miers about the important nuances of regulating internet services by size. In this case, the "carve out" is for sites that have 5 million or fewer "unique monthly visitors or users for not fewer than 3 of the preceding 12 months." Leaving aside the rather important point that there really is no agreed upon notion of what a "unique monthly visitor" actually is (seriously, every stats package will give you different results, and now every site will have an incentive to use a stats package that lies and reports lower numbers to get beneath the threshold), that number is horrifically low.
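
As a concrete illustration of that measurement problem, here's a minimal sketch in Python (the log data and both counting rules are entirely made up for illustration, not drawn from the bill or from any real analytics package) showing how two perfectly defensible definitions of a "unique monthly visitor" count the same month of traffic differently:

    # Hypothetical month of traffic: (cookie_id, ip_address) pairs.
    # "cookie-A" is one person on two networks; None is a visitor blocking cookies.
    visits = [
        ("cookie-A", "1.2.3.4"),
        ("cookie-A", "5.6.7.8"),   # same person, phone vs. home wifi
        (None,       "9.9.9.9"),   # cookies blocked entirely
        ("cookie-B", "1.2.3.4"),   # second person on a shared household IP
    ]

    # Rule 1: a "unique visitor" is a distinct cookie (cookieless visits ignored).
    uniques_by_cookie = len({cookie for cookie, ip in visits if cookie is not None})

    # Rule 2: a "unique visitor" is a distinct IP address.
    uniques_by_ip = len({ip for cookie, ip in visits})

    print(uniques_by_cookie)  # 2
    print(uniques_by_ip)      # 3

Neither rule is wrong, and a site hovering near the 5 million line has every reason to pick whichever one reports the smaller number.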

Earlier this year, I suggested a test suite of websites that any internet regulation bill should be run against, highlighting that bills like these impact way more than Facebook and Google. And lots and lots of the sites I mention get well past 5 million unique monthly visitors.

So under this bill, a company like Yelp would face real risk in recommending restaurants to you. If you got food poisoning, that would be an injury you could now sue Yelp over. Did Netflix recommend a movie to you that made you sad? Emotional injury!

As Berin Szoka notes in a Twitter thread about the bill, this bill from Democrats actually gives Republican critics of 230 exactly what they wanted: a tool to launch a million "SLAM" suits -- Strategic Lawsuits Against Moderation. And, as such, he notes that this bill would massively help those who use the internet to spread baseless conspiracy theories, because THEY WOULD NOW GET TO SUE WEBSITES for their moderation choices. This is just one example of how badly the drafters of the bill misunderstand Section 230 and how it functionally works. It's especially embarrassing that Rep. Eshoo would be a co-sponsor of a bill like this, since this bill would mean a lawsuit free-for-all for companies in her district.

Another example of the wacky drafting in the bill is the "scienter" bit. Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful. So in a bill like this, you'd expect that the scienter would require the platforms to know that the information they were recommending was harmful. That's the only standard that would even make sense (though it would still be constitutionally problematic). However, that's not how it works in the bill. Instead, the scienter is... that the platform knows it recommends stuff. That's it. In the quote above, the line that matters is:

such provider of a service knew or should have known such provider of a service was making a personalized recommendation of such information

In other words, the scienter here... is that you knew you were making personalized recommendations. Not that the content was bad. Not that it was dangerous. Just that you were recommending stuff.

Another drafting oddity is the definition of a "personalized recommendation." It just says it's a personalized recommendation if it uses a personalized algorithm. And the definition of "personalized algorithm" is this bit of nonsense:

The term 'personalized algorithm' means an algorithm that relies on information specific to an individual.

"Information specific to an individual" could include things like... location. I've seen some people suggest that Yelp's recommendations wouldn't be covered by this law because they're "generalized" recommendations, not "personal ones" but if Yelp is recommending stuff to me based on my location (kinda necessary) then that's now information specific to me, and thus no more 230 for the recommendation.

It also seems like this would be hell for spam filters. I train my spam filter, so the algorithm it uses is specific to me and thus personalized. But I'm pretty sure that under this bill a spammer whose emails are put into a spam filter can now sue, claiming injury. That'll be fun.
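
To make that concrete, here's a toy sketch (entirely my own construction; no real mail provider works this crudely) of why a user-trained spam filter fits the bill's definition: its decisions turn entirely on information specific to one individual, namely what that user has flagged before.

    # Toy per-user spam filter: "personalized" because its behavior depends on
    # what this particular user has previously marked as spam.
    class PersonalSpamFilter:
        def __init__(self):
            self.spam_words = set()   # learned only from this user's own flags

        def mark_as_spam(self, message):
            self.spam_words.update(message.lower().split())

        def is_spam(self, message):
            # Crude rule: any overlap with previously flagged words counts as spam.
            return bool(set(message.lower().split()) & self.spam_words)

    my_filter = PersonalSpamFilter()
    my_filter.mark_as_spam("cheap pills limited offer")
    print(my_filter.is_spam("limited time offer just for you"))  # True
    print(my_filter.is_spam("meeting notes attached"))           # False

Filtering a message into the spam folder is, in effect, a personalized recommendation not to read it -- exactly the kind of routine, useful sorting this bill sweeps in.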

Meanwhile, if this passes, Facebook will be laughing. The services that have successfully taken a bite out of Facebook's userbase over the last few years have tended to be the ones with a better algorithm for recommending things: like TikTok. The one Achilles heel that Facebook has -- its recommendations aren't as good as the new upstarts' -- gets protected by this bill.

Almost nothing here makes any sense at all. It misunderstands the problems. It misdiagnoses the solution. It totally misunderstands Section 230. It creates massive downside consequences for competitors to Facebook and to users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can't see a single positive thing that this bill does. Why the hell is any politician supporting this garbage?


Filed Under: algorithms, anna eshoo, frank pallone, intermediary liability, jan schakowsky, mike doyle, news feeds, personalized recommendations, recommendations, section 230
Companies: facebook, yelp


Reader Comments

  1. Anonymous Coward, 14 Oct 2021 @ 11:03am

    70% of facebook accounts haven't been updated or touched in more than 2 years.

    Facebook is losing actual users hand over fist and is way smaller than they've been claiming to investors... all very naughty behaviour.

  2. Anonymous Coward, 14 Oct 2021 @ 11:08am

    One person can launch THOUSANDS of simultaneous lawsuits for $500 each.

    Facebook would need to independently lawyer up for each one.

    Or settle.

  3. Anonymous Coward, 14 Oct 2021 @ 11:20am

    The term 'personalized algorithm' means an algorithm that relies on information specific to an individual.

    How does a search engine avoid risk, since its purpose is to make recommendations using a person's specific search terms?

  4. anonymouse, 14 Oct 2021 @ 11:43am

    Lazy

    It reads like a law written by lobbyists.

  5. I. Kinnock, 14 Oct 2021 @ 11:48am (flagged by the community)

    with a kinnock-uous leader

    testing the blocking

  6. I. Kinnock, 14 Oct 2021 @ 11:49am (flagged by the community)

    Another day, another MM panic fearing for corporate tyranny.

    Has the hilarious admission that he willingly remains within a propaganda bubble; if not thereby given his topics to re-write in GOOGLE's favor, then at best letting GOOGLE confirm his biases and never allow alternate views.

  7. I. Kinnock, 14 Oct 2021 @ 11:49am (flagged by the community)

    Re: Another day, another MM panic fearing for corporate tyranny.

    Going by that you hold on to only a couple dozen fanboys, you should try other sources. Quit re-writing elitist, Ivy League, NYT/LATimes/WaPo propaganda. -- But no, your quote and contradict technique pretending to "analyze" other views isn't adequate, either. -- You need substance and originality, but of course don't dare stray far from the safety of your silly little neo-liberal clique.

  8. I. Kinnock, 14 Oct 2021 @ 11:50am (flagged by the community)

    Re: Another day, another MM panic fearing for corporate tyranny.

    You pretend to offer a discussion forum here, but then disadvantage / discriminate against any other than full-blown corporatist views. (BTW: your fanboys sniping at generic corporations while letting YOU put out explicitly pro-corporate shilling is one of my favorite aspects of Techdirt. That dissonance points up, doesn't cover, your corporatism.)

  9. I. Kinnock, 14 Oct 2021 @ 11:50am (flagged by the community)

    Re: Another day, another MM panic fearing for corporate tyranny.

    You also fear "conspiracy theories" getting notice, because... Well, YOU TELL US: WHY? What reason have you for this free-floating contextless fear of the mere thought of alternate explanations? What concern of yours is affected? What justifies corporations arbitrarily suppressing quite popular views? Do you just dismiss any view which isn't approved by The Establishment / globalists? 'SPLAIN why you think "conspiracy theories" are necessarily wrong and bad, you nasty little globalist PUNK.

  10. Koby (profile), 14 Oct 2021 @ 11:51am (flagged by the community)

    Smarter Than I Thought

    If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal

    That's okay because these news outlets are publishers, and not platforms. The editors get the choice about what to promote, but they are also liable for what they publish. I have to give the drafters some credit here -- it looks as if they understand at least some of the publisher/platform problem. The 1996 CDA was a compromise bill, so perhaps there is room for the two sides to come together.

    Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful.

    Just like how the cigarette industry knew it was selling a carcinogenic product, even though they went along begrudgingly with the warning labels, and was found liable for causing harm. Social media: the tobacco product of the internet.

  11. Anonymous Coward, 14 Oct 2021 @ 11:51am

    Thanks congress. Since online ads are 'personal recommendations' made by some algorithm and since online ads cause me 'severe emotional injury', I will soon be able to sue the online ad industry out of existence.

  12. I. Kinnock, 14 Oct 2021 @ 11:51am (flagged by the community)

    Re: Another day, another MM panic fearing for corporate tyranny.

    And of course your main purpose here as always is to CONTINUE CORPORATE CENSORSHIP against We The People merely wanting to put out ORDINARY views.

  13. I. Kinnock, 14 Oct 2021 @ 11:53am (flagged by the community)

    Re: with a kinnock-uous leader

    Shields are down today! Bet thought I wouldn't be back for a while since in yesterday.

    But it's an exception to the near-total blocking of last several months, esp on MM pieces.

  14. That One Guy (profile), 14 Oct 2021 @ 12:07pm

    I believe that it's well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities.

    Which if anything just makes it worse than if they'd proposed it with malice, because they could have had their intentions match their actions but didn't.

    Whether you punch someone in the face with the best of intentions because you were in a hurry and didn't bother to check whether it actually would help or you did so maliciously you still punched someone in the face and they still have to deal with that.

    They have proposed a bill that ignores the advice and expertise of people who have knowledge in the field and one that stands to do enormous damage to smaller platforms and entrench the current top ones, until and unless they pull the bill and admit that it's a terrible idea they should be treated no differently than if they'd proposed it maliciously and be raked across the coals for it just the same.

  15. Anonymous Coward, 14 Oct 2021 @ 12:10pm

    JAMAA-king Me Crazy

    I don't think they understand how much content is curated and suggested by machine. This would have effects on everything from email contact lists to grocery store flyers.

  16. That One Guy (profile), 14 Oct 2021 @ 12:14pm

    Re:

    Uhh, no? Pretty sure anyone who tried that stunt would very quickly find themselves benchslapped and told to consolidate their claims.

  17. That One Guy (profile), 14 Oct 2021 @ 12:19pm

    Re: JAMAA-king Me Crazy

    Yup, say goodbye to personalized(and therefore useful and relevant) recommendations and hello to the equivalent of hitting 'show random' on platforms everywhere.

  18. Anonymous Coward, 14 Oct 2021 @ 12:29pm

    Re:

    You'd need to be on Zuckerberg's wealth level to personally launch thousands of suits for $500 each, especially if FB decides "nah we don't wanna settle, let's tussle." And collectively, each person is also liable to run into FB deciding they can outlast the individual litigant.

  19. ECA (profile), 14 Oct 2021 @ 12:44pm

    this is way out there.

    So lets ask about Algorithms?
    Which ones?
    Suggested friends? based on the info you place in your account?
    Your school, your work, your home town, where you live now?
    I never fill that crap in. and HOPE no one else does.
    On FB you can block just about anyone, including Some adverts.
    So, most of this is based on the idea that YOU dont know how to use the site and BLOCK OR REPORT someone for picking on you?

    Others Algorithms.
    Amazon, and Tons of Sale sites use them base on what you Buy and What you have looked at on the site. ALL the major sale sites from amazon to 'Whats the name of the restaurant?' are SHARING your data. Even amazon is a FRONT for millions of other sites for a % of the sales. Even walmart does it.
    So, reading this

    "does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider "

    So, info of a sale or where you were looking at a site, isnt protected when its referred to another Service? to show you were shopping, looking around for ? Condoms?
    Which would embarrass you All to hell when the next site you goto, pops up all these adverts for Condoms?

    Anyone got a definition of being an ADULT? Or is this something thats supposed to protect our kids, but ends up treating us as Idiot adults?

    Can we extend this, and carry it to OTHER services? Like Cable TV? Like Roku?
    Isnt there a LAW about laws NOT being for an individual group or person?
    How Thick Can we spread this butter across the Whole advert system?
    How many conglomerates are a series of interlinked businesses? Even Macy's is part of this system. 4-5 levels of sales: as something doesn't sell at one set of stores you pass it down and write it off (wow, what a way to kill the tax system). And you are getting adverts from each of those stores in your area.

    "information provided through such service by another information content provider"

    It affects the 3rd party, the one that intercepts the information, then uses it. Seems not to affect the primary site that gathered the info.

    BE parts?
    "knew or should have known."

    "recklessly made a personalized recommendation of such information; and
    ‘‘such recommendation materially contributed to a physical or severe emotional injury to any person."

    Algorithms DONT MAKE PERSONALIZED ANYTHING. They look at the data and say
    ' Wow, this person is looking at allot of porn' Equals 'I should show them more porn'.
    'Wow this person was born ?" equals 'I should Post to everyone in that town where this person is'.
    'Wow, you work at blank' Equals 'I should contact all those people and let them know you are using this or that service'
    'Wow, you drink allot of alcohol' equals 'I should tell everyone what you drink and that you like it allot'

    Computers are better at assumptions NOT analytical analysis.

  20. Anonymous Coward, 14 Oct 2021 @ 12:47pm

    Re: Re: Another day, another basement

    They say the real terror is a boot stomping on a face for ever. But I think it's blueballs getting rejected by Techdirt's spam filter until the sun goes supernova.

  21. Anonymous Coward, 14 Oct 2021 @ 12:53pm

    Re: Re: Re: Another day, another basement

    Also forever, as the sun is not going to do that.

  22. Anonymous Coward, 14 Oct 2021 @ 12:54pm

    Re: Smarter Than I Thought

    Social media: the tobacco product of the internet.

    The comments being censored are the strongest

    If the digital forum becomes large enough it becomes a public forum.

    Funny how the talking points your russian handlers give you seem to change regularly.

    It's like each of your previous <gotcha> talking points has been so thoroughly debunked, over and over and over again, that you have to keep changing it up so as to not sound like the complete idiot that most of us see you as.

    Anyway, remember that time you thought Facebook could use §230 to dismiss a lawsuit against Facebook's own speech? That shows everybody here how little you know about Section 230, the 1st Amendment and internet moderation in general.

  23. Strawb (profile), 14 Oct 2021 @ 12:54pm

    Re: Smarter Than I Thought

    That's okay because these news outlets are publishers, and not platforms. The editors get the choice about what to promote, but they are also liable for what they publish. I have to give the drafters some credit here -- it looks as if they understand at least some of the publisher/platform problem.

    Given that that problem is completely made up and doesn't exist in section 230, that's not something they should get credit for.

  24. Anonymous Coward, 14 Oct 2021 @ 1:12pm

    A service could base recommendations on towns or cities so it's not personalised to one user, or on part of a state (east Texas, west Texas, etc.), but it's a disaster of a bill because it enables anyone to sue over an item of news, or a video.
    Say a young person got shown a video about police brutality or someone being attacked; they could sue for trauma.
    Of course small startups can't afford even a few legal actions while Facebook
    has almost unlimited resources. This would be really bad for TikTok, as it's based on showing every user different videos based on what videos they watch. The people who wrote this bill don't understand how Section 230 is absolutely vital to free speech and the survival of smaller websites that have a minority audience, e.g. Asian Americans, LGBT groups.

  25. ECA (profile), 14 Oct 2021 @ 1:13pm

    Re: Smarter Than I Thought

    Add to that.
    Plastic corps knowing that MOST of the plastics will never degrade before our great grand children die.
    Or PTFE Used as a Nonstick material in Tons of things. But has a half life longer then MOST Radioactive materials.
    https://www.youtube.com/watch?v=9W74aeuqsiU&t=3s
    How about the right to have an opinion, over reporting the news? That you dont have to tell the truth or just the facts you know, but can add additional idiocy.
    When the thought of capitalism is based on the consumer having the ability to NOT use a service they dont like, and HAS A CHOICE, ISNT a fact.
    Where the Supply of a service is based on the idea that A' Service Is being supplied auxiliary to the Main/original one. And that NEW service is a Sat signal with a 3-7 second delay. ITS NOT COMPARABLE.
    When the wages of the top exec's is MORE then paying off the stocks or even giving dividends to those that Bought the stocks.

    A good share of the system is broken, and our Gov. is 1/2 the problem. AND we are the Gov., supposedly.

  26. Anonymous Coward, 14 Oct 2021 @ 1:17pm (flagged by the community)

    The harm would have to be internet-related, i.e., defamation.

    BTW thanks for censoring me. It's noticed in Washington. You're helping my side more than you could ever imagine remember they can see the post being made from this end, censor.

  27. That Anonymous Coward (profile), 14 Oct 2021 @ 1:31pm

    lightbulb moment

    That's why the country is so fucked...
    Your leaders "think" they understand things & no one has the courage to tell them they are full of shit.

  28. Anonymous Coward, 14 Oct 2021 @ 2:43pm

    Re: Re: Another day, another MM panic fearing for corporate tyra

    Sir, this is an Arby's.

  29. That One Guy (profile), 14 Oct 2021 @ 2:56pm

    Re:

    It's even worse than that as even when people do tell them they're full of shit far too often the response is to 'stand up to the experts' and act as though the people with knowledge in the field have no idea what they're talking about unless they are agreeing with the politician.

  30. Anonymous Coward, 14 Oct 2021 @ 3:50pm

    Re: Re:

    Makes one wonder why they even bother asking for said experts if they're just gonna ignore them because they dare say the issue is more nuanced and complex than "This is good and this is bad".

    I'm willing to give them the benefit of the doubt that they drafted this bill with good intentions, but when experts are yelling that this is not the right way to go about it and they plow ahead regardless, well, as the saying goes: "The road to hell is paved with good intentions."

  31. Fizzlepop Berrytwist, 14 Oct 2021 @ 4:39pm

    Re: Re:

    Problem is, there will be a lot of lawsuits. Sure, they have a lot of money, but they will get a lot of lawsuits.
    Facebook wants 230 gone, but soon, they'll realize the truth.

  32. Fizzlepop Berrytwist, 14 Oct 2021 @ 4:40pm

    So, when should we expect the vote??

  33. Fizzlepop Berrytwist, 14 Oct 2021 @ 4:41pm

    Re:

    Or a thousand people.

  34. That One Guy (profile), 14 Oct 2021 @ 4:53pm

    Re: Re: Re:

    It's probably a mix of honestly thinking that they're right and being surprised when the experts tell them that no, they most certainly are not, and dishonestly aiming to exploit the experts, either boasting about how even the experts agree with them or derogatorily dismissing the 'know-it-alls who clearly don't have any real learnin' ' if the experts don't agree with them.

  35. That One Guy (profile), 14 Oct 2021 @ 5:02pm

    Re: Re: Re:

    The truth is that they can handle way more lawsuits than their competitors and they'll be in an even better position when those competitors fold under the legal avalanche and they're the only viable option left and don't have to worry about any others springing up to challenge them.

    Dealing with a bunch of SLAM's might be a pain but gutting the industry you're in and ensuring that you're the only viable option is priceless, and I'm sure Facebook will be happy to make that trade.

  36. Fizzlepop Berrytwist, 14 Oct 2021 @ 5:16pm

    Re: Re: Re: Re:

    Dealing with a bunch of SLAM's might be a pain but gutting the industry you're in and ensuring that you're the only viable option is priceless, and I'm sure Facebook will be happy to make that trade.

    But, once those competitors fold, they'll end up being the lone target for SLAM's.

  37. Anonymous Coward, 14 Oct 2021 @ 5:17pm

    Re: Re:

    Unless they're the lone one.

  38. Fizzlepop Berrytwist, 14 Oct 2021 @ 5:18pm

    Re:

    Exactly.

  39. ECA (profile), 14 Oct 2021 @ 5:33pm

    Re:

    any time there is controversy.
    Look for a Low period, or busy period, and no one is paying attention.
    HOLIDAYS.

  40. Anonymous Coward, 14 Oct 2021 @ 5:51pm

    How likely is the bill to pass?

  41. Anonymous Coward, 14 Oct 2021 @ 5:55pm

    Re: Re:

    Good things to watch on.

    I'll keep those three in mind.

  42. Anonymous Coward, 14 Oct 2021 @ 6:12pm

    Re: Re: Re:

    Is it likely to pass?

  43. Anonymous Coward, 14 Oct 2021 @ 7:45pm

    Re: Re: Re: Re:

    Nope.
    We'll have to wait and see, though.

  44. Anonymous Coward, 14 Oct 2021 @ 9:17pm

    But at its heart, "the algorithm" is simply a set of recommendations, and recommendations are opinions and opinions are... protected expression under the 1st Amendment.

    This reasoning doesn't add up for me. "The algorithm" (assuming we are speaking of a news feed "recommendation algorithm") is not the recommendation. The algorithm produces recommendations (in a sense), but also does considerably more than that. Specifically, it makes decisions about who does and doesn't see a particular piece of content, and also about how many people see it. "The algorithm" determines which ideas get popular currency, and which are ignored.

    I can't speak for anyone else who wants to see news feed algorithms regulated, but for me it has nothing to do with the content being recommended, and everything to do with the power to determine what content is popularized. The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

    Even to the extent that algorithms produce recommendations (which again, is only a part of what algorithms do), saying that recommendations are protected opinions also doesn't make sense. When a piece of content is recommended on my news feed, whose opinion is it? If the algorithm produced the recommendation, surely the most natural answer is that the opinion belongs to the algorithm. Which is nonsense; algorithms don't have beliefs or opinions, they have inputs and outputs.

    Even granting the nonsensical idea that an algorithm is expressing an opinion, saying that opinion is protected by the first amendment is equally nonsensical. The first amendment applies to speech by natural persons, not algorithms. There is no conceivable reason why an algorithm's freedom of speech needs to be protected, and certainly no reason why the authors of the first amendment would have intended it that way.

    The recommendations in a news feed have similar status to copyright in photos taken by a monkey. A monkey cannot obtain copyright in a photo because copyrights are created for human benefit, and there is no policy benefit to granting ownership to non-humans. Similarly, an algorithm is not entitled to first amendment protection because such protection is intended to protect the freedom of expression of humans, and there is no policy benefit to granting it to algorithms.

    I'll reluctantly concede that corporate speech is protected, but even if the recommendations contained in a news feed can be construed as protected opinion (a stretch for me, but I'll grant it for the sake of argument), what's protected is the expression of that opinion, not how those opinions are produced. A natural assumption behind protecting opinion is that opinions are "reasoned beliefs". They are protected because allowing for a variety of beliefs and reasoning is beneficial to society.

    A company that chooses to express recommendations wholesale via a newsfeed is not expressing reasoned beliefs. It is making available the results of an algorithm. The algorithm itself is neither recommendation nor opinion, and I see no reason why the first amendment should apply to it.


    Beyond all that ... why should Section 230 have anything at all to do with algorithms? Excluding algorithmic recommendations from Section 230 seems pointless to me. Surely they are already excluded from Section 230, on the basis that they are generated by the company, not its users.

    As I understand it Section 230 protects companies from liability over user content. But, news feeds are not user content. News feed recommendations are produced by companies, not users. And, as per the roommates.com decision, companies are already liable for content that they produce. So when companies "express" the recommendations produced by news feed algorithms, they are already liable for those recommendations. Carving algorithms out from Section 230 doesn't change a damn thing as far as I can tell.

    I suppose it might make companies liable for content generated by bots, which I guess could be a problem (it makes spam filtering problematic), but that's pretty distant from either the intent behind the legislation or Mike's handwringing over it.

  45. Anonymous Coward, 14 Oct 2021 @ 9:35pm

    Another Really Inept Moderation Attempt

    If they had anyone who could explain actual ARIMA Methods, they would still lack the training (6+ years of university maths) and possibly the wit to grasp the modelling techniques implemented in these algorithms. As usual, legislation this ignorantly designed and deployed will fail its intent and enact any number of strange and undesirable unintended consequences. Yay, Democrats! Just when I had a remote hope of a rescue from GOP insanity, you go full retard.

  46. Anonymous Coward, 14 Oct 2021 @ 10:04pm

    Re:

    Your posts showed up. Therefore, you're not being censored.

  47. That Anonymous Coward (profile), 14 Oct 2021 @ 10:24pm

    Re: Re: Re:

    "give them the benefit of the doubt that they drafted this bill with good intentions"

    It most likely was ghost written by someone who put money in a PAC for them.

    I'd really love to bring back horsewhipping the liars...
    One we'd need a lot of whips to get caught up, but one has to wonder if them seeing that if you lie we'll whip you might change some of them...
    mental images of some members of Congress fibbing about little things because its cheaper than paying the professional they normally get whipped by

  48. That One Guy (profile), 14 Oct 2021 @ 11:21pm

    Re:

    I can't speak for anyone else who wants to see news feed algorithms regulated, but for me it has nothing to do with the content being recommended, and everything to do with the power to determine what content is popularized. The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

    Giving your opinion is absolutely protected activity, whether you're talking to one person or one million, doing it directly or through a system you run. That aside, I would be very leery of opening the can of worms that is regulating the ability to influence others, and I can explain why it's a terrible idea in two words: 'fake news'. Open that can and it's not a question of whether someone you vehemently disagree with will get and use the power that it would grant, but how quickly.

    Similarly, an algorithm is not entitled to first amendment protection because such protection is intended to protect the freedom of expression of humans, and there is no policy benefit to granting it to algorithms.

    Who do you think creates and tweaks algorithms? Because I'm pretty sure we haven't quite reached the point of digital sentience where computers are doing things entirely on their own. Humans are the ones coding the algorithms and deciding how they treat the content they are tasked to handle; saying that the algorithm and its output don't deserve first amendment protection is saying that the humans that run it don't.

  49. That One Guy (profile), 14 Oct 2021 @ 11:24pm

    Re: Re: Re: Re: Re:

    Sure but at that point they'll have significantly more power and money, more than enough to either take the hits or buy off a few politicians to tweak the law and give them a loophole to exploit.

  50. Strawb (profile), 14 Oct 2021 @ 11:34pm

    Re:

    A recommendation algorithm is an automated way for a company to say "We think you'll like these things based on what you've picked/searched for/watched/listened to/etc. in the past."

    Moreover, a quick Google search tells me that computer code counts as protected speech based on Bernstein v. Justice Department.

  51. PaulT (profile), 14 Oct 2021 @ 11:52pm

    Re:

    "70% of facebook accounts haven't been updated or touched in more than 2 years."

    That sounds like an interesting piece of information that would be fascinating to look into further, to see what methodology and data was used to confirm it, considering you claim that you're not using the public data provided by Facebook themselves.

    Please, link to the study so that I can research this further!

  52. Scary Devil Monastery (profile), 15 Oct 2021 @ 12:37am

    Re: Re:

    Well, to be fair I'd have guessed it was far more than 70%. Unless an account - on any platform - is deliberately banned or purged, it'll stick around forever. Users, meanwhile, move on. Blizzard kept bragging about their online WoW user base of twelve million until it was shown that some 90% of those accounts had been idle for years.

    If Facebook only has 70% of its user account base idling, that's actually a pretty good ratio.

  53. Scary Devil Monastery (profile), 15 Oct 2021 @ 12:39am

    Re:

    "One person can launch THOUSANDS of simultaneous lawsuits for $500 each."

    Nope. I mean, you might be able to if it was about a copyright claim, because the DMCA is funny (read: Broken) that way.

    But for any real court case, even under US tort law, that's just a quick way to hand Facebook all your money in countersuits won by walk-over.

  54. Scary Devil Monastery (profile), 15 Oct 2021 @ 12:47am

    Re: Re: JAMAA-king Me Crazy

    "say goodbye to personalized(and therefore useful and relevant) recommendations and hello to the equivalent of hitting 'show random' on platforms everywhere."

    Annoying in itself but not the worst part of this bill. May I perhaps introduce you to the Thin End Of The Wedge?

    Because what this means is that any platform without Facebook's legal team will have to serve their users a horrible bullshit concoction of recommendations which will contain the most outrageous garbage vested interests saw fit to serve - from Viagra ads to Klan and neo-nazi propaganda. I wouldn't be surprised to see the nigerian prince coming back strong in random clickbaits either.

    Meanwhile Facebook will still be able to fend off the worst morons and maintain a sanitized environment, presenting a case for congress that "see? The internetz didn't break! You should roll this out across the board!". And the US online environment is reduced to dregs.

    Future alternatives will be coming from China, no doubt, which won't mind helping the gwailo out with miraculous social platforms catering to every desire, so long as said gwailo don't mind uncle Xi listening in on everything they do...

  55. PaulT (profile), 15 Oct 2021 @ 12:51am

    Re: Re: Re:

    "I'd have guessed"

    I don't care about guesses. If someone's going to claim a number, I want a source for that number, especially if the person claiming it is saying that he's arriving at it without Facebook's publicly released information.

    "Blizzard kept bragging about their online WoW user base of twelve million until it was shown that some 90% of those accounts had been idle for years."

    Define "idle" - were the people still paying the subscription and just not playing, or were they disabled?

    "If Facebook only has 70% of it's user account base idling that's actually a pretty good ratio."

    Again, define "idling". Accounts that haven't been logged into for a week? A month? A year? Does logging in count or do you only count posting? Do people who mainly use Instagram or TikTok but have them set up to share posts on Facebook count, or does it only count if they log in directly? What about people who use their Facebook account purely to log into other sites, is that active or not?

    There's a lot of questions here, which is why I'm asking for a source other than "AC's anus"...

  56. Anonymous Coward, 15 Oct 2021 @ 5:35am

    Re: Re:

    The power of mass influence is what needs regulation, and the nexus of that power is news feeds (and also advertising algorithms).

    The nexus of that power is the news media -- print, radio and T.V. -- who are very selective about what stories they publish, and who often put a slant on the stories to suit their political aims.

  57. Samuel Abram (profile), 15 Oct 2021 @ 6:43am

    Re: Re: Re: Another day, another MM panic fearing for corporate

    In that case, I'd like to order a double with horsey sauce combo.

  58. techflaws (profile), 15 Oct 2021 @ 8:50am

    "Justice Against Malicious Algorithms Act of 2021"

    If the grandstanding is already in the title, you know it can't be good.

  59. Anonymous Coward, 15 Oct 2021 @ 10:12am

    Re:

    I always operate under the assumption that national-level politicians are willfully ignorant at best, because of the sheer cutthroat competition to get there.

  60. Anonymous Coward, 15 Oct 2021 @ 10:18am

    Re: Re: Re:

    Plus "you are free to speak but not to be listened too" isn't free speech. Saying there should be limits to mass communication is saying that the problem is that too many people might listen to what you have to say!

    That is the sort of thing only a tyrant would call a problem. But people have become so goddamned stupidly reactionary even among so-called progressive complaining about a lack of manufactured consent is a mainstream opinion! That is what "we are too divided" really means.

  61. nasch (profile), 15 Oct 2021 @ 1:32pm

    Re:

    A company that chooses to express recommendations wholesale via a newsfeed is not expressing reasoned beliefs. It is making available the results of an algorithm. The algorithm itself is neither recommendation nor opinion, and I see no reason why the first amendment should apply to it.

    You're either talking about regulating the expression of the results of the algorithm, which would be a first amendment issue because the government is not supposed to regulate speech whether it's a political opinion or a dick joke or a company saying "this is what our algorithm thinks you will be interested in" - or you're talking about regulating what the algorithm itself does. Which would also be a first amendment issue, because a human wrote that algorithm, and the government is not supposed to regulate what people write, whether it's on a protest poster or typed into a computer to make software.

  62. ECA (profile), 15 Oct 2021 @ 5:36pm

    Re: Re: Re: Re:

    There is a trick they love.
    To debate at nite when no one is there.
    If a Quorum, isnt needed, then Just ask for a vote in the middle of the nite, with selected persons there.
    Its been done MANY TIMES.

    http://westwing.bewarne.com/discontinuity/government.html

    https://www.youtube.com/watch?v=CO4JpVBUu80&t=4s
    Look at background. Wish there were a clock showing THE current time this was done.

  63. Anonymous Coward, 15 Oct 2021 @ 8:45pm

    Re: Re: Re:

    The nexus of that power is the news media, print radio and T.V. who are very selective about what stories they publish, anf who often put a slant on the stories to suite their political aims.

    I agree, but the two nexii are not mutually exclusive. Newsfeeds and newsrooms both have agenda-setting effects. They are also different, and shouldn't necessarily be regulated under the same laws. News media is at least arguably driven by human editorial decisions that are clearly protected first amendment activity, because ultimately we want people to be able to express political viewpoints. The same cannot be straightforwardly said of newsfeed algorithms.

  64. Anonymous Coward, 15 Oct 2021 @ 8:58pm

    Re: Re:

    Giving your opinion is absolutely protected activity, whether you're talking to one person or one million

    Giving your opinion is free speech. Having it heard by millions is not.

    I am completely against regulating algorithms in ways that would censor speech. The regulations we need should be content agnostic, meaning we should not be regulating what content gets recommended.

    What needs regulating is two things:
    a) targeting (i.e. who receives what recommendations, and how is that determined), and
    b) virality (i.e. how many people see a given piece of content, and perhaps placing global limits on how much any given piece of content can be recommended.)

    Neither of these things implicates freedom of speech if done properly. Popularity of speech is not a right, and neither is a guaranteed listener.

    Regulating newsfeed algorithms is (or should be) about regulating how audiences are formed, not about what speech is shared.

  65. Anonymous Coward, 15 Oct 2021 @ 9:06pm

    Re: Re:

    That aside I would be very leery of opening the can of worms that is regulating the ability to influence others as I can explain why it's a terrible idea in two words: 'Fake news'. Open that can and it's not a question of will someone you vehemently disagree with get and use the power that would grant but how quickly.

    We are already there. That can is open and the worms are gone. That is why we need regulation. Right now, the decisions about what is and isn't fake news, and who gets to decide, are being made in ways that are unaccountable to anyone (with the possible exception of corporate shareholders).

    Unfortunately, regulation is a job for government. It may be that the US needs a functional, honest government before we get the regulation we need (don't laugh), but that doesn't change the fact that regulation is needed. We need the government to create some accountability in a way that is non-partisan, fair, and truthful. However unrealistic that looks, we need the government to be the holder of that power, not corporations.

  66. Anonymous Coward, 15 Oct 2021 @ 9:18pm

    Re: Re:

    Moreover, a quick Google search tells me that computer code counts as protected speech based on Bernstein v. Justice Department.

    Unless companies are publishing the source code for their algorithms, I don't see why Bernstein would apply. Code may be speech, but the result of running that code is not.

    A recommendation algorithm is an automated way for a company to say "We think you'll like these things based on what you've picked/searched for/watched/listened to/etc. in the past.

    Yes, but the protected part is the "We think..." part, not the algorithm part. It becomes protected speech when the company endorses and (for lack of a better word) publishes it. It becomes corporate opinion when the company runs the algorithm to produce a recommendation, not when the algorithm is coded.

  67. That One Guy (profile), 15 Oct 2021 @ 9:44pm

    Re: Re: Re:

    Consolidating the replies to both into one comment for ease of reading.

    Giving your opinion is free speech. Having it heard by millions is not.

    It's the same bloody thing, the same action does not go from protected speech to not protected simply because the audience increased.

    Neither of these things implicates freedom of speech if done properly. Popularity of speech is not a right, and neither is a guaranteed listener.

    Neither popularity nor an audience is a right under the first amendment or free speech in general, but the ability to gather those (within certain restrictions, like not using someone else's property to do so) very much is.

    Both of those are very much out of bounds, the government deciding who you are allowed to say certain things to and how many people are allowed to be in that group are both pretty blatant violations of the first amendment in the form of dictates relating to speech.

    We are already there. That can is open and the worms are gone.

    Oh? I wasn't aware that the government was already in the business of issuing legal penalties against those that they disagreed with. Strange that, you'd think that would have made a bigger splash especially in the last four years when it was headed by someone who would have loved the ability to go after anyone spreading 'fake news'.

    Right now, the decisions about what is and isn't fake news, and who gets to decide, are being made in ways that are unaccountable to anyone (with the possible exception of corporate shareholders).

    Curse those people making use of their first amendment rights in ways you don't agree with, those fiends.

    There is a big difference between a person and/or a privately owned platform deciding to host or not host certain content, and choosing how to present that content due to their biases and positions and the government stepping in and dictating what can be said, how it can be said and how many people are allowed to listen.

    We need the government to create some accountability in a way that is non-partisan, fair, and truthful.

    Yeah, we already have those; they're called defamation and liability laws, for when people go a little overboard in their claims, and I'm not sure if you've noticed, but they don't always work out so well currently. They certainly don't need to be expanded.

    However unrealistic that looks, we need the government to be the holder of that power, not corporations.

    Yes, what could go wrong with the government being able to dictate how many people you're allowed to speak to and who is allowed or required to be in that group?

  68. Anonymous Coward, 15 Oct 2021 @ 11:07pm

    Re:

    The harm would have to be internet-related, i.e., defamation

    How's that Paul Hansmeier fund coming along, John Smith? I hear he failed his bid to sue the DOJ. What's wrong with you?

  69. Toom1275 (profile), 16 Oct 2021 @ 1:49am

    Re:

    [Hallucinates events not in reality]

  70. Anonymous Coward, 16 Oct 2021 @ 2:08am

    Re: Re: Re: Re:

    Also part of the problem is politicians pushing controversial views, such as a Texas school administrator telling teachers to provide an "opposing perspective" to books about the Holocaust. How can you have critical thought when that and intelligent design are pushed onto children? That is teaching them that all viewpoints are equally valid.

  71. That One Guy (profile), 16 Oct 2021 @ 1:09pm

    Re: Re: Re: Re: Re:

    That certainly explains why they went out of their way to explicitly make holocaust denialism a moderation-exempt category in the semi-recent bill, it would be rather awkward if children were taught that there were 'good people on both sides' when it came to the holocaust only to look online and see that there very much were not.

  72. Anonymous Coward, 17 Oct 2021 @ 4:24pm

    Re: Where all the lawsuits you promised me bro?

    Cool story bro.

  73. PaulT (profile), 18 Oct 2021 @ 1:11pm

    Re: Re: Re: Re:

    "Plus "you are free to speak but not to be listened too" isn't free speech"

    Yes it is. You still had your freedom to speak, you just didn't have a guaranteed audience. Which has never been something that was promised to you. The only guarantee you have is that the government is not allowed to shut you down, not that the rest of the public is not allowed to decide not to listen to you.

  74. Scary Devil Monastery (profile), 20 Oct 2021 @ 3:03am

    Re: Re: Re: Re:

    "Define "idle" - were the people still paying the subscription and just not playing, or were they disabled?"

    As I recall it concerned mainly F2P accounts - but don't quote me on that because it might very well be subscriptions on hiatus for years.

    "Again, define "idling". Accounts that haven't been logged into for a week? A month? A year?"

    As you clearly noted right below that question...it doesn't get an answer before you supply proper context. I'm not sure I'd call an FB account only used to set up other website accounts "active" for instance.

    "There's a lot of questions here, which is why I'm asking for a source other than "AC's anus"..."

    He expresses himself a bit too certain, sure. But he does have a valid point of assumption - because abandoned accounts have always been the majority of account bases for every online service since the early days of usenet. Unless a service regularly purges accounts who've been inactive for X time it's a given that the dead accounts will heavily outnumber the active ones on any matured online service.

    That this is a default state of affairs is pretty well established by now. That Facebook should somehow be the sole exception to it would be odd enough that that's the assertion I'd demand evidence for.

    However, a quick google provided me with a good google page to start - Query term; "Ghost Town? Study Says 70 Percent Of Facebook Pages Are Inactive"

    Which references a study reference to recommend.ly's "facebook pages usage patterns".
