On the post: As Prudes Drive Social Media Takedowns, Museums Embrace... OnlyFans?
Re: Re: Re: Re: Re: Re: Re: Here they go with their principles a
That would be fine (or at least less bad) if all the racism were confined to the racism area, and so on. But it isn't.
Nobody is suggesting revoking either of those.
On the post: Everything You Know About Section 230 Is Wrong (But Why?)
Re: Re: Re: Re: Re: Re: Re: Re: Replace?
https://www.technologyreview.com/2021/02/08/1017625/safe-tech-section-230-democrat-reform/
https://eshoo.house.gov/media/press-releases/reps-eshoo-and-malinowski-introduce-bill-hold-tech-platforms-liable-algorithmic
https://www.reuters.com/article/us-usa-tech-liability/democrats-prefer-scalpel-over-jackhammer-to-reform-key-u-s-internet-law-idUSKBN27E1IA
https://www.vox.com/recode/22221135/capitol-riot-section-230-twitter-hawley-democrats
Democrats have been frequently calling for Section 230 "reform", generally, as Lostinlodos says, with the aim of getting companies to moderate more heavily.
On the post: Surprising, But Important: Facebook Sorta Shuts Down Its Face Recognition System
Re:
And even if they "deleted" it, to actually get rid of it they would have to go into the source control and remove it (considered bad practice and only to be done in extreme cases), and also delete it from all their backups (probably easier said than done).
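A quick sketch of why that's hard, using git as the example (the file name here is hypothetical, and the FILTER_BRANCH_SQUELCH_WARNING variable just silences git's notice that filter-branch is the old built-in tool; git-filter-repo is the currently recommended replacement):

```shell
# Minimal demo: a "deleted" file survives in history and in backups.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo "face embeddings" > facedata.bin     # hypothetical sensitive file
git add facedata.bin
git commit -qm "add face data"
git clone -q . ../backup                  # stand-in for an offsite backup
# A normal delete only adds a commit; the data stays in history:
git rm -q facedata.bin
git commit -qm "delete face data"
git show HEAD~1:facedata.bin              # still retrievable from history
# Actually purging it means rewriting history (the "bad practice" step):
git filter-branch -f --index-filter \
  'git rm --cached --ignore-unmatch facedata.bin' HEAD
# ...and the backup clone still holds every byte regardless:
git -C ../backup show HEAD:facedata.bin
```

After the rewrite the file is gone from this repo's own history, but every clone, mirror, and tape backup made earlier still contains it, which is the "easier said than done" part.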
On the post: The Scale Of Content Moderation Is Unfathomable
Re: Re: Re: Re: Re:
That still understates the problem. Even if every advertiser were in complete agreement about what they wanted to appear on a platform, and not appear, it would still not be possible to moderate a platform like Facebook perfectly.
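To put rough numbers on it (both figures below are assumptions for illustration, not Facebook's actual volume): even an implausibly accurate moderation system makes an enormous number of wrong calls at that scale.

```python
# Back-of-envelope sketch; both numbers are assumed for illustration.
posts_per_day = 3_000_000_000   # assumed order of magnitude for a Facebook-sized platform
accuracy = 0.999                # optimistically, one mistake per thousand decisions
errors_per_day = posts_per_day * (1 - accuracy)
print(f"~{errors_per_day:,.0f} moderation mistakes per day")
```

And that's before advertisers even start disagreeing about which of those millions of calls were the mistakes.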
On the post: The Internet Is Not Facebook; Regulating It As If It Were Will Fuck Things Up
Re: Re: Re: Re: Re:
Did you not read the comment you replied to?
"people are tired of lies being given equal weight in a time when we're seeing the negative effects of not taking action decades ago as a direct result of the lies."
There are two points here:
1) Giving the government the power to decide what is true and what is false, and to disallow false speech, is quite dangerous
AND
2) Non-government actors, such as media outlets, have no obligation to be neutral on questions of fact, and do harm by pretending baseless lies are on an equal footing with scientifically sound findings
On the post: The Internet Is Not Facebook; Regulating It As If It Were Will Fuck Things Up
Re: Re: Re:
It also doesn't say "ignorant dipshits must be given equal time with qualified experts", so what is your point?
On the post: The Scale Of Content Moderation Is Unfathomable
Re: Re: Re: Re:
This implies that motivation is the only thing missing to solve the problem, which is not correct.
On the post: The Scale Of Content Moderation Is Unfathomable
Re: Re: Re:
What do you mean by a subscription model? If it's just revenue from subscriptions instead of ads, I don't see how that solves the moderation issue, unless it's by drastically reducing the number of users, which kind of misses the point of the issue of moderation at scale.
On the post: Forget 'The Kids These Days'; It's The Adults And Their Moral Panics To Worry About
Fitting in
It's not a fitting in thing, it's just because everyone else is doing it, and I want them to like me, and to show off for my friends.
Huh?? If that's not trying to fit in, what is?
On the post: Forget 'The Kids These Days'; It's The Adults And Their Moral Panics To Worry About
Re: Re: Always has been
Regarding millennials vs gen Y: https://www.youtube.com/watch?v=15iLHlJPp_0
On the post: California Prosecutors Are Still Trying To Get Signal To Hand Over User Info It Simply Doesn't Possess
Re:
Maybe they thought Signal was lying, because they think all services collect all the information they can on their users.
On the post: The Scale Of Content Moderation Is Unfathomable
Re: Re: Content moderation at scale...
I think you have misunderstood the term "moderation at scale". It means moderating vast amounts of content, on the scale of a large social media platform.
On the post: Netflix Files Anti-Slapp Motion To Dismiss Lawsuit Claiming One Of Its Series Caused A Teen To Commit Suicide
Speech
This is a little off topic, but it got me thinking about speech protections. Recommendations such as those by Netflix, YouTube, and Facebook are speech by the platform itself. As such, they are protected by the 1st Amendment, but not by Section 230 - correct? Have there not been a bunch of lawsuits specifically targeting the recommendations, knowing that they cannot be quickly dismissed via 230?
On the post: Netflix Files Anti-Slapp Motion To Dismiss Lawsuit Claiming One Of Its Series Caused A Teen To Commit Suicide
Re:
You sound like someone who has never dealt with a family member suffering from depression. I hope that you never have to, and that you develop some empathy for those who do at some point, hopefully soon.
On the post: Latest Moral Panic: No, TikTok Probably Isn't Giving Teenage Girls Tourette Syndrome
Re: Re: Satanic panic all over again.
I know it was a silly joke, but that is generally not how Tourette's works.
https://www.cdc.gov/ncbddd/tourette/facts.html
On the post: The Scale Of Content Moderation Is Unfathomable
Re:
How do you suppose spending more money would solve the problem?
On the post: The Internet Is Not Facebook; Regulating It As If It Were Will Fuck Things Up
Re: Re: Re:
I think AT&T executives are better at reading between the lines than you are. For example: "Are you planning to continue carrying Fox News, Newsmax, and OANN on U-verse, DirecTV, and AT&T TV both now and beyond any contract renewal date? If so, why? "
On the post: Everything You Know About Section 230 Is Wrong (But Why?)
Re: Re: Re: Replace?
I think it's pretty clearly written as is. What do you find unclear or inaccessible?
On the post: Hawaii School, Police Department On The Verge Of Being Sued For Arresting A Ten-Year-Old Girl Over A Drawing
Re:
It wasn't even the bully's parents demanding police involvement.
On the post: Everything You Know About Section 230 Is Wrong (But Why?)
Re: Re: Re: Re: Re: Replace?
Yes, 230 does not provide a shield in the case of criminal law violations. You could make the case that it should, but that isn't what most people are focusing on when it comes to content moderation.