California Legislators Now Get Into The Pointless & Likely Counterproductive Content Moderation Legislating Business
from the bad-ideas-with-good-intent dept
Another day, another state house deciding that it needs to jump into the business of content moderation. This time it's California, and this bill (1) is not nearly as insane as those in many other states and (2) appears to be coming from well-meaning people with good intentions. That doesn't make it a good bill, however. It was announced this week in a somewhat odd press release from Assembly Majority Whip Jesse Gabriel, who declares it to be "groundbreaking" as well as a "bipartisan effort to hold social media companies accountable for online hate and disinformation."
Needless to say, the bill is neither groundbreaking, nor would it do much of anything to hold social media companies accountable for online hate and disinformation. Also, bizarrely, the press release does not link to the bill. That's just dumb. However, I will link to it, even though the elected officials supposedly pushing this bill apparently did not want to. And if you look at the bill, you can see it was actually introduced... back in early February, so it's not clear why they waited until now to do the press release.
The press release makes a lot of blustery claims that the bill cannot live up to (perhaps why they didn't link). Also, there's a key part in all of this that goes unstated: whether we like it or not, everything that the press release and this bill are complaining about -- hate speech, disinformation, extremism, and even a lot of harassment -- are still protected under the 1st Amendment. So, realistically there is not much that any bill on those topics can do without running afoul of the 1st Amendment. To be clear, this is not saying that any of those things are good or should be hosted on mainstream websites. Nor is it saying that the big social media companies shouldn't be constantly improving their moderation practices to deal with those things. It's just noting the reality of the 1st Amendment, and how this bill appears to mainly be upset about those 1st Amendment realities.
As for the actual bill, it is pretty limited. It only applies to "social media companies" that have over $100 million in revenue in the previous year:
(1) “Social media company” means a person or entity that owns or operates a public-facing internet-based service that generated at least one hundred million dollars ($100,000,000) in gross revenue during the preceding calendar year, and that allows users in the state to do all of the following:
(A) Construct a public or semipublic profile within a bounded system created by the service.
(B) Populate a list of other users with whom an individual shares a connection within the system.
(C) View and navigate a list of the individual’s connections and the connections made by other individuals within the system.
(2) “Social media company” does not include a person or entity that exclusively owns and operates an electronic mail service.
So... uh... this covers Facebook/Instagram, Twitter... Pinterest, TikTok... and maybe Snap? I guess LinkedIn as well? I don't even think it would cover YouTube since I'm not sure if YouTube lets you "view and navigate a list of connections" within the system (or if it does, I've never seen it). I don't think Reddit would be covered for the same reason.
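To see how narrow that coverage test is, here's a toy restatement of the bill's definition as a predicate. Every parameter name below is invented for illustration and obviously carries no legal weight; it just makes explicit that a service must clear the revenue bar *and* all three feature prongs at once.

```python
# Toy restatement of the bill's "social media company" definition.
# All names here are invented for illustration, not taken from the bill.

def is_covered_social_media_company(gross_revenue_usd: float,
                                    has_public_profiles: bool,
                                    has_connection_lists: bool,
                                    connections_navigable: bool,
                                    is_email_only: bool = False) -> bool:
    """True only if the service clears $100M in revenue AND satisfies
    all three feature prongs (profiles, connection lists, navigable lists),
    and is not an email-only service."""
    if is_email_only:
        return False
    return (gross_revenue_usd >= 100_000_000
            and has_public_profiles
            and has_connection_lists
            and connections_navigable)

# YouTube arguably fails the "view and navigate a list of connections"
# prong, so even with enormous revenue it might fall outside the bill:
print(is_covered_social_media_company(2e10, True, True, False))  # False
```

The conjunctive structure is why the article's guess list is so short: drop any one prong and a platform escapes the definition entirely, no matter how large it is.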
And what would it require of these companies? Transparency reports. Which most of these companies already do. The requirements would probably force them to make some more changes to the transparency reports they issue to focus more narrowly on the topics the bill seeks to "deal with" but not in a meaningful way. Twice a year, the company will have to submit to California's Attorney General "a terms of service report," which will include the current terms of service (the AG can't download a copy directly?!?), a list of any changes to the terms, and a description of "how the current version of the terms of service defines" a variety of things: hate speech, racism, extremism or radicalization, disinformation, misinformation, harassment, and foreign political interference.
It will also require that the companies hand over rules or guidelines given to staff to handle that type of content, and any training material those staff are given.
Then, every quarter, companies will have to tell the AG how many bits of content have been "flagged" and how many bits of content have been "actioned," along with the number of times an "actioned item" was "viewed" or "shared." There's a lot more detail in there, but it's all just asking for numbers on how content was flagged, how the company dealt with it, how many people saw the content.
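To make the reporting requirement concrete, here is a rough sketch of the per-category numbers the bill asks for each quarter. The bill prescribes no data format, so everything here (the class names, field names, and sample figures) is invented purely for illustration.

```python
# Hypothetical sketch of the quarterly report data the bill describes.
# All names and numbers are invented for illustration; the bill itself
# specifies no schema or file format.
from dataclasses import dataclass, field, asdict

@dataclass
class ActionedItemStats:
    category: str              # e.g. "hate_speech", "disinformation"
    flagged: int               # items flagged during the quarter
    actioned: int              # items the company took action on
    views_before_action: int   # times actioned items were viewed
    shares_before_action: int  # times actioned items were shared

@dataclass
class QuarterlyReport:
    company: str
    quarter: str
    stats: list[ActionedItemStats] = field(default_factory=list)

report = QuarterlyReport(company="ExampleSocial", quarter="2022-Q2")
report.stats.append(ActionedItemStats(
    category="hate_speech", flagged=12000, actioned=9500,
    views_before_action=480000, shares_before_action=31000))

print(asdict(report)["stats"][0]["actioned"])  # 9500
```

Note that these are exactly the kinds of aggregate counts most large platforms already publish in their existing transparency reports, which is part of the article's point.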
And again, for the most part, companies already do this. Here, for example, is Facebook's explanation of how much content it removed for harassment and bullying and for hate speech. Twitter's transparency report provides similar info.
It's not all of the info that this law would require, but the basic info is all public and has been for a long time. And how is this possibly useful to the California Attorney General? The AG cannot take action against these companies for failing to take down content they dislike. That would violate the 1st Amendment. The only thing the AG can do is take action against these companies for failing to file such a report (and, arguably, Section 230 might even pre-empt this law and make it unenforceable anyway).
All of this, again, seems to be premised on the false belief that these large social media companies don't care and aren't doing anything to deal with misinformation, hate, etc. on their platforms. And that's just wrong. Each of the companies has tremendous incentive to keep their platforms clean of that stuff because it drives away users and advertisers.
Even worse, it's possible that this kind of bill could easily backfire and do much more damage to the very people the bill's supporters suggest it's designed to protect. As we've discussed many times before, "transparency" regarding moderation sounds great in theory, but is very thorny in practice. Especially when dealing with bad actors. Trolls love to game the system, and the more transparency that is given around moderation standards and practices, the more they are likely to toe the line and/or cry foul when their obviously trollish behavior is "actioned." There doesn't appear to be that much required in this bill that would help adversaries, but time and time again I am amazed at how far adversaries are willing to go to twist things to their advantage.
Also, like so many other bills about rapidly changing technology, this bill seems to assume that certain things will always remain as is. That is, it assumes a world in which "bad" content is "flagged" and then "actioned" by the company in some way. But imagine there were a system where users were given more control over their own parts of a social media ecosystem -- and could make use of different algorithms. This is the world that Twitter claims it's moving towards, but then how would it handle demands like the ones in this bill, when "flagged" and "actioned" would likely mean very different things? Or what if a social media system appeared that worked more like Wikipedia or Reddit, in which the community itself handled the moderation? How would that platform comply with this law?
Finally, because this is a state law, and other states are considering similar bills, it could create a real compliance mess if every state requires different information and different reporting in different formats. This really isn't the kind of thing the state should be regulating in the first place.
In the end, all this kind of bill would do is create a compliance headache for these companies, and do very little to deal with the actual realities on the ground of content moderation. It may make politicians in Sacramento feel good so they can put out silly press releases about how they're "doing something," but it's generally performative nonsense that won't make any real change, nor have any real impact.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: california, content moderation, disinformation, hate speech, jesse gabriel, section 230, state laws, transparency
Reader Comments
'Look at us Doing Something!'
Nothing like politicians wasting time and taxpayer dollars on useless bills just so they can crow to the gullible that they're Doing Something.
While it's certainly better than the malicious politicians who are trying to screw things up and play to the gullible, this is still an example of politicians putting sound bites before work -- wasting time on PR stunts while worthwhile things are ignored because they're not as flashy -- and they absolutely deserve to be called out on it.
Re: 'Look at us Doing Something!'
Not only is this shtick performative, it probably also lets them avoid meaningful, non-sexy legislation. "We were too busy doing this other super-important thing!"
Re: Re: 'Look at us Doing Something!'
See we solved this huge problem!!!!!
Now ignore that we still have people shitting on the streets, we refuse to fund assistance to the poor & homeless, our infrastructure is a joke, we still have no idea how to deal with our water crisis...
But hey we put warning labels on all sorts of things so you could be informed & be safer!!!
"hate speech, racism, extremism or radicalization, disinformation, misinformation, harassment, and foreign political interference. "
So they are doing what to Fox News?
Disinformation?
You mean like when I called a bunch of lawyers extortionists, cited my sources, showed my work on a separate page but people claimed I had to be lying?
Harassment?
Still waiting for the IHG I helped found to be added to SPLC list.
They claimed I did all sorts of bad things, doubling down by putting them in court filings, yet they were lying about being harassed.
Hate Speech?
In a land of 'micro-aggressions' how can anyone give an all-encompassing definition? I mean, I guess Nazis screaming 'Hitler did nothing wrong' is really easy to point to, but there are people who want to make sure no one can say they think BTS is overrated.
Extremism or radicalization?
Does joining the GOP qualify?
Foreign political interference?
What about the home grown political interference? We have elected officials on nearly every level in this nation telling lies & misrepresenting things to keep political control.
Racism?
While some of it is clearly obvious & easily identified, humans love to stretch the definitions until they become meaningless.
Is it racism if you aren't attracted to different races?
I've been branded a racist for not being attracted to someone of a different race; where is the line between personal preferences & burning a cross in someone's yard?
Performative bullshit is pointless.
We'll know it when we see it!!
But we can't even agree on 10 rulings in sample cases, yet you'd better figure it out, big tech, b/c we said so.
Re: Re: Re: 'Look at us Doing Something!'
I think you spelled “CNN” wrong.
Re: Re: Re: Re: 'Look at us Doing Something!'
Found the Fox News viewer.
Re: Re: Re: 'Look at us Doing Something!'
To be fair, that warning label thing was a voter initiative. The legislature didn't have much to do with it.
Re: Re: Re: 'Look at us Doing Something!'
"Is it racism if you aren't attracted to different races?"
Or people in a given age bracket, people of a given gender, etc. An unfortunate proof of Karl Popper's paradox of tolerance is that the demand for infinite tolerance abolishes it.
I.e., you cannot tolerate those who will not extend similar tolerance in reciprocity, and you cannot tolerate behavior which closes options for others. And needless to say, if someone has gone off the deep end and demands you be sexually attracted to everyone equally, that person has just called for the abolition of LGBTQ identities as well.
There's a difference between personal choice and tolerance of systematic bigotry. It's bad enough when racists, misogynists, and the white power brigade conflate these principles... and utterly horrible when the advocates of liberalism manage to back themselves right over the edge this way.
Re: 'Look at us Doing Something!'
I don't even see a meaningful difference between the two modes of operation -- both are screwing us over for the personal benefit of their careers. Both sorts are assholes who should be thrown out of office as soon as possible, but sadly probably won't be.
Youtubers have the channels tab, which allows them to show the channels they subscribe to. That may well qualify as "view and navigate a list of connections".
Because being mean just can't be allowed to be legal in this day and age!
It's a pointless bill, but at least it's limited to large websites that can afford moderators. It does not solve any problems for anyone. If every state starts passing bills about moderation, it will simply add red tape and costs to websites, and not all websites can afford to pay moderators. Some forums are moderated by users, and users can report illegal or abusive content.
It's strange that, in a pandemic, so many politicians are concerned about how websites block users or moderate content, and they don't understand how forums work and how services have the right to block or remove content even if it's legal.
The problem is some small websites will close down forums and block all user uploads simply because they can't afford to hire full-time mods. And part of allowing free speech is sometimes allowing opinions which may be rude or negative to some extent, depending on the context.
Which is better than most of these bills.
Yes, but there is also the other possibility: the service could simply refuse to serve users in California.
Even if there are California users certifying falsely that they are out of state, California jurisdiction over the service is going to be somewhat problematic. It is not International Shoe [v. State of Washington, 326 U.S. 310 (1945)], where sales reps enter the state to set up transactions. And a web page, without more, will not meet that standard in the US 9th Circuit. Cybersell v. Cybersell, 130 F.3d 414 (US 9th Cir. 1997).
Re:
That's an expensive proposition if they're currently in California. And then they would have to hope (/lobby) that whatever state they move to doesn't end up doing the same thing.
Re: Re:
Yes, but so is being regulated and fined out of business.
A couple of prominent move-outs will surely send a message. This is a cheaper message than a couple of prominent shut-downs due to regulatory death by a thousand paper cuts.
Re: Re: Re:
True, but then it becomes a prisoner's dilemma. Hopefully this thing doesn't end up passing.