California Legislators Now Get Into The Pointless & Likely Counterproductive Content Moderation Legislating Business
from the bad-ideas-with-good-intent dept
Another day, another state house deciding that it needs to jump into the business of content moderation. This time it's California, and this bill (1) is not nearly as insane as the ones coming out of many other states and (2) appears to be coming from well-meaning people with good intentions. That doesn't make it a good bill, however. It was announced this week in a somewhat odd press release from Assembly Majority Whip Jesse Gabriel, who declares it to be "groundbreaking" as well as a "bipartisan effort to hold social media companies accountable for online hate and disinformation."
Needless to say, the bill is neither groundbreaking, nor would it do much of anything to hold social media companies accountable for online hate and disinformation. Also, bizarrely, the press release does not link to the bill. That's just dumb. However, I will link to it, even though I'm not one of the elected officials supposedly pushing this bill they apparently don't want anyone to read. And if you look at the bill, you can see it was actually introduced... back in early February, so it's not clear why they waited until now to do the press release.
The press release makes a lot of blustery claims that the bill cannot live up to (perhaps why they didn't link to it). Also, there's a key part in all of this that goes unstated: whether we like it or not, everything that the press release and this bill are complaining about -- hate speech, disinformation, extremism, and even a lot of harassment -- is still protected under the 1st Amendment. So, realistically, there is not much that any bill on those topics can do without running afoul of the 1st Amendment. To be clear, this is not saying that any of those things are good or should be hosted on mainstream websites. Nor is it saying that the big social media companies shouldn't be constantly improving their moderation practices to deal with those things. It's just noting the reality of the 1st Amendment, and that this bill appears to be mainly a complaint about those 1st Amendment realities.
As for the actual bill, it is pretty limited. It only applies to "social media companies" that generated at least $100 million in gross revenue in the preceding calendar year:
(1) “Social media company” means a person or entity that owns or operates a public-facing internet-based service that generated at least one hundred million dollars ($100,000,000) in gross revenue during the preceding calendar year, and that allows users in the state to do all of the following:
(A) Construct a public or semipublic profile within a bounded system created by the service.
(B) Populate a list of other users with whom an individual shares a connection within the system.
(C) View and navigate a list of the individual’s connections and the connections made by other individuals within the system.

(2) “Social media company” does not include a person or entity that exclusively owns and operates an electronic mail service.
So... uh... this covers Facebook/Instagram, Twitter... Pinterest, TikTok... and maybe Snap? I guess LinkedIn as well? I don't even think it would cover YouTube, since I'm not sure YouTube lets you "view and navigate a list of connections" within the system (or if it does, I've never seen it). I don't think Reddit would be covered, for the same reason.
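To make the narrowness concrete, here's a minimal sketch of the definition as a qualification check. To be clear, everything in it -- the ServiceProfile class, the field names, the function -- is hypothetical, just my rough reading of the bill's three prongs, revenue threshold, and email carve-out; the bill itself obviously specifies no code:

```python
from dataclasses import dataclass

# Hypothetical model of the bill's definition. The class and field
# names are invented for illustration only.
@dataclass
class ServiceProfile:
    gross_revenue_prior_year: float             # in dollars
    users_can_build_public_profiles: bool       # prong (A)
    users_can_list_connections: bool            # prong (B)
    users_can_browse_others_connections: bool   # prong (C)
    is_email_only_service: bool                 # the paragraph (2) carve-out


def qualifies_as_social_media_company(s: ServiceProfile) -> bool:
    """One rough reading: all three prongs must be met, plus the
    $100 million revenue threshold, minus the email-only carve-out."""
    if s.is_email_only_service:
        return False
    return (
        s.gross_revenue_prior_year >= 100_000_000
        and s.users_can_build_public_profiles
        and s.users_can_list_connections
        and s.users_can_browse_others_connections
    )


# On this reading, a huge service that fails prong (C) -- no way to
# "view and navigate" other users' connections -- is simply out of
# scope, no matter its size. That's the YouTube/Reddit gap above.
youtube_ish = ServiceProfile(
    gross_revenue_prior_year=2e10,
    users_can_build_public_profiles=True,
    users_can_list_connections=True,
    users_can_browse_others_connections=False,
    is_email_only_service=False,
)
print(qualifies_as_social_media_company(youtube_ish))  # False
```

Read that way, the definition hinges entirely on three product-design choices, which is why the coverage list above is so short and so arbitrary.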
And what would it require of these companies? Transparency reports. Which most of these companies already produce. The requirements would probably force them to make some changes to the transparency reports they issue, to focus more narrowly on the topics the bill seeks to "deal with," but not in any meaningful way. Twice a year, each company will have to submit to California's Attorney General "a terms of service report," which will include the current terms of service (the AG can't download a copy directly?!?), a list of any changes to the terms, and a description of "how the current version of the terms of service defines" a variety of things: hate speech, racism, extremism or radicalization, disinformation, misinformation, harassment, and foreign political interference.
It will also require that the companies hand over any rules or guidelines given to staff for handling that type of content, along with any training materials those staff receive.
Then, every quarter, companies will have to tell the AG how many bits of content have been "flagged" and how many bits of content have been "actioned," along with the number of times an "actioned" item was "viewed" or "shared." There's a lot more detail in there, but it's all just asking for numbers: how content was flagged, how the company dealt with it, and how many people saw it.
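For a sense of what compliance might actually involve, here's a rough sketch of the two filings as data structures. Again, every name here is a guess based on the bill's description, not anything the bill actually specifies:

```python
from dataclasses import dataclass

# Hypothetical shapes for the two filings the bill describes.
# All field names are guesses from the bill's language.

@dataclass
class TermsOfServiceReport:               # due twice a year
    current_terms_of_service: str         # full text (which the AG could just download)
    changes_since_last_report: list[str]
    category_definitions: dict[str, str]  # e.g. "hate speech" -> how the ToS defines it
    staff_moderation_guidelines: str      # internal rules handed to moderators
    staff_training_materials: list[str]


@dataclass
class QuarterlyContentReport:             # due four times a year
    items_flagged: int
    items_actioned: int
    actioned_item_views: int
    actioned_item_shares: int
    # The bill asks for more breakdowns, but they're all variations on
    # "how was it flagged, what did you do about it, and who saw it."
```

Framed that way, the quarterly filing is essentially a handful of counters that most large platforms already publish -- which is exactly the problem.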
And again, for the most part, companies already do this. Here, for example, is Facebook's explanation of how much content it removed for harassment and bullying and for hate speech. Twitter's transparency report provides similar info.

Neither covers all of the info this law would require, but the basics are already public and have been for a long time. And how is this possibly useful to the California Attorney General? The AG cannot take action against these companies for failing to take down content they dislike. That would violate the 1st Amendment. The only thing the AG can do is take action against these companies for failing to file such a report (and, arguably, Section 230 might even preempt this law and make it unenforceable anyway).
All of this, again, seems to be premised on the false belief that these large social media companies don't care and aren't doing anything to deal with misinformation, hate, etc. on their platforms. And that's just wrong. Each of the companies has tremendous incentive to keep their platforms clean of that stuff because it drives away users and advertisers.
Even worse, it's possible that this kind of bill could easily backfire and do much more damage to the very people the bill's supporters suggest it's designed to protect. As we've discussed many times before, "transparency" regarding moderation sounds great in theory, but is very thorny in practice, especially when dealing with bad actors. Trolls love to game the system, and the more transparency that is given around moderation standards and practices, the more likely they are to walk right up to the line and/or cry foul when their obviously trollish behavior is "actioned." There doesn't appear to be that much in this bill that would help adversaries, but time and time again I am amazed at how far adversaries are willing to go to twist things to their advantage.
Also, like so many other bills about rapidly changing technology, this bill seems to assume that certain things will always remain as they are. That is, it assumes a world in which "bad" content is "flagged" and then "actioned" by the company in some way. But imagine a system in which users were given more control over their own parts of a social media ecosystem -- and could make use of different algorithms. This is the world Twitter claims it's moving towards, but then how would it handle demands like the ones in this bill, when "flagged" and "actioned" would likely mean very different things? Or what if a social media system appeared that worked more like Wikipedia or Reddit -- in which the community itself handled the moderation? How would that platform comply with this law?
Finally, because this is a state law, and other states are considering similar bills, it could create a real compliance mess if every state requires different information and different reporting in different formats. This really isn't the kind of thing the state should be regulating in the first place.
In the end, all this kind of bill would do is create a compliance headache for these companies, while doing very little to address the actual realities of content moderation on the ground. It may make politicians in Sacramento feel good, so they can put out silly press releases about how they're "doing something," but it's performative nonsense that won't make any real change or have any real impact.
Filed Under: california, content moderation, disinformation, hate speech, jesse gabriel, section 230, state laws, transparency