from the disinformation-nation dept
While California's new net neutrality law grabbed the lion's share of press headlines, the state last week also passed legislation attempting to fix an equally complicated problem: bots. Bots have played a major role not only in helping companies and politicians covertly advertise products or positions, but also in Russia's disinformation efforts. That in turn has fueled what's often not-entirely-productive paranoia, as users online accuse those they disagree with of being bots, instead of, say, just genuinely terrible but real human beings.
Last Sunday, California Governor Jerry Brown signed SB1001, a bill that requires companies and individuals to clearly label any bots as bots. The bill explains the new rules this way:
"This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The bill would define various terms for these purposes. The bill would make these provisions operative on July 1, 2019."
While this is well intentioned, you can see how enforcement could prove to be a bit of a problem. Most of the scientific analysis of what actually constitutes a bot hasn't been particularly solid. Hamilton 68, for example, though widely cited in the media as a definitive, scientific way to measure Russian bot influence online, has been derided as little more than educated guesswork.
That's not to say that "bots" aren't a problem. A disclosure from Twitter back in June noted that the company had "identified and challenged" more than 9.9 million accounts per week last May for being "potentially spammy or automated accounts." Those numbers were up dramatically from the 6.4 million per week in December 2017, and the 3.2 million per week in September. Initially, fear of hampering ad impression metrics likely hindered many companies' interest in addressing this problem, but the exposure of Russia's online gamesmanship seems to have shifted that thinking dramatically.
California's efforts on this front were born from genuine annoyance at the problem. State Senator Robert M. Hertzberg, the bill's author, ran into the problem face first when numerous fake Facebook and Twitter accounts began berating him over a bail reform bill he proposed earlier this year. According to a report in the New York Times last July, he found himself inundated with bogus attacks by bots, which did everything they could to scuttle his proposal (note that he didn't clarify how he differentiated between bots and merely terrible people).
I spoke briefly about California's new legislation with the EFF's Jeremy Gillula, who was quick to highlight the enforcement and free speech problems inherent in California's attempt to fix a problem we don't fully understand:
"Enforcement is definitely going to be difficult. Governments aren't in a good position to be able to tell whether or not an account on social media is a bot or not. And we wouldn't want them to be--that would require that they know or are able to easily find out where, when, and how every post to social media was made. That would destroy any possibility of anonymous speech online."
Gillula told me that the original bill proposed by Hertzberg was much worse, in that it would have made chatbots like Olivia Taters illegal simply because they aren't clearly labeled as bots. The EFF also successfully encouraged Hertzberg's office to eliminate a proposal that would have forced all websites to offer tools letting users report suspected bots. Again, because of the problems stated above (a culture of paranoia and the difficulty of differentiating bots from ordinary human jackasses), the end result could have been catastrophic for speech online:
"We've seen this sort of takedown/labeling regime abused time and again in the past. For example, Twitter and Facebook users can already report posts they think violate the platforms’ community standards. Online platforms also have to deal with complaints under the Digital Millennium Copyright Act, or DMCA, a federal law meant to protect copyright online, which forces platforms to take down content. In both cases, malicious users have figured out how to abuse these reporting systems so that they can get speech they don’t like erased from the Internet. The targets victims of such abuse have been Muslim civil rights leaders, pro-democracy activists in Vietnam, pro-science accounts targeted by anti-vaccination groups, and Black Lives Matter activists, whose posts have been censored due to efforts by white supremacists."
On one hand, you can see how bringing a little more transparency to who is running bots, and what their motivations are, could be helpful. But actually identifying bots and removing them is currently more art than science. In the interim, it's probably a good idea to avoid creating solutions that, in time, could create even bigger problems (like an entirely new takedown system open to abuse) than the one you're trying to fix.
Filed Under: bots, california, labelling, misrepresentation