California's War On 'Bots' Could Be A Steep Uphill Climb

from the disinformation-nation dept

While California's new net neutrality law grabbed the lion's share of press headlines, the state last week also passed legislation attempting to fix an equally complicated problem: bots. Bots have played a major role not only in helping companies and politicians covertly advertise products or positions, but also in Russia's disinformation efforts. That in turn has fueled an often not-entirely-productive paranoia, as users online accuse those they disagree with of being bots instead of, say, just genuinely terrible but real human beings.

Last Sunday, California Governor Jerry Brown signed SB1001, a bill that requires companies or individuals to clearly label any bots as bots. The bill explains the legislation this way:

"This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The bill would define various terms for these purposes. The bill would make these provisions operative on July 1, 2019."

While this is well intentioned, you can see how enforcement could prove to be a problem. Most of the scientific analysis of what constitutes a bot hasn't been particularly solid. Hamilton 68, for example, widely cited in the media as a definitive and scientific way to measure Russian bot influence online, has been derided as little more than educated guesswork at best.

That's not to say that "bots" aren't a problem. A disclosure from Twitter back in June noted that the company had "identified and challenged" more than 9.9 million accounts each week last May for being "potentially spammy or automated accounts." Those numbers were up dramatically from the 6.4 million per week in December 2017, and the 3.2 million per week in September. Initially, fear of hampering ad impression metrics likely dampened many companies' interest in addressing this problem, but the exposure of Russia's online gamesmanship seems to have shifted that thinking dramatically.
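
Note that "potentially spammy or automated" is itself a judgment call. Detection generally leans on behavioral heuristics such as posting rate and timing regularity, and those heuristics inevitably flag some tireless humans too. Here's a toy sketch of that kind of scoring, with thresholds invented purely for illustration:

```python
from statistics import pstdev

def bot_score(timestamps: list[float]) -> float:
    """Toy heuristic: score 0..1 from posting rate and timing regularity.
    Thresholds are invented; real systems use many more signals and still err."""
    if len(timestamps) < 20:  # too little data to judge at all
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    rate = 1.0 if mean_gap < 60 else 0.0              # sustained >1 post/minute
    regularity = 1.0 if pstdev(gaps) < 1.0 else 0.0   # near-metronomic timing
    return 0.5 * rate + 0.5 * regularity

# A metronomic once-every-10-seconds poster scores as fully bot-like...
print(bot_score([i * 10.0 for i in range(30)]))  # 1.0
# ...but a scheduled human posting queue can look exactly the same.
```

Which is exactly why classification stays closer to educated guesswork than to science.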

California's efforts on this front were born of genuine annoyance at the problem. State Senator Robert M. Hertzberg, the bill's author, ran into the problem face first when numerous fake Facebook and Twitter accounts began berating him over a bail reform bill he proposed earlier this year. According to a report in the New York Times last July, he found himself inundated with bogus attacks by bots, which did everything they could to scuttle his proposal (note that he didn't clarify how he differentiated between bots and just terrible people).

I spoke briefly about California's new legislation with the EFF's Jeremy Gillula, who was quick to highlight the enforcement and free speech problems inherent in California's attempt to fix a problem we don't fully understand:

"Enforcement is definitely going to be difficult. Governments aren't in a good position to be able to tell whether or not an account on social media is a bot or not. And we wouldn't want them to be--that would require that they know or are able to easily find out where, when, and how every post to social media was made. That would destroy any possibility of anonymous speech online."

Gillula told me that the original bill proposed by Hertzberg was much worse, in that it would have made chatbots like Olivia Taters illegal simply because they're not clearly labeled as bots. The EFF also successfully encouraged Hertzberg's office to eliminate a proposal that would have forced all websites to build functions letting users report bots. Again, because of the problems stated above (a culture of paranoia and the difficulty of differentiating bots from ordinary human jackasses), the end result could have been catastrophic for speech online:

"We've seen this sort of takedown/labeling regime abused time and again in the past. For example, Twitter and Facebook users can already report posts they think violate the platforms’ community standards. Online platforms also have to deal with complaints under the Digital Millennium Copyright Act, or DMCA, a federal law meant to protect copyright online, which forces platforms to take down content. In both cases, malicious users have figured out how to abuse these reporting systems so that they can get speech they don’t like erased from the Internet. The targets victims of such abuse have been Muslim civil rights leaders, pro-democracy activists in Vietnam, pro-science accounts targeted by anti-vaccination groups, and Black Lives Matter activists, whose posts have been censored due to efforts by white supremacists."

On one hand, you can see how bringing a little more transparency to who is running bots and what their motivations are could be helpful. But actually identifying bots and removing them is currently more art than science. In the interim, it's probably a good idea to avoid creating solutions that could, in time, create even bigger problems (like an entirely new takedown system open to abuse) than the one you're trying to fix.



Filed Under: bots, california, labelling, misrepresentation


Reader Comments



  1. TheResidentSkeptic (profile), 9 Oct 2018 @ 12:09pm

    The odds are...

    ...about equal with stopping robo-calls.

  2. Anonymous Coward, 9 Oct 2018 @ 12:21pm

    will we be able to ban people who act like bots?

  3. Anonymous Coward, 9 Oct 2018 @ 12:26pm

    Obligatory

    This is why Skynet won!

  4. Uriel-238 (profile), 9 Oct 2018 @ 1:01pm

    People who act like bots.

    Thanks for calling Central Services. I'm sorry. Due to staff shortages, Central Services cannot take service calls between 2300 and 0900 hours.Have a nice day. This has not been a recording.

  5. tobor the magnificent, 9 Oct 2018 @ 1:03pm

    What is wrong with identifying a bot as being a bot?

    Apparently the all seeing benevolent overlords want everyone to properly identify themselves on the web, why should bots be any different?

  6. Zof (profile), 9 Oct 2018 @ 1:17pm

    Should be easy

    If something is pretending to be a real person but isn't, it's a bot. It should have a "bot" label of some kind.

    If a bot is discovered faking it, the account is deleted.

    I love the idea. Let's do it.

  7. Pixelation, 9 Oct 2018 @ 1:17pm

    This explains a lot

    Our president is a bot.

  8. Anonymous Coward, 9 Oct 2018 @ 1:45pm

    Re: Should be easy

Is it that easy when some people run a bot on their personal Twitter etc. accounts to push notifications when they publish something on YouTube or their blog?

    This sort of law reeks of politicians panicking because they can no longer control public debate, and will have all sorts of unintended consequences that fly under the radar because it does small amounts of damage to an individual but a large amount of damage to society when all those little bits are added up. Also there is the risk of this being abused by those who would stamp out opinions and activities that they disagree with.

  9. Zof (profile), 9 Oct 2018 @ 1:55pm

    Re: Re: Should be easy

I run AWS processes against my twitter account with a VPN API. The difference is they aren't logged in. I have a simple shadowban detector that lets me know when a post gets shadowbanned despite twitter "not doing that" according to zealots. Clearly that's not a bot.

    Further, you'd assume even if your elaborate "bot" that does twitter things for you is very active, it's not posting for you is it? Then it would be pretending to be a human.

    I'm not sure what's hard to understand here. You aren't going to be able to fabricate a gray area here. It's very cut and dry.

  10. Anonymous Coward, 9 Oct 2018 @ 1:56pm

    Re:

    If the overlords were truly all seeing and benevolent, shouldn't they already know who is a bot without requiring explicit identification?

  11. James Burkhardt (profile), 9 Oct 2018 @ 2:14pm

    Re: Re: Re: Should be easy

    Strangely, you seem to invert my views of a bot entirely.

You claim that an automated process that sends out notifications on my social media accounts in response to my own actions (a new blog or podcast post, for instance) should require my account to be labeled a bot? It posts truthful information I want posted but would otherwise have to manually fat-finger in, and the accounts are otherwise used for content I personally produce. All this because I automate non-social-media content notifications?

It's true that the process is 'pretending to be me', but that's because I tell it to post these things. And it is otherwise me posting. Are you somehow harmed by the knowledge that my blog post tweet is artificial and not homegrown link copy-pasting?

    Your bot example is clearly a bot, just not a content bot. Since it doesn't post anything, it clearly wouldn't apply to this situation at all, but that adds nothing to your case. That complicated automated process is still a 'bot'.

  12. Christenson, 9 Oct 2018 @ 2:20pm

    Re: Re: Re: Should be easy

    Zof:
    It seems to me *you* are a bot...posting nonsense, missing the point of the article, claiming "it's easy".

    In your reply, you will be screaming to the contrary...just like a bot. You probably post from a VPN address...just like a bot.

    Hint: It can be very hard to identify bots, especially the malicious ones, and it will become an on-going cat-and-mouse game. And that's *before* the lines get blurred, like when I write my Farcebook posts with a tweet at the top, and that gets auto-tweeted. That human or a bot there??? Or are you one of 30 sockpuppets of just one GRU agent??

  13. Anonymous Coward, 9 Oct 2018 @ 2:25pm

    Re: Re: Re: Should be easy

What is elaborate about an automation script that, when an upload to YouTube is completed, posts Twitter etc. notifications that a new video is available? That is the sort of thing that is reasonable to automate on a personal stream, along with manual news posts etc.

Also, many an IRC channel runs a bot that catches and responds to FAQs. What about people using Nightbot in YouTube and Twitch chat on their channels? Is its name enough identification?

    Like many things, the benign uses fly under the political radar, and are never considered when a bad, according to politicians, use surfaces.

This issue is nowhere near all black, as you seem to think it is; it includes every shade from white to black.

  14. Anonymous Cowherd, 9 Oct 2018 @ 3:28pm

    Why should bots be differentiated from "ordinary human jackasses" in the first place? A jackass is a jackass is a jackass. What posting mechanics they use, manual or automatic, doesn't change that.

    Just another attempt to suppress speech someone doesn't like.

  15. Gary (profile), 9 Oct 2018 @ 4:03pm

    Re: Should be easy

    If it's easy, then you must have solved the Touring problem and have created a simple test to tell bots from humans.
    Maybe a button for reporting suspected bots to the proper authorities?

  16. Anonymous Coward, 9 Oct 2018 @ 4:19pm

    Re: Re: Should be easy

A touring machine often has GT in its name, and makes touring easy. The Turing problem on the other hand cannot be solved by a Turing machine. :-)

  17. Gary (profile), 9 Oct 2018 @ 5:10pm

    Re: Re: Re: Should be easy

    It's a bot, get em! :)

    My phone is part of the conspiracy, obviously it wanted to change Turing to Touring.

Heh - my phone actually IS a bot. Siri and Autocorrect are both bots according to this California law!

  18. Toom1275 (profile), 9 Oct 2018 @ 6:09pm

    Inb4 Ajit Pai & Co sue CA over this law too. Otherwise, it'll be a little tougher for him to get as much "public support" for his future actions.

  19. Anonymous Coward, 9 Oct 2018 @ 7:07pm

    Re: Re: Re: Re: Should be easy

    idk that autocorrupt is a bot, but there certainly ought to be a law against it.

  20. Anonymous Coward, 9 Oct 2018 @ 7:39pm

    Re: This explains a lot

    Interesting idea.

    If he actually was replaced with a bot, one programmed to endlessly spew out random, incoherent and nonsensical sentence-fragments copy/pasted from alt-right Twitter accounts, how would we tell the difference? He more or less does that already. If anything, he'd presumably come across as slightly more intelligible and less dangerously chaotic than usual.

    More pertinently, if some enterprising, shadowy party were to do this in real life, who exactly would it inconvenience? His removal and replacement would hardly seem to constitute a catastrophic loss for world democracy.

    The more I think about it, the more this sounds like a plan.
    I wonder if we can get this crowdfunded?

    --dg100

  21. Anonymous Coward, 10 Oct 2018 @ 2:27am

    Re: This explains a lot

    or a lizard person

  22. Anonymous Coward, 10 Oct 2018 @ 2:29am

    Re:

    Agreed, I ignore human jackasses so why not bots too

  23. Anonymous Coward, 10 Oct 2018 @ 6:21am

    Re: Re: This explains a lot

    or a lizard person

    I happen to know a few lizardfolk who would make vastly better Presidents than the current inhabitant of the Oval Office, so I think your comparison falls flat.

  24. Bergman (profile), 10 Oct 2018 @ 6:33am

    Re:

    I meet FAR too many people who fail Turing tests...

  25. Bergman (profile), 10 Oct 2018 @ 6:34am

    Re:

    It smacks of old laws that required a criminal to contact the chief of police and notify the chief of their intent to commit crimes before crossing the city limits.

    Nobody ever obeyed those laws except as a prank.

  26. Padpaw (profile), 10 Oct 2018 @ 11:41am

    This would be the same people that have publicly stated that the whole walk away movement is nothing but russian propaganda?

    I believe their system for deciding who a bot is and isn't is quite simple.

    Anyone who disagrees with them is a bot and anyone who agrees with what they say isn't a bot.

  27. btr1701 (profile), 10 Oct 2018 @ 1:25pm

    Terrible People

    > State Senator Robert M. Hertzberg... found himself
    > inundated with bogus attacks by bots, who did everything
    > they could do to scuttle his proposal (note how he didn't
    > clarify how he differentiated between bots and just
    > terrible people).

    So the only options here are bots and terrible people?

    If you opposed the idiotic idea of letting people arrested for serious crimes out of jail until the justice system ground its way around to their trial, you're 'just a terrible person'?

    Or does this apply to whatever Hertzberg proposes in general? Any non-bot who takes issue with him is 'just a terrible person'?

    GTFO


