Cali Lawmakers Pushing For 72-Hour Bot Removal Requirements For Social Media Companies

from the bad-idea,-worse-implementation dept

Following in the footsteps of misguided European lawmakers, California legislators have introduced a time-sensitive "remove speech or else" law targeting social media sites.

They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot, to disclose that it’s a bot if it is in fact auto-generated, or to remove the bot outright.

The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.

This is ridiculous for a number of reasons. First, it assumes the purpose of most bots is to mislead, hence the "need" for upfront disclosure. The ridiculousness of this premise, just one of the bill's many faulty assumptions, is only underscored by a bot created by the legislator behind the bill, Bob Hertzberg. His bot's bio says [emphasis added]:

I am a bot. Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg

Hertzberg's bot must have been made to "misinform and exploit users," at least according to its own Twitter bio. And yet, the account's tweets appear to disseminate actual correct info, like subcommittee webcasts and community-oriented info. It's good the bot is transparent. But it's terrible because the transparency immediately follows a line claiming automated accounts are made apparently solely to misinform people.

Plenty of automated accounts never misinform or exploit users. Techdirt's account automatically tweets each newly published post. So do countless other bots tied into content-management systems. But the bill -- and its creator's own words -- paint bots as evil, even as that creator deploys a bot in an abortive attempt to make a point.
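
To be concrete about how mundane these bots usually are, here is a minimal sketch of a CMS-style announcement bot. It assumes the Python feedparser and tweepy libraries, and the feed URL and credentials are placeholders, not Techdirt's actual setup:

    import time
    import feedparser  # RSS/Atom feed parsing
    import tweepy      # Twitter API client

    FEED_URL = "https://example.com/feed/"  # placeholder feed URL

    client = tweepy.Client(
        consumer_key="...", consumer_secret="...",        # placeholder credentials
        access_token="...", access_token_secret="...",
    )

    announced = set()  # IDs of posts already tweeted

    while True:
        for entry in feedparser.parse(FEED_URL).entries:
            post_id = entry.get("id", entry.link)  # fall back to the link if the feed has no GUIDs
            if post_id not in announced:
                # Announce the new post -- nothing here misinforms or exploits anyone.
                client.create_tweet(text=f"New post: {entry.title} {entry.link}")
                announced.add(post_id)
        time.sleep(300)  # poll every five minutes

Polling a feed and announcing new posts is about as sinister as most real-world automation gets.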

Going on from there, the bill demands sites create a portal for bot reporting and starts the removal clock when a report is made. User reporting may work better than algorithmic detection of bots spreading misinformation (which would amount to putting bots in charge of bot removal), but it still puts social media companies in the uncomfortable position of being arbiters of truth. And if they make the "wrong" decision and leave a bot up, the government is free to punish them for noncompliance.

The bill also provides no avenue for those targeted to challenge a bot report or removal. (And no option for sites to challenge the government's determination that they've failed to remove bots.) This is a key omission which will lead to unchecked abuse.

Finally, there's the motivation for the bill. Some of it stems from a desire to punish "fake news," a term no government has ever clearly defined. Some of it comes from evidence of Russian interference in the last presidential election. But much of the bill's impetus is tied to vague notions of "rightness." Hertzberg himself exhumes a long-dead catchphrase to justify his bill's existence.

"We need to know if we are having debates with real people or if we’re being manipulated," said Democratic State Senator Bob Hertzberg, who introduced the bill. "Right now we have no law and it’s just the Wild West."

So, summary executions of bots by social media posse members? Is that the "Wild West" you mean, one historically notorious for its lack of due process and violent overreactions?

Here's the other excuse for bad lawmaking, via an advocate for terrible legislation.

"California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole,” said Shum Preston, the national director of advocacy and communications at Common Sense Media, a major supporter of Hertzberg’s bill. “We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington."

So, secondhand guilt justifies the direct regulation of third-party service providers? That's almost worse than no reason at all.

And this isn't the only bad bot bill being considered. Assemblymember Marc Levine wants all bots to be tied to verified human beings. The same goes for any online advertising purchases. Levine feels his bill will help fight the bot problem, but his belief is predicated on a profound misunderstanding of human behavior.

By identifying bots, users will be better informed and able to identify whether or not the power of a group’s influence is legitimate. This will mitigate the promulgation of misinformation and influence of unauthentic social media campaigns.

Yes, telling people the stuff they think is legitimate isn't legitimate always results in people ditching "illegitimate" news sources. Especially when that info is coming from a government they don't like presiding over a state many wish would just fall into the ocean. Trying to fight a bot problem largely associated with alt-right groups with legislation from coastal elites is sure to win hearts and minds.

A bot-reporting portal with no recourse provisions -- and a possible "real name" requirement added into the mix -- will become little more than a handy tool for harassment and hecklers. The cost of these efforts will be borne entirely by social media companies, which will also be held responsible for the mere existence of bots the California government feels might be misleading its residents. It's bad lawmaking all around, propelled by misplaced guilt and overstated fears about the democratic process.


Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: automation, bob hertzberg, bots, california, censorship, fake news, marc levine, regulations, social media, speech


Reader Comments



  • Mason Wheeler (profile), 6 Apr 2018 @ 12:18pm

    This is ridiculous for a number of reasons. First, it assumes the purpose of most bots is to mislead, hence the "need" for upfront disclosure.

    Remove a tiny bit of oversimplification and it becomes a whole lot less ridiculous:

    It assumes the purpose of most bots that pretend to be people rather than bots is to mislead

    Not only is this not ridiculous, it's trivially true.


    • Mark Murphy (profile), 6 Apr 2018 @ 1:07pm

      Re:

      Quoting the legislation:

      “Bot” means a machine, device, computer program, or other computer software that is designed to mimic or behave like a natural person such that a reasonable natural person is unable to discern its artificial identity.

      Nobody is hand-flipping bits in a drive when they post to online platforms. They post via software (or, on occasion, butterflies).

      So, when you posted your comment, most likely you used a Web browser. Are you a bot? After all, you did not hand-flip bits in a drive at a Techdirt server. You used a "computer program".

      If you wish to claim that using a Web browser does not make one a bot, then the implication is that the source of the material typed into the Web browser is what determines "bot-ness" (bot-osity? bot-itude?). But I doubt that many of the Russian trolls used artificial intelligence to generate their posts from whole cloth. Rather, most likely, the origin of the posts was human, with software doing things like making mild random alterations, such as word substitution, to help defeat anti-spam measures, along with bulk posting.

      So, where is the dividing line? Does the use of a spell-checker make one a bot? After all, by definition, that spell-checker auto-generated part of the post, substituting words that appear to come from a "natural person". What about mobile social network clients that offer suggested basic replies? Does that make their users bots, if they choose a canned reply, if that canned reply appears to come from a "natural person"? Does retweeting make one a bot?


    • Anonymous Coward, 6 Apr 2018 @ 2:29pm

      Re:

      99.9% of the bots I see are there to advertise to me, redirect me to another site, sell me something, or apply trackers..

      A lie?
      Is worthless to a REASONABLY smart person..
      MOST of the lies we get are based on 1 fact.. NO INPUT.. our gov. isn't telling us ANYTHING.. There is so much BS flying around that it's hard to tell who, what, where, when, how, WHY.. or if it has anything to do with them HOLDING A JOB..


  • Julius Cortes, 6 Apr 2018 @ 12:30pm

    Bot . . . coin.

    Simple is best . . . just make it illegal to deploy any program that hides its real identity to pose as a human.


  • Anonymous Coward, 6 Apr 2018 @ 12:35pm

    Going on from there, the bill demands sites create a portal for bot reporting and starts the removal clock when a report is made.

    And that opens up a new vector of attack on social media sites, and could force them into auto-disabling accounts they cannot examine within the time limit due to the sheer number of reports. It could become a new vector for denial-of-service attacks via botnets.
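
    To make that concrete, here's a minimal sketch of the per-reporter throttling and per-account deduplication a platform would probably need in front of such a portal; the names, limits, and the one-hour window are all hypothetical, not anything in the bill:

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 3600            # hypothetical: only count reports from the last hour
        MAX_REPORTS_PER_REPORTER = 20    # hypothetical per-reporter cap

        recent_reports = defaultdict(deque)  # reporter_id -> timestamps of recent reports
        open_cases = set()                   # accounts already under review

        def accept_bot_report(reporter_id, reported_account, now=None):
            """Return True if the report opens (or joins) a review, False if it's throttled."""
            now = now if now is not None else time.time()
            stamps = recent_reports[reporter_id]

            # Age out reports that fell outside the window.
            while stamps and now - stamps[0] > WINDOW_SECONDS:
                stamps.popleft()

            # Throttle any single reporter (or botnet node) flooding the portal.
            if len(stamps) >= MAX_REPORTS_PER_REPORTER:
                return False
            stamps.append(now)

            # Deduplicate: one open case per reported account, however many reports arrive.
            if reported_account in open_cases:
                return False
            open_cases.add(reported_account)
            return True

    Even with something like that in place, a botnet with enough distinct accounts can still bury a human review queue before the 72-hour clock runs out, which is the point.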


    • Billy Vierra, 8 Apr 2018 @ 6:12am

      Re:

      Business idea... make a bot to report bots that will spin up a new bot and auto report itself...


  • Anonymous Coward, 6 Apr 2018 @ 12:41pm

    Well, it's not part of social media, but if all bots are bad, doesn't that mean search engines (which rely on 'webcrawler' bots) are bad as well?


  • Anonymous Coward, 6 Apr 2018 @ 1:02pm

    Wait, what?

    "Trying to fight a bot problem largely associated with alt-right groups with legislation from coastal elites is sure to win hearts and minds."

    What do bots have to do with the alt-right? That was a bit of a drive-by statement, there. Care to elaborate?


  • Anonymous Coward, 6 Apr 2018 @ 2:01pm

    What are they going to do about those bots employed in public offices?


  • AdvertisersWin, 6 Apr 2018 @ 3:58pm

    Now design, not bots, will drive usage

    This is good for the industry. They just don't know it yet.

    Bots drove fake ad clicks, bots drove fake viewership numbers which relate to ad revenue and how much is paid out.

    This is going to remove a liability for sites and make sites wholly own, wholly responsible for their content or collected works.

    If it's on a site, that site is now entirely responsible. It removes layers of accountability and makes requests or removals the responsibility of the site, not a third party.

    Now the real value propositions come.
    The clicks are reliable, the revenues are based on real value not fake bot accounts clicking links or falsely driving conversations.

    If any social media company is worth their use it will be based on actual users liking the platform design now. Now companies that want to sell, will have a level playing field, advertisers can better gauge actual sales.

    It's a win for companies that want to utilize social media to sell; it's a win-win for advertisers.


  • Anonymous Coward, 6 Apr 2018 @ 4:34pm

    Stupid and the problem is not new

    This isn't even new. Spam has been around for over forty years. Troll farming has been a thing for well over a decade if you count the fifty cent party, for one, much less review writing. Turing testing everyone won't stop the propaganda, much less the utility. The problem is human.

    A slightly less dumb idea would be to meet with industry groups and suggest transparently flagging bot vs. non-bot accounts - especially for services that promote API posting for legitimate usage. Say, if Twitter put blue borders around bot-posted tweets - but that is very much a fig leaf.
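
    As a rough sketch of what that flagging could look like (purely illustrative; the field names and the idea of keying the flag off registered app credentials are assumptions, not how Twitter actually works):

        from dataclasses import dataclass

        @dataclass
        class Post:
            author: str
            text: str
            via_registered_app: bool  # set True when the post arrives via credentials registered as automated

            def render(self) -> str:
                # A client could visibly mark automated posts -- the "blue border" idea.
                label = "[BOT] " if self.via_registered_app else ""
                return f"{label}{self.author}: {self.text}"

        print(Post("examplebot", "Committee webcast starts at 10am", via_registered_app=True).render())

    The hard part, as noted below, is that nothing stops a determined operator from posting through the unflagged path.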

    Even putting aside all of the cynical reasons not to do so (advertisers, inflating apparent user count), securing it would be problematic. Computers can simulate all of a user's input, down to mouse clicks and typing. Throwing captchas everywhere would deter users and make the interface clunkier, making sites reluctant to do so. Even worse, it wouldn't work, given that captcha-solving programs are all over the place, often offering some e-currency or game asset for doing so.

    Really, if you want to fight propaganda, fund critical thinking courses for everyone and offer tax credits or subsidies for taking them; that is harder to circumvent.


  • Anonymous Coward, 6 Apr 2018 @ 5:33pm

    someone was embarrassed when they discovered they were trying to PUA a bot.


  • Anonymous Coward, 6 Apr 2018 @ 6:28pm

    reasons largely political

    Because of the widely-held belief that pro-Trump bots on social media sites swung the 2016 presidential election, it should not surprise anyone that the vehemently anti-Trump state of California would spearhead this effort to crack down on bots of all kinds (as it would look awfully bad to pass a law that only applied to one political party).

    Facebook and Twitter engaged in a massive search-and-destroy mission for pro-Trump (as well as pro-Le Pen) bots, and even bragged to the press about it -- one of the only times these normally secretive (and widely despised) pogroms were ever admitted. As with selective "leaks" to the press, selective rule enforcement by partisan business owners is of course nothing new, as Amazon's Jeff Bezos has demonstrated numerous times when user-posted reviews are selectively tampered with.

    Had Trump lost the election -- and had that loss been blamed on anti-Trump bots -- then there's little doubt that California's politicians would be quite happy with the situation, just as they were when Obama's 2008 win was in part attributed to social media dominance (bots included).


  • Anonymous Coward, 7 Apr 2018 @ 1:02am

    I can live with fake-news. I'll take care of getting myself informed.

    But while we're on the subject of bots, I'd love to see spam calls banned (they are usually made with bots, and if you pick up, a human operator picks up to sell you their shit).

    I find those fucking annoying. ESPECIALLY WHEN THEY CALL YOU AT 2 AM.


  • Anonymous Coward, 7 Apr 2018 @ 5:58am

    Re: use an automated account to mislead the citizens

    So CA just banned advertising?

    No, really. From a technical standpoint, profile-driven advertising is not easy to distinguish from bots. And advertising is by definition deceptive. "Building value" is not distinguishable from "lying to convince some dumb shit that the market value is different than the utility value".

    So there is to be no more advertising in CA? Cool. Moving there. Sounds great!

    Pretty much all IT law passed in the past decade says the same thing:

    "We don't understand WTF is going on, but WHAAA! Us good! Not-us bad! Here is a bunch of arbitrary bullshit that means nothing, that we will summarily use to persecute anyone we don't like."


  • Anonymous Coward, 7 Apr 2018 @ 6:29am

    Politicians exist to misinform and exploit people.


  • krickit (profile), 7 Apr 2018 @ 12:14pm

    Does this mean I'm going to have to go back to writing hundreds of comments over dozens of accounts by hand in order to get my crappy opinions across? So inconvenient.


  • fairuse (profile), 7 Apr 2018 @ 3:13pm

    State Will Get $ Upfront - Speed Camera Trap Mod 4 Bots

    "Once somebody reports an illegally undisclosed bot,(..)", please find clothes for it.

    (begin simulation)
    Bot was declared misleading. It wanted to lead.

    Its job was helpful not damaging to public. Its function was to notify taxpayers when state lawmakers meet; Also, update schedule, alert writers, and return estimate of citizens attending.

    The 72 hour starts ... 0 hour. Now what can be done for this homeless bot?

    Why is the press absent?
    No comments by taxpayers because they were not notified.
    Bills for zoning, water management and council pay raise pass without objection.

    City Hall con game drives this kind of bill, not the general public's well-being.
    (end simulation)


  • Anonymous Coward, 8 Apr 2018 @ 4:01pm

    a state many wish would just fall into the ocean

    K(arma)limate change will probably make sure of that.


  • SilverBlade, 8 Apr 2018 @ 6:41pm

    Dating sites?

    Would this also apply to dating sites to remove fake profiles too?


