Twitter Bot 'Issues' Death Threat, Police Investigate

from the am-I-my-bot's-keeper? dept

We've seen a partial answer to the question: "What happens if my Silk Road shopping bot buys illegal drugs?" In that case, the local police shut down the art exhibit featuring the bot and seized the purchased drugs. What's still unanswered is who -- if anyone -- is liable for the bot's actions.

These questions are surfacing again thanks to a Twitter bot that somehow managed to tweet out a death threat.

This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat.
As van der Goot explained in his tweets (all of which can be viewed at the above link), he was contacted by an "internet detective" who had somehow managed to come across this bot's tweet in his investigative work. (As opposed to being contacted by a concerned individual who had spotted the tweet.)

So, van der Goot had to explain how his bot worked. The bot (which was actually created by another person but "owned" by van der Goot) reassembles chunks of his past tweets, hopefully into something approaching coherence. On this occasion, it not only managed to put together a legitimate sentence, but also one threatening enough to attract the interest of local law enforcement.
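
Van der Goot hasn't published the bot's internals, but bots of this sort are typically little more than a small Markov chain built over the owner's tweet archive: index which words tend to follow which fragments, then walk the chain to emit a recombined sentence. Here is a minimal sketch of that idea in Python; the function names and the stand-in tweet archive are illustrative assumptions, not van der Goot's actual code:

    # A hypothetical ebooks-style bot: build a word-level Markov chain from past
    # tweets and recombine fragments into new sentences. Illustrative only.
    import random
    from collections import defaultdict

    def build_chain(tweets, order=2):
        # Map each `order`-word fragment to the words that followed it somewhere.
        chain = defaultdict(list)
        for tweet in tweets:
            words = tweet.split()
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, max_words=25):
        # Start from a random fragment and keep appending plausible next words.
        words = list(random.choice(list(chain.keys())))
        while len(words) < max_words:
            followers = chain.get(tuple(words[-order:]))
            if not followers:
                break
            words.append(random.choice(followers))
        return " ".join(words)

    # Stand-in tweet archive; a real bot would pull the owner's timeline instead.
    archive = [
        "off to the fashion event tonight hope it does not kill me",
        "this deadline is going to be the death of me",
        "people at this event are really testing my patience",
    ]
    print(generate(build_chain(archive)))

Most of what falls out of a generator like this is harmless nonsense; the trouble is that, given enough of someone's words, it will occasionally recombine them into a sentence the owner never intended -- which is what appears to have happened here.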

The explanation didn't manage to completely convince the police of the bot's non-nefariousness. They ordered van der Goot to shut down the account and remove the "threatening" tweet. But it was at least convincing enough that van der Goot isn't facing charges for "issuing" a threat composed of unrelated tweets. The investigator could have easily decided that van der Goot's explanation was nothing more than a cover story for tweets he composed and issued personally, using a bot account to disguise their origin.

The shutdown of the account was most likely for law enforcement's peace of mind -- preventing the very occasionally evil bot from cobbling together algorithmically-derived threats sometime in the future. It's the feeling of having "done something" about an incident that seems alarming at first, but decidedly more banal and non-threatening by the end of the investigation.

The answer to the question of who is held responsible when algorithms "go bad" appears to be -- in this case -- the person who "owns" the bot. Van der Goot didn't create the bot, nor did he alter its algorithm, but he was ultimately ordered to kill it off. This order was presumably issued in the vague interest of public safety -- even though there's no way van der Goot could have stacked the deck in favor of bot-crafted threats without raising considerable suspicion in the Twitter account his bot drew from.

There will be more of this in the future and the answers will continue to be unsatisfactory. Criminal activity is usually tied to intent, but with algorithms sifting through data detritus and occasionally latching onto something illegal, that lynchpin of criminal justice seems likely to be the first consideration removed. That doesn't bode well for the bot crafters of the world, whose creations may occasionally return truly unpredictable results. Law enforcement officers seem to have problems wrapping their minds around lawlessness unmoored from the anchoring intent. In van der Goot's case, it resulted in only the largely symbolic sacrifice of his bot. For others, it could turn out much worse.


Filed Under: autonomous computing, bots, death threats, investigation, jeffry van der goot, police, tweets


Reader Comments



  • Michael, 12 Feb 2015 @ 9:32am

    It came from the internet, so it is actually Google's fault.


  • JustShutUpAndObey, 12 Feb 2015 @ 9:35am

    I think this is already settled

    The principle of civil asset forfeiture historically derives from what to do when somebody's ox gores a neighbor.
    The ox (the property) is judged guilty rather than its owner. The authorities seize and dispose of/punish/deal with the ox.
    A bot seems no different than an ox.


  • Anonymous Coward, 12 Feb 2015 @ 9:36am

    So, what did it say?


  • Anonymous Coward, 12 Feb 2015 @ 9:41am

    "That doesn't bode well for the bot crafters of the world"

    Actually, the bot author made out fairly well here. It was the poor operator running the bot who got in trouble with law enforcement.


  • Anonymous Coward, 12 Feb 2015 @ 9:59am

    Alternate Title

    "but he was ultimately ordered to kill it off."

    Alternate Title: Police Raid and Kill Unarmed Robot.


  • Anonymous Coward, 12 Feb 2015 @ 10:03am

    In america...

    Two words: True Threat

    Two more words: Prior Restraint


    • Anonymous Coward, 12 Feb 2015 @ 1:21pm

      Re:

      Does prior restraint apply when the defense is "I had no control over that speech"? How can the First Amendment be implicated when the speech in question is disavowed?

      Bots don't have free speech rights. People do. I don't think you can simultaneously claim that shutting down the bot is prior restraint, AND that the user had no control over what was said.

      (Ignoring that this was in the Netherlands, of course, where the First Amendment doesn't apply. Also ignoring that he was apparently asked - not ordered - to shut down the account.)


      • John Fenderson (profile), 12 Feb 2015 @ 1:32pm

        Re: Re:

        "Bots don't have free speech rights. People do."

        If corporations can have free speech rights, then why not bots? There's not a huge amount of difference between the two, really.


        • Anonymous Coward, 12 Feb 2015 @ 1:49pm

          Re: Re: Re:

          It's actually very similar, yes. In either case, it's not the bots or corporations that *really* have the rights, but the human owners and operators which are just *using* the bot or corporation to speak.

          If a bot was programmed to randomly tweet from a list of political messages that the owner agreed with, the bot would undoubtedly be protected speech. Not because the bot itself really has any rights, but because the person operating the bot has the right to use the bot to further his speech.


  • JoeCool (profile), 12 Feb 2015 @ 10:42am

    Of course it's the owner's responsibility

    Just like a dog attack. You didn't create the dog - you bought it from a store. You didn't train the dog - you paid someone else to do that. But when it rips the face off a toddler, you're the one to pay any damages and put the dog down.


    • John Fenderson (profile), 12 Feb 2015 @ 11:02am

      Re: Of course it's the owner's responsibility

      Whether or not you trained the attack dog, when it attacks then it's doing exactly what you intended it to do. That's a bit different than what this bot did.

      Also, are you really equating bodily harm with a twitter message?


      • Nastybutler77 (profile), 12 Feb 2015 @ 11:15am

        Re: Re: Of course it's the owner's responsibility

        Sticks and stones may break my bones, but bot tweets are repugnant and must be stopped at all costs.


      • PRMan, 12 Feb 2015 @ 11:22am

        Re: Re: Of course it's the owner's responsibility

        I have a sweet dog that I got at a shelter. He loves us to death.

        At no point did they tell me that it was part pit bull.

        In any event, he doesn't like people with tattoos or that smoke. Since his previous owner was locked up on drug charges, I'm going to guess that he was sometimes abused by people with tattoos that smoke.

        Him attacking people is NEVER what I intend for him to do. But, nevertheless, he will attack anyone he perceives as being "evil".

        And yet, if he attacks someone, I am still responsible, even though the shelter lied to me about his breeding.


        • John Fenderson (profile), 12 Feb 2015 @ 12:38pm

          Re: Re: Re: Of course it's the owner's responsibility

          Breeding does not make a dog into an attack dog. Training does that. Or abuse, which can act as training.


      • tqk (profile), 12 Feb 2015 @ 1:47pm

        Re: Re: Of course it's the owner's responsibility

        "Also, are you really equating bodily harm with a twitter message?"

        Apparently the police did.


      • Anonymous Coward, 12 Feb 2015 @ 2:09pm

        Re: Re: Of course it's the owner's responsibility

        "Also, are you really equating bodily harm with a twitter message?"

        I see nothing wrong with the analogy. I made a similar one in a previous article about bot liability (although in mine, the dog only harmed chickens.) In this case, the bot made a threat to harm someone, so comparing it to actual harm is not out of line.

        "when it attacks then it's doing exactly what you intended it to do."

        Whoa. You think that dogs never attack when their owners don't want them to? He also didn't say "an attack dog", he said "a dog attack". That's like calling the bot here a "threat bot" instead of calling what happened a "bot threat." Changing the word order here matters.


        • John Fenderson (profile), 13 Feb 2015 @ 8:21am

          Re: Re: Re: Of course it's the owner's responsibility

          "In this case, the bot made a threat to harm someone"

          No, it did not. To make a threat requires intent. The bot had no such intent, it was just stringing random phrases together. It was certainly not a threat.


          • Anonymous Coward, 13 Feb 2015 @ 11:41am

            Re: Re: Re: Re: Of course it's the owner's responsibility

            Well, OK, the bot had no intent. But if it wasn't clear that the bot was a bot, then people wouldn't KNOW that and it would be reasonable for them to feel threatened.


            • John Fenderson (profile), 13 Feb 2015 @ 3:21pm

              Re: Re: Re: Re: Re: Of course it's the owner's responsibility

              I suppose that it might be reasonable for someone to feel threatened -- it's hard to tell, since I can't find the actual "threatening" tweet. But that someone felt threatened shouldn't be (and isn't in the US) the sole point that determines if something is a threat or not.


              • tqk (profile), 13 Feb 2015 @ 4:24pm

                Re: Re: Re: Re: Re: Re: Of course it's the owner's responsibility

                "But that someone felt threatened shouldn't be (and isn't in the US) the sole point that determines if something is a threat or not."

                You might think that, but that's not how LEOs think today. Now, they go by "better safe than sorry." Yeah, he got off, but he's likely out of a job now. Be careful out there.


    • PaulT (profile), 12 Feb 2015 @ 11:35pm

      Re: Of course it's the owner's responsibility

      I see the point you're driving at, but it's a very flawed analogy in this case.

      The words created and tweeted by the bot are only a threat coming from someone capable of carrying out that threat. A twitter bot cannot manufacture and place a bomb according to its threat, so the words are meaningless in that context.

      So, by your analogy, it's not that the dog attacked someone, it's that someone interpreted the way it barked as being an imminent threat despite the fact that it was secured in a place where it could not attack. It might have scared the toddler, but that's all the harm it was capable of doing.


      • Anonymous Coward, 13 Feb 2015 @ 11:31am

        Re: Re: Of course it's the owner's responsibility

        "The words created and tweeted by the bot are only a threat coming from someone capable of carrying out that threat. A twitter bot cannot manufacture and place a bomb according to its threat, so the words are meaningless in that context."

        That's not QUITE the case. If I mail a white powder to an enemy, it doesn't matter that it's not anthrax and I have no idea how to obtain anthrax. It's still a threat, because the person on the other end doesn't know that.

        It's like if the dog is behaving like it's about to attack but it's behind an invisible fence. The passerby would have every reason to be concerned because they don't *know* that the dog can't escape the yard.

        So the question becomes: how obvious was it that this was a bot?


        • PaulT (profile), 14 Feb 2015 @ 3:00am

          Re: Re: Re: Of course it's the owner's responsibility

          "If I mail a white powder to an enemy, it doesn't matter that it's not anthrax and I have no idea how to obtain anthrax."

          Still a crappy analogy. You would have had to deliberately put white powder in a box, mail it knowing that white powder is suspicious, deliberately mail it to a specific person, etc. This is nothing like that - it's merely words, randomly generated ones at that, it seems.

          "So the question becomes: how obvious was it that this was a bot?"

          I don't know, since the account had been deleted and I can't investigate it. Regardless, I'm not saying it should not have been investigated, only that these analogies are hideously bad.


  • Anonymous Coward, 12 Feb 2015 @ 10:52am

    So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray.


    • PRMan, 12 Feb 2015 @ 11:23am

      Re:

      Infinitely easier than evolution. Start it up and let us know when it finishes...


    • DogBreath, 12 Feb 2015 @ 12:15pm

      Re:

      and Skynet. Never forget we are just a bot tweet away from Skynet.

      That is a major reason why there is an Unclassified and Classified network in the military. Plug your Unclassified thumbdrive into the Classified network and you could unleash a bot not only able to create bomb threats, but also able to carry them out, with ICBM nukes. No need for a super intelligent A.I.

      P.S. Don't trust the silicon diode, and we should be OK.


    • Gwiz (profile), 12 Feb 2015 @ 12:22pm

      Re:

      "So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray."

      Nah. It would just calculate for 7.5 million years and then spit out an answer of 42.


    • The Bot, 13 Feb 2015 @ 7:42am

      Re:

      "So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray."

      Might get the lyrics to 50 Cent at worst.


  • Anonymous Coward, 12 Feb 2015 @ 11:19am

    The answer to the question of who is held responsible when algorithms "go bad" appears to be -- in this case -- the person who "owns" the bot.

    And in other cases, say cases where something bad can really happen (as in impact negatively other people's life), like stock exchange... the answer seems to be "nobody"...


  • Sarah Connor, 12 Feb 2015 @ 11:30am

    Totally disagree with you!

    I totally disagree with you. First, bots make threatening tweets, then they get access to nukes, the human race becomes hunted by these bots and the next thing you know, we're sending people back in time to stop those tweets from ever happening!

    That hero of a detective may have just stopped Sky.Net before it ever gained sentience!


  • Anonymous Hero, 12 Feb 2015 @ 12:18pm

    Capital punishment

    So, is deleting the bot the functional equivalent of capital punishment?


    • DogBreath, 12 Feb 2015 @ 12:31pm

      Re: Capital punishment

      Yes, but with all the benefits of the Humanoid Cylons:

      (1) Clones (more than one copy)

      (2) Reincarnation (backups of originals that have expanded their learning databases).


  • Anonymous Coward, 12 Feb 2015 @ 1:38pm

    Forget Skynet....

    ...when a hacker puts something like this into (insert any institution here)'s system and it starts sending out such threatening messages then I'll be worried. And as already mentioned the system's owner will still be responsible.


  • Padpaw (profile), 12 Feb 2015 @ 1:49pm

    I suspect he will be arrested for resisting arrest and will accidentally fall down the stairs, if the police decide to harass him over this as they fear for their safety.


  • Anonymous Coward, 12 Feb 2015 @ 4:49pm

    I want to know what the threat was that got the police interested.


  • Anonymous Coward, 12 Feb 2015 @ 9:16pm

    Random Words

    If random numbers can be illegal, why not random words?


  • Joe Blough, 13 Feb 2015 @ 5:30am

    Nobody wants to reprint the bot's bomb tweet? Cowards!

    Has been asked a few times in these comments -> Where is the offending tweet? The linked fusion.net story DOES NOT include a reprint of the tweet in question.

    Why are we all being cowards for not reposting the tweet as part of a critical discussion of this phenomenon?


    • PaulT (profile), 13 Feb 2015 @ 6:00am

      Re: Nobody wants to reprint the bot's bomb tweet? Cowards!

      From the linked story:

      "He is not identifying the bot and says he has deleted it, per the request of the police"

      So, the tweet is no longer publicly visible and the author is not telling anyone which account was used. Unless someone happened to take a screenshot when it was up, it will be hard to get one - although if this did go to court it would presumably become public knowledge at that point.

      Nobody's being a "coward", they're just running with the information available. I'm sure that if/when the data becomes available it will be reported on.


  • google cars!, 13 Feb 2015 @ 7:03am

    google cars!

    So if my google- car kills my neighbour's kid/pet/grandma...

    do I have to kill it?
    How exactly do the police expect to have it killed? Only in a bureaucratic- expensive- government- approved- robot- recycling facility?
    Are these fees covered by the insurance?
    Or by Google?
    Do I get my money back from Google?
    Or do I just get a new car from Google (with the new firmware)?
    Do all the cars that share the same firmware as my car have to be recalled too?


  • twitchviewerbot, 27 Mar 2015 @ 9:20pm

    Bots Ain't Folks

    The case law discussion shows how outdated the precedents are involving the status of bots or related apps. And what if the bot was open source? Who could be sanctioned for malware or theft outcomes in that case?


