Twitter Bot 'Issues' Death Threat, Police Investigate
from the am-I-my-bot's-keeper? dept
We've seen a partial answer to the question: "What happens if my Silk Road shopping bot buys illegal drugs?" In that case, the local police shut down the art exhibit featuring the bot and seized the purchased drugs. What's still unanswered is who -- if anyone -- is liable for the bot's actions.
These questions are surfacing again thanks to a Twitter bot that somehow managed to tweet out a death threat.
This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat.
As van der Goot explained in his tweets (all of which can be viewed at the above link), he was contacted by an "internet detective" who had somehow managed to come across this bot's tweet in his investigative work. (As opposed to being contacted by a concerned individual who had spotted the tweet.)
So, van der Goot had to explain how his bot worked. The bot (which was actually created by another person but "owned" by van der Goot) reassembles chunks of his past tweets, hopefully into something approaching coherence. On this occasion, it not only managed to put together a legitimate sentence, but also one threatening enough to attract the interest of local law enforcement.
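The bot's code isn't public, but the description above -- remixing fragments of past tweets into new, hopefully coherent ones -- matches a simple Markov-chain text generator. Below is a minimal sketch of that technique, assuming a word-level chain; the function names, chain order, and sample corpus are illustrative assumptions, not the actual bot's implementation.

```python
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each run of `order` words to the words that followed it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, max_words=20):
    """Start from a random key and walk the chain, producing a new 'tweet'."""
    words = list(random.choice(list(chain.keys())))
    while len(words) < max_words:
        followers = chain.get(tuple(words[-order:]))
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Hypothetical corpus standing in for the owner's past tweets.
past_tweets = [
    "I seriously need to get out to an event this week",
    "some people at that event were seriously the worst",
    "I need to stop reading the news this week",
]
chain = build_chain(past_tweets, order=2)
print(generate(chain, order=2))
```

Because every fragment in the output comes from tweets the owner actually wrote, the remix occasionally snaps into a perfectly coherent sentence -- which is how an otherwise harmless toy can stumble into wording that reads like a threat.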
The explanation didn't manage to completely convince the police of the bot's non-nefariousness. They ordered van der Goot to shut down the account and remove the "threatening" tweet. But it was at least convincing enough that van der Goot isn't facing charges for "issuing" a threat composed of unrelated tweets. The investigator could have easily decided that van der Goot's explanation was nothing more than a cover story for tweets he composed and issued personally, using a bot account to disguise their origin.
The shutdown of the account was most likely for law enforcement's peace of mind -- preventing the very occasionally evil bot from cobbling together algorithmically derived threats sometime in the future. It's the feeling of having "done something" about an incident that seems alarming at first, but decidedly more banal and non-threatening by the end of the investigation.
The answer to the question of who is held responsible when algorithms "go bad" appears to be -- in this case -- the person who "owns" the bot. Van der Goot didn't create the bot, nor did he alter its algorithm, but he was ultimately ordered to kill it off. This order was presumably issued in the vague interest of public safety -- even though there's no way van der Goot could have stacked the deck in favor of bot-crafted threats without raising considerable suspicion in the Twitter account his bot drew from.
There will be more of this in the future and the answers will continue to be unsatisfactory. Criminal activity is usually tied to intent, but with algorithms sifting through data detritus and occasionally latching onto something illegal, that lynchpin of criminal justice seems likely to be the first consideration removed. That doesn't bode well for the bot crafters of the world, whose creations may occasionally return truly unpredictable results. Law enforcement officers seem to have problems wrapping their minds around lawlessness unmoored from the anchoring intent. In van der Goot's case, it resulted in only the largely symbolic sacrifice of his bot. For others, it could turn out much worse.
Filed Under: autonomous computing, bots, death threats, investigation, jeffry van der goot, police, tweets
Reader Comments
I think this is already settled
The ox (the property) is judged guilty rather than its owner. The authorities seize and dispose of/punish/deal with the ox.
A bot seems no different than an ox.
Actually, the bot author made out fairly well here. It was the poor operator running the bot who got in trouble with law enforcement.
Re: I think this is already settled
Where this gets even trickier is when you move things over into the physical world -- what about self-driving cars? If one harms a person, do we destroy the car? Give it to the victim?
Alternate Title
Alternate Title: Police Raid and Kill Unarmed Robot.
Two words: True Threat
Two more words: Prior Restraint
Of course it's the owner's responsibility
Re: Of course it's the owner's responsibility
Also, are you really equating bodily harm with a twitter message?
And in other cases, say cases where something bad can really happen (as in, negatively impact other people's lives), like the stock exchange... the answer seems to be "nobody"...
Re: Re: Of course it's the owner's responsibility
I adopted my dog from a shelter. At no point did they tell me he was part pit bull.
In any event, he doesn't like people with tattoos or people who smoke. Since his previous owner was locked up on drug charges, I'm going to guess that he was sometimes abused by people with tattoos who smoke.
Him attacking people is NEVER what I intend for him to do. But, nevertheless, he will attack anyone he perceives as being "evil".
And yet, if he attacks someone, I am still responsible, even though the shelter lied to me about his breeding.
Totally disagree with you!
That hero of a detective may have just stopped Sky.Net before it ever gained sentience!
Re: Re: I think this is already settled
As Techdirt has pointed out a few times, police have already brought countless legal actions against assets rather than against the assets' owners. Like (actual case): United States v. Article Consisting of 50,000 Cardboard Boxes More or Less, Each Containing One Pair of Clacker Balls. And of course against people's homes, cars, bank accounts, etc.
The US Government sues the item of property, not the person; the owner is effectively a third-party claimant. This does away with any annoying "presumption of innocence" and other rights.
And since it's well established that police can seize assets based on dubious suspicions or common-sense advice from a bank, there's no need to wait until a car harms a person. Fast car? Obviously it's meant for speeding.
Re:
That is a major reason why there is an Unclassified and a Classified network in the military. Plug your Unclassified thumbdrive into the Classified network and you could unleash a bot not only able to create bomb threats, but also able to carry them out, with ICBM nukes. No need for a super-intelligent A.I.
P.S. Don't trust the silicon diode, and we should be OK.
Capital punishment
Re: I think this is already settled
Can you be threatened or damaged by a non-entity? A program is the brainchild of the developer. If their child breaks the law, are they responsible for their crimes? If the end user has to surrender their copy of the bot, can they just download another? The new iteration would be completely innocent of their brother's crime. Can the user or programmer ever be said to be responsible for actions of a program that are essentially random?
Re:
Nah. It would just calculate for 7.5 million years and then spit out an answer of 42.
Re: Capital punishment
(1) Clones (more than one copy)
(2) Reincarnation (backups of originals that have expanded their learning databases).
Re:
Bots don't have free speech rights. People do. I don't think you can simultaneously claim that shutting down the bot is prior restraint, AND that the user had no control over what was said.
(Ignoring that this was in the Netherlands, of course, where the First Amendment doesn't apply. Also ignoring that he was apparently asked - not ordered - to shut down the account.)
Re: Re:
If corporations can have free speech rights, then why not bots? There's not a huge amount of difference between the two, really.
Forget Skynet....
Re: Re: Of course it's the owner's responsibility
Apparently the police did.
Re: Re: Re:
If a bot was programmed to randomly tweet from a list of political messages that the owner agreed with, the bot would undoubtedly be protected speech. Not because the bot itself really has any rights, but because the person operating the bot has the right to use the bot to further his speech.
Re: Re: Of course it's the owner's responsibility
I see nothing wrong with the analogy. I made a similar one in a previous article about bot liability (although in mine, the dog only harmed chickens.) In this case, the bot made a threat to harm someone, so comparing it to actual harm is not out of line.
Whoa. You think that dogs never attack when their owners don't want them to? He also didn't say "an attack dog", he said "a dog attack". That's like calling the bot here a "threat bot" instead of calling what happened a "bot threat." Changing the word order here matters.
Random Words
Re: Of course it's the owner's responsibility
The words created and tweeted by the bot are only a threat if they come from someone capable of carrying that threat out. A Twitter bot cannot manufacture and place a bomb according to its threat, so the words are meaningless in that context.
So, by your analogy, it's not that the dog attacked someone, it's that someone interpreted the way it barked as being an imminent threat despite the fact that it was secured in a place where it could not attack. It might have scared the toddler, but that's all the harm it was capable of doing.
Nobody wants to reprint the bot's bomb tweet? Cowards!
Why are we all being cowards for not reposting the tweet as part of a critical discussion of this phenomenon?
Re: Nobody wants to reprint the bot's bomb tweet? Cowards!
"He is not identifying the bot and says he has deleted it, per the request of the police"
So, the tweet is no longer publicly visible and the author is not telling anyone which account was used. Unless someone happened to take a screenshot when it was up, it will be hard to get one - although if this did go to court it would presumably become public knowledge at that point.
Nobody's being a "coward", they're just running with the information available. I'm sure that if/when the data becomes available it will be reported on.
google cars!
Do I have to kill it?
How exactly do the police expect me to have it killed? Only at a bureaucratic, expensive, government-approved robot-recycling facility?
Are those fees covered by insurance?
Or by Google?
Do I get my money back from Google?
Or do I just get a new car from Google (with new firmware)?
Do all the cars that share the same firmware as my car have to be recalled too?
Re:
Might get the lyrics to 50 Cent at worst.
Re: Re: Re: Of course it's the owner's responsibility
No, it did not. To make a threat requires intent. The bot had no such intent, it was just stringing random phrases together. It was certainly not a threat.
Re: Re: Of course it's the owner's responsibility
That's not QUITE the case. If I mail a white powder to an enemy, it doesn't matter that it's not anthrax and I have no idea how to obtain anthrax. It's still a threat, because the person on the other end doesn't know that.
It's like if the dog is behaving like it's about to attack but it's behind an invisible fence. The passerby would have every reason to be concerned because they don't *know* that the dog can't escape the yard.
So the question becomes: how obvious was it that this was a bot?
Re: Re: Re: Re: Re: Re: Of course it's the owner's responsibility
You might think that, but that's not how LEOs think today. Now, they go by "better safe than sorry." Yeah, he got off, but he's likely out of a job now. Be careful out there.
Re: Re: Re: Of course it's the owner's responsibility
Still a crappy analogy. You would have had to deliberately put white powder in a box, mail it knowing that white powder is suspicious, deliberately mail it to a specific person, etc. This is nothing like that - it's merely words, randomly generated ones at that, it seems.
"So the question becomes: how obvious was it that this was a bot?"
I don't know, since the account had been deleted and I can't investigate it. Regardless, I'm not saying it should not have been investigated, only that these analogies are hideously bad.
Bots Ain't Folks