Twitter Bot 'Issues' Death Threat, Police Investigate
from the am-I-my-bot's-keeper? dept
We've seen a partial answer to the question "what happens if my Silk Road shopping bot buys illegal drugs?" In that case, the local police shut down the art exhibit featuring the bot and seized the purchased drugs. What's still unanswered is who -- if anyone -- is liable for the bot's actions.
These questions are surfacing again thanks to a Twitter bot that somehow managed to tweet out a death threat.
This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat. As van der Goot explained in his tweets (all of which can be viewed at the above link), he was contacted by an "internet detective" who had somehow come across the bot's tweet in the course of his investigative work. (As opposed to being contacted by a concerned individual who had spotted the tweet.)
So, van der Goot had to explain how his bot worked. The bot (which was actually created by another person but "owned" by van der Goot) reassembles chunks of his past tweets, hopefully into something approaching coherence. On this occasion, it not only managed to put together a legitimate sentence, but also one threatening enough to attract the interest of local law enforcement.
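The bot's actual code isn't public, but the behavior described -- recombining fragments of past tweets into occasionally coherent sentences -- matches a simple Markov-chain text generator. A minimal sketch of that technique (all names and structure here are my own illustration, not the bot's real implementation):

```python
import random

def build_chain(tweets):
    """Map each word to the list of words that followed it across all tweets."""
    chain = {}
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain.setdefault(current, []).append(following)
    return chain

def generate(chain, max_words=12, seed=None):
    """Walk the chain from a random start word, stitching fragments of
    different tweets together until a dead end or the word limit."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break  # no tweet ever continued past this word
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# Example: fragments of two harmless tweets can splice into a new sentence
corpus = ["I want to see that movie", "police want to talk to me"]
print(generate(build_chain(corpus), seed=42))
```

Because the walk can jump between source tweets at any shared word, the output is a mashup neither tweet's author ever wrote -- which is exactly how an innocuous archive can, very rarely, yield something that reads as a threat.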
The explanation didn't manage to completely convince the police of the bot's non-nefariousness. They ordered van der Goot to shut down the account and remove the "threatening" tweet. But it was at least convincing enough that van der Goot isn't facing charges for "issuing" a threat composed of unrelated tweets. The investigator could have easily decided that van der Goot's explanation was nothing more than a cover story for tweets he composed and issued personally, using a bot account to disguise their origin.
The shutdown of the account was most likely for law enforcement's peace of mind -- preventing the very occasionally evil bot from cobbling together algorithmically-derived threats sometime in the future. It's the feeling of having "done something" about an incident that seems alarming at first, but decidedly more banal and non-threatening by the end of the investigation.
The answer to the question of who is held responsible when algorithms "go bad" appears to be -- in this case -- the person who "owns" the bot. Van der Goot didn't create the bot, nor did he alter its algorithm, but he was ultimately ordered to kill it off. This order was presumably issued in the vague interest of public safety -- even though there's no way van der Goot could have stacked the deck in favor of bot-crafted threats without raising considerable suspicion in the Twitter account his bot drew from.
There will be more of this in the future, and the answers will continue to be unsatisfactory. Criminal liability is usually tied to intent, but with algorithms sifting through data detritus and occasionally latching onto something illegal, that linchpin of criminal justice seems likely to be the first consideration removed. That doesn't bode well for the bot crafters of the world, whose creations may occasionally return truly unpredictable results. Law enforcement officers seem to have trouble wrapping their minds around lawlessness unmoored from that anchoring intent. In van der Goot's case, it resulted in only the largely symbolic sacrifice of his bot. For others, it could turn out much worse.
Filed Under: autonomous computing, bots, death threats, investigation, jeffry van der goot, police, tweets