from the let's-think-this-through dept
Let me start out by saying that I think online harassment and bullying are a significant problem -- though also one that is often misrepresented and distorted. I worry about the very real consequences for those who are bullied, harassed and threatened online: it can silence voices that need to be heard, or scare some people away from participating at all. That said, way too frequently, those speaking out about online bullying assume that the best way to deal with it is to push for censorship as the solution. This rarely works. Too frequently we see "cyberbullying" used as a catchall for attacking speech people simply do not like. Even here at Techdirt, people who dislike our viewpoint will frequently claim that we "bullied" someone, merely for pointing out and discussing statements or arguments that we find questionable.
There are no easy answers to the question of how to create spaces where people feel safer speaking their minds -- though I think it's an important goal to strive for. But I fear the seemingly simple idea of "silence those accused of bullying" will have incredibly negative consequences (with almost none of the expected benefits). We already see many attempts to censor speech that people dislike online, with frequent cases of abusive copyright takedown notices or bogus claims of defamation. Hand people yet another tool to silence such speech, and it will be widely abused, creating tremendous damage.
We already see this in the form of ContentID from YouTube. A tool that was created with good intent, to deal with copyright infringement on the site, is all too often used to silence speech instead, whether deliberately to take down a critic or simply through over-aggressive robots.
So, imagine what a total mess it would be if we had a ContentID for online bullying. And yet, it appears that the good folks at SRI are trying to build exactly that. Now, SRI certainly has led the way with many computing advancements, but it's not clear to me how this solution could possibly do anything other than create new headaches:
But what if you didn’t need humans to identify when online abuse was happening? If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job. At SRI International, the Silicon Valley incubator where Apple’s Siri digital assistant was born, researchers believe they’ve developed algorithms that come close to doing just that.
“Social networks are overwhelmed with these kinds of problems, and human curators can’t manage the load,” says Norman Winarsky, president of SRI Ventures. But SRI is developing an artificial intelligence with a deep understanding of how people communicate online that he says can help.
This is certainly going to sound quite appealing to those who push anti-cyberbullying campaigns. But at what cost? Again, there are legitimate concerns about people who are being harassed. But one person's cyberbullying could just be another person's aggressive debate tactics. Hell, I'd argue that abusing tools like ContentID or filing false defamation claims is a form of "cyberbullying" as well. Thus, it's quite possible that the same would be true of this new tool, which could itself be used to "bully" whoever the algorithm decides is a bully.
Determining copyright infringement is already much more difficult than people imagine -- which is why ContentID makes so many errors. You have to take into account context, fair use, de minimis use, parody, etc. That's not easy for a machine. But at least there are some direct rules about what truly is "copyright infringement." With "bullying" or "harassment," there is no clear legal definition to match against, and it's often very much in the eye of the beholder. As such, any tool used to "deal with" cyberbullying is going to create tremendous problems, often just from misunderstandings between people. And that could create a real chilling effect on speech.
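To make the point concrete, here's a minimal sketch (in Python) of the kind of keyword-scoring filter that automated "harassment detection" often boils down to in practice. The word list, threshold and function name are all hypothetical, invented purely for illustration -- this is emphatically not SRI's system -- but it shows how quickly context disappears:

```python
# A minimal, hypothetical sketch of keyword-based "bullying" detection.
# The lexicon and threshold below are invented purely for illustration.

ABUSIVE_TERMS = {"idiot", "stupid", "loser", "pathetic"}  # hypothetical word list
THRESHOLD = 2  # hypothetical cutoff: two or more hits and the message is flagged

def looks_like_bullying(message: str) -> bool:
    """Flag a message if it contains enough words from the abusive lexicon."""
    words = (w.strip('.,!?"\'') for w in message.lower().split())
    hits = sum(1 for w in words if w in ABUSIVE_TERMS)
    return hits >= THRESHOLD

# A genuinely abusive message gets flagged...
print(looks_like_bullying("You are a pathetic loser and everyone knows it"))  # True

# ...but so does someone quoting the abuse in order to condemn it.
# The scorer sees the words; it cannot see the context.
print(looks_like_bullying('He called her a "pathetic loser" -- exactly the harassment we should condemn'))  # True
```

A real system would no doubt be far more sophisticated than this toy, but the underlying problem remains: without a clear definition of what counts as bullying, whatever the model learns to flag will sweep up criticism, quotation and heated debate along with actual abuse.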
Perhaps instead of focusing so much technical know-how on "detecting" and trying to "block" cyberbullying, we should be spending more time looking for ways to positively reinforce good behavior online. We've built up this belief that the only way to encourage good behavior online is to punish bad behavior. But we have enough evidence at this point of how rarely that actually works that it seems like it's time for a different approach. And a "ContentID for harassment" seems unlikely to help.
Filed Under: automation, bullying, contentid, cyberbullying, harassment
Companies: sri