A ContentID For Online Bullying? What Could Possibly Go Wrong...
from the let's-think-this-through dept
Let me start out by saying that I think online harassment and bullying are a significant problem -- though also one that is often misrepresented and distorted. I worry about the very real consequences for those who are bullied, harassed and threatened online: it can silence voices that need to be heard, or scare some people out of participating at all for fear of the resulting bullying. That said, way too frequently, those who speak out about online bullying assume that the best way to deal with it is to push for censorship as the solution. This rarely works. Too frequently we see "cyberbullying" used as a catchall for attacking speech people simply do not like. Even here at Techdirt, people who dislike our viewpoint will frequently claim that we "bullied" someone, merely for pointing out and discussing statements or arguments that we find questionable.
There are no easy answers to the question of how we create spaces where people feel safer speaking their minds -- though I think it's an important goal to strive for. But I fear that the seemingly simple idea of "silence those accused of bullying" will have incredibly negative consequences (with almost none of the expected benefits). We already see many attempts to censor speech that people dislike online, with frequent cases of abusive copyright takedown notices and bogus claims of defamation. Giving people an additional tool to silence such speech will be abused widely, creating tremendous damage.
We already see this in the form of YouTube's ContentID. A tool that was created with good intent, to deal with copyright infringement, is all too often used to suppress speech instead, whether to silence a critic or simply through over-aggressive robots.
So, imagine what a total mess it would be if we had a ContentID for online bullying. And yet, it appears that the good folks at SRI are trying to build exactly that. Now, SRI certainly has led the way with many computing advancements, but it's not clear to me how this solution could possibly do anything other than create new headaches:
But what if you didn't need humans to identify when online abuse was happening? If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job. At SRI International, the Silicon Valley incubator where Apple's Siri digital assistant was born, researchers believe they've developed algorithms that come close to doing just that.
This is certainly going to sound quite appealing to those who push for anti-cyberbullying campaigns. But at what cost? Again, there are legitimate concerns about people who are being harassed. But one person's cyberbullying could just be another person's aggressive debate tactics. Hell, I'd argue that abusing tools like ContentID or filing false defamation claims is a form of "cyberbullying" as well. Thus, it's quite possible that the same would be true of this new tool, which could itself be used to "bully" whoever the algorithm decides is a bully.
“Social networks are overwhelmed with these kinds of problems, and human curators can’t manage the load,” says Norman Winarsky, president of SRI Ventures. But SRI is developing an artificial intelligence with a deep understanding of how people communicate online that he says can help.
Determining copyright infringement is already much more difficult than people imagine -- which is why ContentID makes so many errors. You have to take into account context, fair use, de minimis use, parody, etc. That's not easy for a machine. But at least there are some direct rules about what truly is "copyright infringement." With "bullying" or "harassment," there is no clear legal definition to match up to, and it's often very much in the eye of the beholder. As such, any tool used to "deal" with cyberbullying is going to create tremendous problems, often just from misunderstandings between people. And that could create a real chilling effect on speech.
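To make that concrete, here's a minimal sketch of the kind of word-based classifier a system like this would presumably sit on top of. To be clear, SRI hasn't published how its system actually works, so the model choice, the toy training data and the test sentence below are all hypothetical -- the point is simply that a model trained on words alone never sees the context that separates abuse from banter:

```python
# A deliberately naive "bullying detector": bag-of-words features plus
# logistic regression. All training examples here are invented toy data;
# whatever SRI has built is surely more sophisticated, but any text
# classifier shares the core limitation shown below.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are worthless and everyone hates you",   # abusive
    "nobody likes you, just disappear already",   # abusive
    "you idiot, you can't do anything right",     # abusive
    "great point, thanks for sharing this",       # benign
    "I disagree, but that's a fair argument",     # benign
    "congrats on the win, well deserved",         # benign
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = "bullying", 0 = "fine"

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Friendly trash talk between two gamers who know each other. The model
# only sees word counts -- it has no idea the "victim" is in on the joke.
banter = "you idiot, nobody beats me, you can't win"
prob = model.predict_proba([banter])[0][1]
print(f"P(bullying) = {prob:.2f}")  # likely high, despite being banter
```

Run the same insult through two different relationships and the word counts are identical. The signal that actually distinguishes bullying from ribbing -- who is talking to whom, and whether the target is in on the joke -- isn't in the input at all.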
Perhaps instead of focusing so much technical know-how on "detecting" and trying to "block" cyberbullying, we should be spending more time looking for ways to positively reinforce good behavior online. We've built up this belief that the only way to encourage good behavior online is to punish bad behavior. But we've got enough evidence at this point of how rarely that actually works that it seems time for a different approach. And a "ContentID for harassment" seems unlikely to help.
Filed Under: automation, bullying, contentid, cyberbullying, harassment
Companies: sri
Reader Comments
Think of the children
In a very real sense, people are being childish when they lash out on the internet -- rationality goes out the window -- so better to use these techniques on them too.
Great job guys! When "Close" becomes "Nailed It!", please let the internet know. Until then, please keep your incredibly bad idea to yourselves.
On the other hand ...
When they do take down the wrong things, there must be a way to disagree and override the takedown. The current YouTube system is not adequate. There should also be a way for operators to lock out repeat offenders, including an appeal process if they disagree.
Any system that has automated takedowns should have a shield setting. This would prevent content that has been determined to be 'acceptable' from being taken down automatically. This would cover fair use, or repeated bogus takedowns on content that someone finds disagreeable.
This is not a problem that will be quickly solved, if ever. What I don't understand is why Google has not made improvements to YouTube. They must not be making any money off the thing and have a tight budget.
Re: On the other hand ...
And that right there is the backward thinking that is the problem. People think it is acceptable to block or take down content that isn't illegal as long as there is a way to get it back.
That is NOT OK.
In addition, anyone claiming to have a system that can identify illegal content is simply lying. Much of this content cannot be identified as illegal until there has actually been a court ruling. Anything that takes the content down and then allows it to be restored after a ruling is effectively locking people in prison until a trial determines whether they are guilty.
Re: On the other hand ...
Maybe to you, but not to a majority of those potentially affected.
Over the past two and a half centuries, almost a million people have died for the principles of the Constitution. Are we going to throw it away on some elected official who has an "idea" (poor thing, it must be lonely)?
"You lousy cork-soakers. You have violated my farging rights. Dis somanumbatching country was founded so that the liberties of common patriotic citizens like me could not be taken away by a bunch of fargin iceholes... like yourselves." -- Johnny Dangerously (1984)
This could (but probably won't) be done well
what?
There actually are some pretty easy answers! Anonymity is one of them. And there is still a limit on how safe anyone can be anyway. You could die sitting right where you are, from a home invasion by criminals or some hot SWATting brought to you by a corrupt police department near you!
The founders knew what was going on: stand up for what you believe in, or just shut up and lose your voice. Anyone at any time could become unreasonably hostile to anything you say, because that is just life.
And as long as we expect everyone else, like corporations and the government, to keep us safe, we become nothing more than kept hamsters worthy of no safety at all.
It requires a mature intellect to identify "bullying", and even then, it will very often be contentious.
NLP (Natural Language Processing) currently seems to have the "intelligence" of about a 5-year-old.
I can't see this going anywhere.
Quick, decide that your patent is the solution to all of the world's ills and cash in, cause a bunch of problems, and walk away.
If one were to look at a majority of my online interactions with that Adam Steinbaugh fellow without the correct frame of reference, I'm a huge bully picking on poor Adam. Except he has tools to not see what I say, doesn't have to reply, and he is pretty much in on the joke.
I've been accused, more than once, of bullying lawyers online. Overwrought filings with courts accuse me of mental illness, because I think they are a joke.
We have been making the world too soft and fluffy to "protect" the children. We've seen stories where the media loves to play up the "bullying" aspect... but saying a child looked fat once isn't really bullying.
Humans LOVE to stick everything into clearly labeled boxes, and we'll expand what the label covers to keep it easy to sort. So an online shouting match between the old gf and the new gf (and she dated him first for 2 whole weeks) is considered the same as a child who is the target of a malicious group who bury her in negative attention.
Once upon a time, the parent of the aggrieved would call the other kid's parents and hash it out... now it's a matter for the authorities. Some parents are completely clueless about how their kid behaves online, because they assume the world will watch out for their kids and keep them safe (and never see the evil bastards they can be).
Perhaps we should spend much less time looking for a technical solution to a failure to raise kids. Many parents are failing their kids, because being a parent isn't something we require them to do. I'm sorry my kid yelled at your kid, but you understand your kid hit him first. More often than not everyone is a special innocent child who did nothing to incite what happened... and with no adult to talk to when things spin out of control... it gets worse.
Effective moderation requires a willingness to enforce it; I've been in situations where the theory and practice differed wildly: people don't like laying down the banhammer on people they are friendly with or intimidated by.
It's true that you can't legislate better attitudes, but I'm very glad to see nuance in this article, and I hope that better minds than mine can come up with a more effective solution than "Censorship," "Sod off, then," or "Suck it up," which is what we have now.
What is bullying?
And what happens if the "bullied" person goes along with the aggressive debate, but the automated system flags the comments as bullying? In other words, it doesn't account for thick-skinned people.
Or what if you and I don't think a comment is a bullying comment, but the automated system does? So now the system is being too thin-skinned.
So, as one of the commenters says, the researchers should go back to their labs until the "close enough" system can take every situation into account.