A ContentID For Online Bullying? What Could Possibly Go Wrong...

from the let's-think-this-through dept

Let me start out by saying that I think online harassment and bullying are a significant problem -- though also one that is often misrepresented and distorted. I worry about the very real consequences for those who are bullied, harassed and threatened online: it can silence voices that need to be heard, or drive some people away from participating at all for fear of the resulting bullying. That said, way too frequently, those who speak out about online bullying assume that the best response is to push for censorship. This rarely works. Too frequently we see "cyberbullying" used as a catchall for attacking speech people simply do not like. Even here at Techdirt, people who dislike our viewpoint will frequently claim that we "bullied" someone, merely for pointing out and discussing statements or arguments that we find questionable.

There are no easy answers to the question of how we create spaces where people feel safer to speak their minds -- though I think it's an important goal to strive for. But I fear the seemingly simple idea of "silence those accused of bullying" will have incredibly negative consequences (with almost none of the expected benefits). We already see many attempts to censor speech that people dislike online, with frequent cases of abusive copyright takedown notices or bogus claims of defamation. Giving people an additional tool to silence such speech will be widely abused, creating tremendous damage.

We already see this in the form of ContentID on YouTube. A tool created with good intent, to deal with copyright infringement on the site, is all too often used instead to suppress speech, whether wielded deliberately against a critic or triggered by overaggressive bots.

So, imagine what a total mess it would be if we had a ContentID for online bullying. And yet, it appears that the good folks at SRI are trying to build exactly that. Now, SRI certainly has led the way with many computing advancements, but it's not clear to me how this solution could possibly do anything other than create new headaches:
But what if you didn’t need humans to identify when online abuse was happening? If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job. At SRI International, the Silicon Valley incubator where Apple’s Siri digital assistant was born, researchers believe they’ve developed algorithms that come close to doing just that.

“Social networks are overwhelmed with these kinds of problems, and human curators can’t manage the load,” says Norman Winarsky, president of SRI Ventures. But SRI is developing an artificial intelligence with a deep understanding of how people communicate online that he says can help.
This is certainly going to sound quite appealing to those who push anti-cyberbullying campaigns. But at what cost? Again, there are legitimate concerns about people who are being harassed. But one person's cyberbullying could just be another person's aggressive debate tactics. Hell, I'd argue that abusing tools like ContentID, or filing false defamation claims, is a form of "cyberbullying" as well. Thus, it's quite possible that this new tool could likewise be used to "bully" those the algorithm decides are bullies.

Determining copyright infringement is already much more difficult than people imagine -- which is why ContentID makes so many errors. You have to take into account context, fair use, de minimis use, parody, etc. That's not easy for a machine. But at least there are some direct rules about what truly is "copyright infringement." With "bullying" or "harassment," there is no clear legal definition to match up against, and it's often very much in the eye of the beholder. As such, any tool used to "deal with" cyberbullying is going to create tremendous problems, often just from misunderstandings between people. And that could create a real chilling effect on speech.
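
To make the problem concrete, here's a toy sketch (in Python -- and to be clear, this is purely my own illustration, not anything SRI has published) of the kind of naive, context-blind filter such a system could easily amount to. Notice that it flags someone reporting on abuse, and someone discussing a movie, just as readily as the actual abuse -- while missing hostility that avoids the obvious words:

    # A toy, context-blind "harassment detector" -- purely illustrative.
    ABUSIVE_TERMS = {"idiot", "loser", "pathetic", "shut up"}

    def looks_like_bullying(message: str) -> bool:
        """Flag a message if it contains any blocklisted term, ignoring context."""
        text = message.lower()
        return any(term in text for term in ABUSIVE_TERMS)

    messages = [
        "You're an idiot and everyone knows it.",           # actual abuse: flagged
        "Calling the plaintiff an 'idiot' is defamatory.",  # reporting on abuse: also flagged
        "He tells the robot to shut up in that scene.",     # discussing fiction: also flagged
        "Wow, what a brave little take. Good for you.",     # sarcastic needling: not flagged
    ]

    for msg in messages:
        print(f"{'FLAGGED' if looks_like_bullying(msg) else 'ok':>7}  {msg}")

A real system would presumably rely on statistical models rather than a keyword list, but the core difficulty remains: the same words mean entirely different things depending on who is saying them, to whom, and in what context.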

Perhaps instead of focusing so much technical know-how on "detecting" and trying to "block" cyberbullying, we should be spending more time looking for ways to positively reinforce good behavior online. We've built up this belief that the only way to encourage good behavior online is to punish bad behavior. But we have enough evidence at this point of how rarely that actually works that it seems time for a different approach. And a "ContentID for harassment" seems unlikely to help.


Filed Under: automation, bullying, contentid, cyberbullying, harassment
Companies: sri


Reader Comments



  1. Graham J (profile), 14 Jul 2015 @ 8:52am

    Think of the children

    Agreed. Any good parent knows that positive reinforcement should be employed much more than threats and punishment when teaching children (and animals, for that matter) good behaviour.

    In a very real sense, people are being childish when they lash out on the internet - rationality goes out the window - so it's better to use these techniques on them too.


  2. Michael, 14 Jul 2015 @ 8:55am

    researchers believe they’ve developed algorithms that come close

    Great job guys! When "Close" becomes "Nailed It!", please let the internet know. Until then, please keep your incredibly bad idea to yourselves.


  3. lars626 (profile), 14 Jul 2015 @ 9:11am

    On the other hand ...

    All these content-ID schemes sound great. But like many solutions, especially software-based ones, they are incomplete. They work to shut down the bad, sometimes, and take a lot of the good with it.

    When they do take down the wrong things, there Must be a way to disagree and override the takedown. The current YouTube system is not adequate. There should also be a way for the operators to lock out repeat offenders, including an appeal process if they disagree.

    Any system that has an automated takedown should have a shield setting. This would prevent content that has been determined to be 'acceptable' from being taken down automatically. This would cover fair use, or repeated bogus takedowns on content that someone finds disagreeable.

    This is not a problem that will be quickly solved, if ever. What I don't understand is why Google has not made improvements to YouTube. They must not be making any money off the thing and have a tight budget.


  4. Anonymous Coward, 14 Jul 2015 @ 9:28am

    So, in an effort to prevent the highly visible bullying of the trolls, build a system that supports the more subtle bullying of those who would impose their tastes and morals on society. What a brilliant idea, no doubt put up by those who would censor communications, for society's good of course.


  5. Michael, 14 Jul 2015 @ 9:32am

    Re: On the other hand ...

    When they do take down the wrong things, there Must be a way to disagree and override the takedown.

    And that right there is the backward thinking that is the problem. People think it is acceptable to block or take down content that isn't illegal as long as there is a way to get it back.

    That is NOT OK.

    In addition, anyone claiming to have a system that can supposedly identify illegal content is simply lying. Much of this content cannot be identified as illegal until there has actually been a court ruling. Anything that takes the content down and then allows it to be restored after a ruling is effectively locking people in prison until there is a trial to determine whether they are guilty.


  6. Groaker (profile), 14 Jul 2015 @ 9:43am

    An introductory course in computability might be useful for all these la-la-land ideas, though I have little reason to believe that it would be either taken or understood.

    Over the past two and a half centuries, nearly a million have died for the principles of the Constitution. Are we going to throw it away on some elected official who has an "idea" (poor thing, it must be lonely)?


  7. aerilus, 14 Jul 2015 @ 9:48am

    give a mouse a cookie.............


  8. Anonymous Coward, 14 Jul 2015 @ 10:17am

    Re:

    That's what "safe spaces" are all about.


  9. Anonymous Coward, 14 Jul 2015 @ 11:56am

    I can see it now: someone quotes a movie, TV show, etc. and is arrested for harassment. Ex:
    "You lousy cork-soakers. You have violated my farging rights. Dis somanumbatching country was founded so that the liberties of common patriotic citizens like me could not be taken away by a bunch of fargin iceholes... like yourselves." -- Johnny Dangerously (1984)


  10. Anonymous Coward, 14 Jul 2015 @ 12:59pm

    Of course, those being paid by various government agencies to troll and to defend whatever they say will be exempt from any such thing.


  11. Anonymous Coward, 14 Jul 2015 @ 1:26pm

    This could (but probably won't) be done well

    Flag things that appear to be cyberbullying, but have them reviewed before being taken down. If it were properly reviewed -- not the half-assed process used for copyright -- I'd actually support this system.


  12. Anonymous Coward, 14 Jul 2015 @ 1:33pm

    what?

    There are no easy answers to the question of how we create spaces where people feel safer to speak their minds

    There actually are some pretty easy answers! Anonymity is one of them. And there is still a limit on how safe anyone can be anyway. You could die sitting right where you are, from a home invasion by criminals or some hot SWATting brought to you by a corrupt police dept near you!

    The founders knew what was going on: stand up for what you believe in, or just shut up and lose your voice. Anyone at any time could become unreasonably hostile to anything you say, because that is just life.

    And as long as we expect everyone else, like corporations and the government, to keep us safe, we become nothing more than kept hamsters worthy of no safety at all.


  13. Anonymous Coward, 14 Jul 2015 @ 2:33pm

    Worst idea ever.


  14. Anonymous Coward, 14 Jul 2015 @ 3:19pm

    It is a much harder, more fraught problem than ContentID, simple voice recognition, driverless cars, etc.

    It requires a mature intellect to identify "bullying", and even then, it will very often be contentious.

    NLP (Natural Language Processing) currently seems to have the "intelligence" of about a 5-year-old.

    I can't see this going anywhere.


  15. Anonymous Coward, 14 Jul 2015 @ 5:39pm

    Re: On the other hand ...

    "All these content-ID schemes sound great. "

    Maybe to you, but not to a majority of those potentially affected.


  16. That Anonymous Coward (profile), 14 Jul 2015 @ 5:44pm

    Oh sweet baby FSM.

    Quick, decide that your patent is the solution to all of the world's ills and cash in, cause a bunch of problems, and walk away.

    If one were to look at a majority of my online interactions with that Adam Steinbaugh fellow without the correct frame of reference, I'd look like a huge bully picking on poor Adam. Except he has tools to not see what I say, he doesn't have to reply, and he is pretty much in on the joke.

    I've been accused, more than once, of bullying lawyers online. Overwrought filings with courts accuse me of mental illness, because I think they are a joke.

    We have been making the world too soft and fluffy to "protect" the children. We've seen stories where the media loves to play up the "bullying" aspect... but saying a child looked fat once isn't really bullying.

    Humans LOVE to stick everything into clearly labeled boxes, and we'll expand what the label covers to keep it easy to sort. So an online shouting match between the old gf and the new gf (and she dated him first for 2 whole weeks) is treated the same as a child being targeted by a malicious group that buries her in negative attention.

    Once upon a time, the parent of the aggrieved would call the other kid's parents and hash it out... now it's a matter for the authorities. Some parents are completely clueless about how their kids behave online, because they assume the world will watch out for them and keep them safe (and that their kids aren't being the evil bastards they can be).

    Perhaps we should spend much less time looking for a technical solution to a failure to raise kids. Many parents are failing their kids, because being a parent isn't something we require them to do. I'm sorry my kid yelled at your kid, but you understand your kid hit him first. More often than not everyone is a special innocent child who did nothing to incite what happened... and with no adult to talk to when things spin out of control... it gets worse.


  17. Wendy Cockcroft, 15 Jul 2015 @ 7:36am

    The germ of the answer is in the last paragraph: how can we encourage people to behave better online of their own free will?

    Effective moderation requires a willingness to enforce it; I've been in situations where the theory and practice differed wildly: people don't like laying down the banhammer on people they are friendly with or intimidated by.

    It's true that you can't legislate better attitudes, but I'm very glad to see nuance in this article, and I hope that better minds than mine can come up with a more effective solution than "Censorship," "Sod off, then," or "Suck it up," which is what we have now.


  18. kog999, 17 Jul 2015 @ 8:51am

    Re:

    Why do we need a computer to identify the bullying? Or a paid staff to review and identify it? Can't the victim just press the block button?


  19. John85851 (profile), 17 Jul 2015 @ 10:10am

    What is bullying?

    I think this is the question that needs to be answered first. Like the article says, one person's "bullying" could be someone else's aggressive debating.

    And what happens if the "bullied" person goes along with the aggressive debate, but the automated system flags the comments as bullying? In other words, it doesn't account for thick-skinned people.
    Or what if you and I don't think a comment is a bullying comment, but the automated system does? So now the system is being too thin-skinned.

    So like one of the commenters says, the researchers should go back to their labs until the "close enough" system can take every situation into account.


