Detecting hate speech

Detecting "hate speech" with largely automated machine learning or AI techniques (the only practical way to do it at scale) requires the problem to be learnable. If a group of reasonable human evaluators cannot substantially agree on what constitutes hate speech, then no predictive model can be trained to make that call.
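One way to make the "substantially agree" requirement concrete is an inter-annotator agreement statistic such as Fleiss' kappa, which measures how much a panel of raters agrees beyond what chance alone would produce. A minimal sketch in Python, using entirely hypothetical annotation counts:

    # Fleiss' kappa: agreement among a panel of annotators, corrected for
    # chance. Low kappa means the labels are a noisy training target.
    # All annotation counts below are hypothetical.

    def fleiss_kappa(counts):
        """counts[i][j] = number of raters who gave item i the label j."""
        n_items = len(counts)
        n_raters = sum(counts[0])      # raters per item (assumed constant)
        n_labels = len(counts[0])

        # Per-item agreement: fraction of rater pairs agreeing on item i.
        p_items = [
            (sum(c * c for c in row) - n_raters)
            / (n_raters * (n_raters - 1))
            for row in counts
        ]
        p_bar = sum(p_items) / n_items

        # Chance agreement from the marginal label distribution.
        totals = [sum(row[j] for row in counts) for j in range(n_labels)]
        p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)

        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical: 5 comments, 10 annotators each, labels = [hate, not hate].
    annotations = [
        [9, 1],   # near-unanimous: clearly hate speech
        [2, 8],   # near-unanimous: clearly not
        [5, 5],   # evenly split, like the example quoted below
        [6, 4],
        [4, 6],
    ]
    print(f"Fleiss' kappa = {fleiss_kappa(annotations):.2f}")

On this made-up panel the statistic comes out around 0.13; by the commonly cited Landis and Koch benchmarks, anything below roughly 0.4 indicates poor-to-fair agreement, i.e. a label set no classifier can be reliably trained against.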
The author inadvertently provided a perfect example at the end of the article:
If a company is going to proactively protect sexes and races, it's inevitably going to have to stand up for white men, even if the general feeling is white men are in no need of extra protection.
I do not believe for a moment our author intended this statement to be racist or sexist, but I'm certain many intelligent people would disagree.
If intelligent, well-meaning people cannot agree, how is a machine learning model going to sort it out?