
Facebook’s AI engineers have introduced a new algorithm that can adapt faster to challenges like spotting new forms of hate speech.
The social network reported that the number of items removed for breaching its hate speech rules has skyrocketed. While the accuracy of these systems remains unclear, Facebook says it tests them extensively to avoid incorrectly penalizing innocent content.

The company removed about 9.6 million pieces of hate speech content in the first quarter of 2020. Of these, Facebook's software detected 88.8 percent before users reported them. The algorithm helped identify 86 percent more hate speech content than in the previous quarter.

Mike Schroepfer, Facebook's chief technology officer, lauded the company's machine learning technology, which performs syntactic analysis of language and can flag content that is not obviously violating.
COVID-19 restrictions have shut some moderation offices, which has reduced the number of appeals lodged.

It's hard to gauge how much hate speech slips through Facebook's algorithm. Caitlin Carlson, an associate professor at Seattle University, ran an experiment in which she and a colleague reported more than 300 hate speech posts and found that Facebook ultimately removed only about half of them. The algorithm was more likely to remove content containing racial and ethnic slurs than posts about misogyny.

A Facebook spokesperson said that the company is gradually expanding the algorithm to cover more regions and languages and to capture the nuances of individual languages.

Automating the process of defining and detecting hate speech is tricky because AI is a long way from understanding text the way a human can.

Schroepfer said that by applying recent research on machine learning for language, Facebook had upgraded its hate speech detection algorithm.
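
To make the general idea concrete, here is a minimal, hypothetical sketch of supervised text classification, the family of techniques such systems build on. This is not Facebook's implementation (which relies on large-scale neural language models and vastly more data); the toy dataset, the scikit-learn pipeline, and the review threshold below are illustrative assumptions only.

```python
# Illustrative sketch only (not Facebook's system): a supervised text
# classifier trained on a tiny hand-labeled toy dataset with scikit-learn.
# Real hate speech detection uses far larger datasets and deep language
# models, but the basic train/score loop looks the same.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-labeled examples (label 1 = policy-violating, 0 = benign).
train_texts = [
    "we should get rid of those people",   # violating (toy example)
    "had a great time at the park today",  # benign
    "they do not deserve to live here",    # violating (toy example)
    "looking forward to the weekend",      # benign
]
train_labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new posts; anything above a chosen threshold would be queued
# for removal or human review.
new_posts = ["those people should all leave", "what a lovely afternoon"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    print(f"{score:.2f}  {post}")
```

The hard part, as the article notes, is not this loop but the data and the language model behind it: slang, sarcasm, and coded phrasing defeat simple word-based features, which is why Facebook's upgrade focuses on deeper language understanding.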

The company has also announced $100,000 in prizes for research groups that can create open-source software to spot hateful memes effectively.

Despite this progress in automating one aspect of moderation, it is clear that the technology will depend on human intervention for the indefinite future.

Source

#AIMonks #AI #Algorithms #OpenSource #Software #Facebook #HateSpeech #EthnicSlurs #Misogyny #ML #MachineLearning #Technology #Speech #Language


