Facebook today published its latest Community Standards Enforcement Report, the first of which it released last May. As in previous editions, the Menlo Park company tracked metrics across a number of policies — bullying and harassment, child nudity, global terrorist propaganda, violence and graphic content, and others — over the previous quarter (January to March), focusing on the prevalence of prohibited content that made its way onto Facebook and the volume of that content it successfully removed.

AI and machine learning helped cut down substantially on abusive posts, according to Facebook. In six of the nine areas tracked in the report, the company says it proactively detected 96.8% of the content it took action on before a human spotted it (compared with 96.2% in Q4 2018).
