Facebook gives a first estimate of the prevalence of hate speech on its platform

How prevalent is online hate on social media? While platform operators are under pressure and their strategies to stem this scourge are widely debated, Facebook published, on Thursday 19 November and for the first time, an estimate of the extent of this type of content on its platform.

Between July and September, for every 10,000 views of content of all types by users of its network worldwide (the same piece of content can be viewed several times), between 10 and 11 pieces of content were later deemed hateful (racism, sexism, anti-Semitism, etc.) and contrary to its internal rules, despite having escaped Facebook's automatic moderation.

For comparison, this is twice the figure for viewed content containing sexual imagery or nudity. In other words, hateful messages, videos and posts on Facebook account for twice as many views by users of the social network as content containing nudity.

22.1 million pieces of hateful content moderated in three months

The Californian company is giving such estimates for the first time as part of its semi-annual so-called "transparency" report on its moderation policy. In all, 22.1 million pieces of hateful content were moderated on Facebook between July and September; on Instagram, the figure was 6.5 million. While the Facebook figure is stable compared with the previous quarter, it has doubled on Instagram.

By publicly presenting a new "hate prevalence" figure meant to describe the place of this type of message on its network, Facebook seems to want to show that the phenomenon is not as endemic as some would believe. However, this percentage ("between 0.10% and 0.11% of views are of hate speech", Facebook writes) remains impossible for outside observers to verify, and is moreover difficult to understand and to work with.

Thus, despite the seemingly small percentage, the size of the Facebook network, with its more than 2 billion users, and the astronomical number of comments, images, links and videos posted there every day can still translate into a very large absolute amount of hateful content.
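As a rough back-of-the-envelope illustration of this point (the total view count below is a hypothetical round number, not a figure Facebook has published), here is how a small prevalence rate scales into a large absolute number of hateful views:

```python
# Hypothetical illustration: Facebook has not published its total view count.
# Assume 100 billion content views per quarter (an invented round number).
total_views = 100_000_000_000

# Facebook's reported prevalence: 10 to 11 hateful views per 10,000 views.
prevalence_low, prevalence_high = 10 / 10_000, 11 / 10_000

hateful_views_low = total_views * prevalence_low    # 100 million views
hateful_views_high = total_views * prevalence_high  # 110 million views

print(f"Between {hateful_views_low:,.0f} and {hateful_views_high:,.0f} hateful views per quarter")
```

Even under these invented assumptions, a prevalence of 0.10% to 0.11% would correspond to on the order of a hundred million views of hateful content per quarter.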

In the previous quarter, Facebook had welcomed the progress made by its artificial intelligence programs in detecting and removing hateful content. The trend seems to be confirmed: over the last quarter, 95% of the hateful content moderated on Facebook and Instagram was removed after an automatic evaluation (software analysis of keywords, images, etc.), without any prior report from a user. For Facebook, this proportion is comparable to that of the previous quarter; on Instagram, it has risen by ten points.

Responding to pressure

These figures show "the progress we have made in the fight against hate speech", said Guy Rosen, the Facebook executive responsible for the platform's "integrity", during a press conference on Thursday that Le Monde was able to attend. He explained that the social network had calculated the proportion of hateful content on its platform from a representative sample of content, in the manner of an "air pollution analysis".
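Facebook has not detailed its exact methodology. As a minimal sketch of how such a sample-based prevalence estimate can work in principle (the sample size, labeling process and "true" rate below are all invented for illustration), one might estimate a binomial proportion from a random sample of views:

```python
import math
import random

# Hypothetical illustration only: Facebook has not published its sampling
# method or sample sizes. Assume a uniform random sample of content views,
# each labeled by human reviewers as hateful or not.
random.seed(0)
TRUE_PREVALENCE = 0.00105   # invented "true" rate: 10.5 per 10,000 views
SAMPLE_SIZE = 1_000_000     # invented sample size

labels = [random.random() < TRUE_PREVALENCE for _ in range(SAMPLE_SIZE)]

# Point estimate: fraction of sampled views labeled hateful.
p_hat = sum(labels) / SAMPLE_SIZE

# Normal-approximation 95% confidence interval for a binomial proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / SAMPLE_SIZE)

print(f"Estimated prevalence: {p_hat:.4%} (95% CI ±{margin:.4%})")
```

The "air pollution" analogy holds in the sense that one measures a representative sample of the stream of views rather than attempting to inspect every piece of content.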

This intensive use of artificial intelligence over the past two years, including for decisions on hateful content where the line between necessary moderation and freedom of expression is sometimes hard to draw, has not been accompanied by an increase in the appeals Facebook users can file if they believe one of their posts was deleted by mistake, a trend confirmed over the last quarter.

This use of automatic moderation technologies also allows Facebook to respond to growing pressure from governments asking it to remove more problematic content and hate speech. Facebook also wants to position itself as a leader in moderation technology, at a time when several legislative texts, in France and at the European Union level, seek to impose obligations on social networks in this area. During the press conference, Monika Bickert, head of moderation policy at Facebook, welcomed the fact that Facebook is the "first company to publish this type of data".
