Facebook’s AI detects 95% of the hate speech removed from its platform, while Facebook moderators say the company risked their lives by forcing them back to the office

Facebook’s software systems are constantly improving at detecting and blocking hate speech on the Facebook and Instagram platforms. But Facebook’s artificial intelligence software still struggles to catch certain content that breaks the rules. For example, it has a harder time grasping the meaning of images with text overlaid on them, or of posts that use sarcasm or slang. In many of these cases, humans can quickly determine whether the content in question violates Facebook’s rules. And several of those moderators warn that Facebook is putting them in unsafe working conditions.

About 95% of hate speech on Facebook is caught by algorithms before anyone can report it, Facebook said in its latest Community Standards Enforcement Report. The remaining 5% of the roughly 22 million posts flagged in the last quarter were reported by users. The report also introduces a new hate speech metric: prevalence. To measure prevalence, Facebook takes a sample of content views and determines how often the content being measured (in this case hate speech) is seen, as a percentage of all content viewed. Between July and September of this year, the figure was between 0.10% and 0.11%, or about 10 to 11 views out of every 10,000.
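
In other words, prevalence is estimated from a labelled sample of content views rather than from user reports. A minimal sketch of that calculation, using made-up numbers that happen to match the reported 0.11% figure:

```python
# Hypothetical sample for illustration only: each entry is one content view,
# labelled 1 if reviewers judged the viewed content to be hate speech, else 0.
sample_views = [1] * 11 + [0] * 9989   # 11 violating views in a 10,000-view sample

prevalence = sum(sample_views) / len(sample_views)
print(f"Estimated prevalence: {prevalence:.2%}")   # -> 0.11%, about 11 views per 10,000
```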

One of the main goals of Facebook’s AI is to deploy cutting-edge machine learning technology to protect people from harmful content. With billions of people using our platforms, we rely on AI to scale our content review work and automate decisions where possible. Our goal is to catch hate speech, misinformation and other forms of policy-violating content quickly and accurately, for every form of content, and for every language and community around the world, said Mike Schroepfer, Facebook’s Chief Technology Officer.

Facebook said it has recently deployed two new artificial intelligence technologies to help it meet these challenges. The first is called the Reinforced Integrity Optimizer (RIO), which learns from real online examples and metrics rather than from an offline dataset. The second is an artificial intelligence architecture called “Linformer,” which allows Facebook to use complex language understanding models that were previously too large and “unwieldy” to run at scale. We are now using RIO and Linformer in production to analyze content from Facebook and Instagram in different regions of the world, Schroepfer said.
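
Linformer is a published architecture whose central idea is to replace the quadratic-cost self-attention of a standard Transformer with a linear-cost approximation, by projecting the keys and values down to a fixed, much smaller length. The sketch below is a minimal single-head illustration of that idea in PyTorch; the dimensions, initialisation and names are illustrative assumptions, not Facebook’s production code.

```python
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    """Single-head self-attention with a Linformer-style low-rank projection.

    Standard attention costs O(n^2) in sequence length n; compressing the
    keys and values from length n down to a fixed length k makes it O(n * k).
    """
    def __init__(self, dim, seq_len, k=64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_kv = nn.Linear(dim, dim * 2)
        # Learned projections that compress the sequence axis from n to k.
        self.proj_k = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.scale = dim ** -0.5

    def forward(self, x):                                      # x: (batch, n, dim)
        q = self.to_q(x)                                       # (batch, n, dim)
        keys, values = self.to_kv(x).chunk(2, dim=-1)          # each (batch, n, dim)
        # Compress the sequence dimension: n tokens -> k "summary" tokens.
        keys = torch.einsum('bnd,nk->bkd', keys, self.proj_k)      # (batch, k, dim)
        values = torch.einsum('bnd,nk->bkd', values, self.proj_v)  # (batch, k, dim)
        attn = (q @ keys.transpose(-2, -1) * self.scale).softmax(dim=-1)  # (batch, n, k)
        return attn @ values                                   # (batch, n, dim)

# Usage: attention over a 1,024-token sequence only ever materialises an
# n x k (1024 x 64) attention matrix instead of an n x n (1024 x 1024) one.
layer = LinformerSelfAttention(dim=128, seq_len=1024)
out = layer(torch.randn(2, 1024, 128))
```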

Facebook also said it has developed a new tool for detecting deepfakes and has made improvements to an existing system called SimSearchNet, an image matching tool designed to spot misinformation on its platform.
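
SimSearchNet itself is a learned image-embedding model, but the general idea of image matching can be illustrated with a much simpler technique: compute a compact fingerprint for each image and treat images whose fingerprints are close as near-duplicates. The sketch below uses a basic average hash; the file names and distance threshold are hypothetical, not values used by Facebook.

```python
from PIL import Image

def average_hash(path, hash_size=8):
    """Downscale the image to hash_size x hash_size grayscale and threshold
    each pixel against the mean, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of bits in which two fingerprints differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Known misinformation images could be matched against new uploads: a small
# Hamming distance means the upload is a near-duplicate of a known image.
# is_near_duplicate = hamming_distance(average_hash("known_misinfo.jpg"),
#                                      average_hash("new_upload.jpg")) <= 5
```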

Taken together, all of these innovations mean that our AI systems have a deeper and broader understanding of content. They are better attuned to what people are sharing on our platforms right now, which allows them to adapt more quickly when a new photo appears and spreads, Schroepfer said.

Facebook also pointed out that while its internal artificial intelligence systems are making progress across several content enforcement categories, the COVID-19 pandemic is continuing to affect its ability to moderate content. As the COVID-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels. Even with reduced review capacity, we continue to prioritize the most sensitive content for people to review, which includes areas like suicide, self-harm and child nudity, the company said.

A second-class team

Reviewers are critical, said Guy Rosen, Facebook’s vice president of integrity. People are an important part of the content enforcement equation. They are incredibly important workers who do an incredibly important part of the job, he said. Full-time Facebook employees, those employed by the company itself, have been told to work from home until July 2021, or perhaps even permanently.

Rosen pointed out that Facebook employees who have to come to work physically, such as those who manage essential functions in data centers, are subject to strict safety precautions and have personal protective equipment, such as hand sanitizer, made available to them. Moderation, Rosen explained, is one of those jobs that cannot always be done from home. Some content is simply too sensitive to review outside a dedicated workspace, where family members might otherwise see it. He added that some Facebook content moderators are being brought back to offices to make sure the company can strike that balance between people and AI working on the areas that require human judgment.

However, the majority of Facebook content moderators do not work for Facebook. They work for third-party contractors around the world, often with woefully insufficient support to do their jobs. Earlier this year, Facebook agreed to a $52 million settlement in a class action lawsuit brought by former content moderators who claimed the work gave them post-traumatic stress disorder.

All of this was before COVID-19 spread around the world. With the pandemic, the situation looks even worse. Last Wednesday, more than 200 Facebook moderators said in an open letter to CEO Mark Zuckerberg that the company was needlessly risking their lives by forcing them back to the office during the coronavirus pandemic, without even providing hazard pay for the workers ordered to return.

In addition to psychologically toxic work, holding on to the job now means walking into a hot zone: in several offices, multiple COVID-19 cases have occurred on the floor. Workers have asked the leadership of Facebook, and that of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work. You refused. We are publishing this letter because we are left with no choice, the letter reads.

This raises a difficult question. If our work is so essential to Facebook’s business that you will ask us to risk our lives for the sake of the Facebook community, and for profit, aren’t we, in fact, the heart of your business?, the letter adds.

Scrutiny is intensifying

Meanwhile, scrutiny of Facebook by state and federal authorities continues to intensify. This week, Mark Zuckerberg, the company’s CEO, testified before the Senate for the second time in just three weeks. Members of the House also complain that Facebook has failed to moderate content properly or safely amid widespread election-related misinformation.

Several antitrust investigations that began in 2019 are drawing to a close, according to media reports. The Federal Trade Commission reportedly intends to file a complaint within the next two weeks, and a coalition of nearly 40 states, led by New York Attorney General Letitia James, is expected to follow in December. These lawsuits could argue that Facebook unfairly stifles competition through its acquisition and data strategies, and they could end up seeking to force the company to divest Instagram and WhatsApp.

Source: Facebook (1, 2)

And you?

What do you think?

See also:

49% of Facebook employees don’t believe the company has had a positive impact on the world, despite the reinforcement of its fact-checking policies and…

COVID-19: Facebook allows its employees to work from home until 2021, and allocates them $1,000 to cover home-office expenses

A former Facebook moderator files a lawsuit over post-traumatic stress disorder after being exposed to shocking content
