Meta Admits Child Harm Inevitable on Facebook & Instagram: Zuckerberg Testimony

Santa Fe, N.M. – Meta CEO Mark Zuckerberg testified that some level of criminal behavior is “inevitable” on the company’s platforms, including Facebook and Instagram, during a trial in New Mexico focused on allegations that the social media giant failed to protect children from harm. The admission came during a deposition presented Tuesday as part of a case brought by New Mexico Attorney General Raul Torrez, who alleges Meta knowingly enabled predators and prioritized user engagement over child safety.

Zuckerberg, along with Instagram head Adam Mosseri, faced questioning regarding the prevalence of harmful content, including child sexual exploitation and the potential for mental health detriments, on Meta’s platforms. “I just think if you’re serving billions of people, the unfortunate reality is that some very small percent of them are going to be criminals, and we should work as hard as we can to stop that activity from happening,” Zuckerberg said in the taped deposition, according to a report from The Guardian. “I don’t think that the standard for our platforms would be that you should assume that it will ever be perfect.”

Prosecutors presented evidence indicating that Meta estimated in 2020 that approximately 500,000 children were receiving sexually inappropriate communications on Instagram daily, including grooming attempts. A Meta spokesperson countered that the technology used at the time to generate that estimate was overly broad, and included interactions that were not actually inappropriate. The company identified its “People You May Know” algorithm as a key factor in facilitating these interactions, accounting for 79% of identified cases in 2018.

The trial also revealed internal concerns about the impact of encryption on child safety. Zuckerberg defended his decision to authorize end-to-end encryption for Facebook Messenger in 2023, despite warnings from child safety groups Thorn and the National Center for Missing and Exploited Children (NCMEC) that it could shield predators. He stated that user privacy was a more pressing concern, explaining that encryption prevents Meta from accessing the content of messages. Meta maintains it can still review and act on encrypted messages that users report.

Mosseri, in his deposition, highlighted the company’s efforts to identify and prevent potentially harmful interactions. He stated that Meta has “developed technology that allows us to find accounts that have shown potentially suspicious behavior…and to stop those accounts from interacting with young people’s accounts.” The company reported identifying over 265 million Facebook accounts and 135 million Instagram accounts exhibiting potentially suspicious behavior in 2025 and proactively blocking them from connecting with teens.

However, internal documents presented at trial indicated shortcomings in these safety measures. An internal presentation revealed that Instagram’s wellbeing safety team did not consistently prevent teen accounts from being recommended to potential violators. A December 2022 audit showed that Meta continued to recommend minor accounts to adults.

Meta introduced “Teen Accounts” in September 2024, automatically placing users under 18 into stricter privacy settings, including private profiles and limited messaging options. Researchers have identified vulnerabilities in these protections, including potential exposure to harmful content through hashtags and instances where safety features failed to function as intended.

Mosseri acknowledged the inherent challenges of maintaining safety on a platform with billions of users. “I certainly want to address any problem that’s even remotely as severe as something like sexual solicitation…Any negative action that happens offline, also to a certain degree, happens online,” he said. “We’re connecting billions of people. That is going to mean good and bad things happen.”

The trial, which began in early February, is expected to continue for several weeks. Meta maintains that it has invested billions in safety measures and proactively removes violating content, while acknowledging that no system can be perfect.
