Meta, the parent company of Facebook and Instagram, is once again considering the implementation of facial recognition technology, this time integrated directly into its smart glasses. The move, reported internally as “Name Tag,” raises serious privacy concerns and echoes past missteps that have already cost the company billions in settlements. While Meta frames the technology as a way to identify people and provide information through its AI assistant, the potential for misuse and the lack of clear consent mechanisms are deeply troubling.
The company’s internal discussions, revealed in a recent report, even suggest a calculated strategy to launch the feature during a period of heightened political activity, hoping to minimize scrutiny from civil society groups. This approach underscores a troubling disregard for ethical considerations and a willingness to prioritize profit over privacy. The core issue isn’t simply whether Meta can build this technology, but whether it should, given its history and the inherent risks to individual liberties.
Facial recognition technology, particularly when deployed in a wearable and ubiquitous form factor like smart glasses, presents a unique set of dangers. Unlike traditional social media platforms where users actively upload photos, these glasses could passively collect biometric data – essentially a “faceprint” – from anyone within view, without their knowledge or consent. This raises the specter of mass surveillance and the potential for discrimination, as well as the risk of data breaches exposing sensitive biometric information.
The practical challenges of obtaining informed consent from every individual captured by the glasses are insurmountable. Meta cannot realistically ask permission of every passerby. This is particularly problematic given that many individuals may not even be Meta users, and therefore have no existing relationship with the company or its privacy policies. Several state laws already recognize the sensitivity of biometric data, requiring affirmative consent for its collection and processing.
A History of Privacy Violations and Costly Settlements
Meta’s renewed interest in facial recognition comes despite a well-documented history of privacy violations and substantial financial penalties. In November 2021, the company announced it would discontinue a tool that scanned faces in photos posted to its platform, deleting over a billion face templates in the process. This decision followed years of criticism and mounting legal pressure.
Prior to that, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission (FTC) for $5 billion. The FTC’s allegations included deceptive and confusing face recognition settings, leading to a requirement for explicit user consent before implementing the technology.
The legal challenges didn’t end there. In March 2021, Meta (then Facebook) agreed to a $650 million class action settlement with Illinois consumers, stemming from a lawsuit under the state’s Biometric Information Privacy Act (BIPA). More recently, in July 2024, the company paid $1.4 billion to settle claims that its previous face recognition system violated Texas law. These settlements total over $7 billion, a clear indication of the financial risks associated with deploying this technology without robust privacy safeguards.
Echoes of Past Concerns and Emerging Risks
The company’s internal memo, suggesting a launch during a “dynamic political environment,” is particularly alarming. It reveals a cynical calculation to exploit periods of distraction to avoid public backlash. This strategy is not only ethically questionable but also demonstrates a fundamental misunderstanding of the growing public awareness surrounding privacy and surveillance.
Concerns extend beyond individual privacy to broader societal implications. The public has already expressed strong opposition to similar technologies used by law enforcement, such as the Mobile Fortify app used by immigration agents, which allows for facial recognition scanning via smartphones. Similarly, Amazon Ring faced significant criticism when it was revealed that a feature marketed for finding lost pets could potentially be repurposed for mass biometric surveillance.
These examples demonstrate a growing public resistance to invasive surveillance technologies. Civil liberties groups, like the Electronic Frontier Foundation (EFF), and plaintiffs’ attorneys are prepared to challenge Meta’s plans, and privacy regulators and attorneys general are urged to investigate.
What’s Next for Facial Recognition and Privacy?
Meta’s consideration of facial recognition in its smart glasses is a pivotal moment. The company’s decision will not only shape the future of its own products but also set a precedent for the broader tech industry. The potential for widespread adoption of this technology, coupled with the inherent privacy risks, demands careful scrutiny and robust regulation. It remains to be seen whether Meta will prioritize ethical considerations and user privacy, or repeat the mistakes of the past.
What are your thoughts on the use of facial recognition technology? Share your opinions in the comments below, and let’s continue the conversation.