Grammarly, the popular writing assistance platform, is facing a class action lawsuit alleging the unauthorized use of prominent writers’ and journalists’ identities in its recently launched “Expert Review” feature. The suit, filed Wednesday in the Southern District of New York, centers on claims that Grammarly leveraged the names and reputations of individuals without their consent to enhance its artificial intelligence-powered editing suggestions. This controversy highlights growing concerns about the ethical implications of AI and the protection of personal identity in the age of generative technology.
The lawsuit was brought by investigative journalist Julia Angwin, founder of The Markup, a nonprofit news organization focused on the impact of technology. Angwin discovered her name was being used by Grammarly’s AI to provide editorial feedback, a practice she claims violates privacy and publicity rights. The complaint argues that Superhuman, Grammarly’s parent company, misappropriated the identities of hundreds of writers, authors, and editors for commercial gain, potentially causing damages exceeding $5 million.
The “Expert Review” tool, designed to offer users insights from thought leaders, presented AI-generated suggestions as if they originated from real individuals. According to The Verge, Casey Newton, a tech journalist, first brought the issue to light after discovering his own identity was being used. Other journalists at The Verge, including editor-in-chief Nilay Patel, were also found to be associated with the AI-generated advice. The feature even included the names of well-known figures like Stephen King and Neil deGrasse Tyson.
Superhuman announced on Wednesday that it would be disabling the “Expert Review” feature, acknowledging the backlash. Ailian Gan, Superhuman’s director of product management, stated the company “clearly missed the mark” and is “reimagining the feature to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all.” The company had set up an email inbox allowing experts to request removal from the tool, but the lawsuit was filed before that effort took hold.
Concerns Over AI and Identity
The core of the legal challenge revolves around the unauthorized use of individuals’ identities for commercial purposes. The lawsuit alleges a violation of publicity rights, which protect individuals from having their name or likeness used for advertising or commercial gain without their permission. PRF Law, the firm representing Angwin, argues that Grammarly’s actions constitute misappropriation, causing potential harm to the reputations and professional standing of those whose identities were used.
This case arrives at a critical juncture as AI-powered tools become increasingly integrated into daily life. The incident raises questions about the responsible development and deployment of AI, particularly regarding the use of personal data and the potential for misrepresentation. The ease with which AI can now mimic human voices and styles underscores the need for clear ethical guidelines and legal frameworks to protect individuals from unauthorized use of their identities.
Superhuman’s Response and Future Plans
Shishir Mehrotra, CEO of Superhuman, explained the initial intent behind the “Expert Review” feature, stating it was “designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans.” However, Mehrotra acknowledged the company’s failure to adequately address concerns about consent and control. SFGate reported on Mehrotra’s apology and commitment to a revised approach.
The company’s decision to halt the feature suggests a recognition of the legal and reputational risks associated with its previous implementation. While Superhuman intends to revisit the concept, the future of “Expert Review” remains uncertain. The outcome of the lawsuit will likely shape how AI companies approach the use of individual identities in their products and services.
The incident with Grammarly’s “Expert Review” serves as a cautionary tale for the tech industry. As AI continues to evolve, ensuring transparency, obtaining informed consent, and respecting individual rights will be paramount to building trust and fostering responsible innovation. The legal proceedings initiated by Julia Angwin will undoubtedly contribute to the ongoing conversation about the ethical boundaries of AI and the protection of personal identity in the digital age.
What comes next will depend heavily on the court’s decision and how Superhuman adapts its AI development practices. The case is likely to set a precedent for how companies can ethically and legally leverage the expertise and reputations of individuals in their AI-powered products.