Grammarly, the popular writing assistant, is facing scrutiny over its new “Expert Review” feature, which offers feedback “inspired by” prominent figures – including some who never consented to having their work used in this way. The feature, launched in August 2025, aims to provide users with writing advice through the lens of industry experts, but the implementation has sparked concerns about identity appropriation and the ethical implications of AI-driven content creation.
The controversy came to light when several journalists discovered their names and likenesses were being used by Grammarly’s AI without permission. The feature not only draws on the work of established authors like Stephen King and deceased academics like Carl Sagan, but also appears to mimic the writing styles of current professionals, raising questions about how the company is sourcing and using data to power its AI tools.
AI Mimicry and Unsolicited “Expert” Input
The “Expert Review” feature analyzes user writing and generates suggestions based on the perceived styles of selected experts. Reports indicate, however, that the feature is prone to inaccuracies and misleading presentations. The AI-generated feedback reportedly included comments appearing to be from The Verge’s editor-in-chief, Nilay Patel, as well as other editors at the publication – all without their knowledge or consent. Numerous other tech journalists, including those formerly and currently at publications like Wired, Bloomberg, The New York Times, and PC Gamer, were also identified as “experts” within the system.
The way these suggestions are presented can be particularly confusing. In Google Docs, the AI’s feedback appears similar to comments from real users, potentially leading writers to believe they are receiving direct edits from the imitated expert. One example highlighted a suggestion “inspired by” a senior editor at The Verge that contradicted the editor’s known preference for concise writing, demonstrating the limitations of AI in replicating nuanced editorial judgment.
Data Sourcing and Accuracy Issues
Grammarly’s parent company, Superhuman, defended the feature, stating that the “experts” are included because their published works are publicly available and widely cited. Alex Gay, vice president of product and corporate marketing at Superhuman, explained that the AI doesn’t claim endorsement or direct participation from these experts, but rather provides suggestions “inspired by” their work. However, this explanation has done little to quell concerns about the lack of transparency and consent.
Further investigation revealed that the feature’s “sources” are often unreliable. The feature frequently crashed, and when users attempted to verify the AI’s suggestions, they were often directed to spammy websites or archived copies of pages that weren’t the original source material. In some cases, the links led to content unrelated to the expert whose work the suggestion was supposedly based on, suggesting the AI may be misattributing ideas or drawing from incorrect sources. The Verge reported that these inaccuracies cast doubt on the quality and reliability of the AI-generated feedback.
Legal and Ethical Implications
The use of individuals’ identities without consent raises significant legal and ethical questions. Experts are concerned about potential violations of the right of publicity, identity misappropriation, and the broader implications of enterprise AI deployment. The incident could trigger regulatory scrutiny and potential lawsuits as AI tools continue to blur the lines between personhood and digital identity. TechBuzz highlights that this situation could reshape how enterprise AI tools operate, forcing companies to address consent and identity rights more proactively.
The situation underscores a growing trend of AI companies leveraging existing content and data without adequately addressing the rights of creators. As AI models become more sophisticated, the need for clear guidelines and regulations regarding data sourcing, consent, and attribution becomes increasingly urgent. Paste Magazine notes that this incident is part of a larger pattern of AI companies facing accusations of plagiarism and unauthorized use of human labor.
What’s Next for Grammarly and AI-Driven Writing Tools?
Grammarly has not yet announced any changes to the “Expert Review” feature, but the backlash from journalists and experts is likely to put pressure on the company to address the concerns raised. The incident serves as a cautionary tale for other AI developers, underscoring the importance of obtaining consent and ensuring accuracy when using individuals’ work to train AI models. The future of AI-driven writing tools will likely depend on establishing clear ethical guidelines and legal frameworks that protect the rights of creators and ensure responsible AI development.
What are your thoughts on Grammarly’s “Expert Review” feature? Share your opinions in the comments below.