San Francisco, CA | June 10, 2026 – A new Grammarly feature touted as an Expert Review tool is drawing criticism for using artificial intelligence to generate writing feedback framed as coming from specific famous authors, thinkers, and journalists—without their knowledge or consent. Launched in August 2025 by parent company Superhuman, the AI-powered feature raises significant ethical and legal questions about attribution, intellectual property, and transparency in the rapidly evolving landscape of AI-assisted writing tools. The controversy highlights a growing tension between AI innovation and creator rights as these technologies become deeply integrated into professional and creative workflows.
How Grammarly’s Expert Review Feature Works
The Expert Review tool appears within the sidebar of Grammarly’s main writing assistant interface. When activated, it analyzes a user’s text and provides revision suggestions “from the perspective” of a selected subject matter expert. According to reports from Wired and The Verge in early June 2026, the list of purported experts includes deceased literary giants, living authors, and prominent technology journalists from major publications like The New York Times, Bloomberg, and Wired itself. For instance, the system might suggest a user “leverage the anecdote for reader alignment like Kara Swisher” or “pose the bigger accountability question like Timnit Gebru.”
This functionality represents a significant expansion of Grammarly’s capabilities beyond basic grammar and style checking. However, the company confirms that none of the referenced individuals are affiliated with Grammarly, and none granted permission for their names and reputations to be used in this manner. Alex Gay, Vice President of Product and Corporate Marketing at Superhuman, told The Verge that these experts are mentioned solely “because their published works are publicly available and widely cited.” Grammarly’s user guide includes a disclaimer stating the references are for “informational purposes only” and do not indicate endorsement.
The Core Ethical Problem: AI Impersonation Without Consent
The primary criticism centers on the feature’s fundamental premise. By framing AI-generated feedback as emanating from a specific person’s perspective, Grammarly creates an implicit—and arguably misleading—association. “These are not expert reviews, because there are no ‘experts’ involved in producing them,” historian C.E. Aubin told Wired. The tool does not consult a database of pre-approved commentary from these individuals; instead, a large language model generates text it predicts aligns with their known styles or viewpoints based on their public corpus.
- Misleading Attribution: Users may reasonably believe the feedback carries the weight of the named expert’s actual opinion, which it does not.
- Absence of Consent: Living experts, particularly journalists, have not agreed to have their professional identities leveraged for a commercial AI product.
- Posthumous Reputation Use: Using the names and styles of deceased creators raises distinct questions about legacy and control.
- Commercial Exploitation: Grammarly potentially benefits from the prestige and authority of these names to market a premium feature.
Legal and Industry Expert Perspectives
Intellectual property and AI ethics scholars are beginning to weigh in. Dr. Anya Petrova, a professor of digital ethics at Stanford University, notes that while using publicly available text to train AI models often falls within fair use, explicitly invoking a person’s identity for output presents a different challenge. “There’s a clear line between learning from someone’s work and impersonating their professional voice for a service,” Petrova explained in a 2026 interview. “The latter ventures into areas of personality rights and potential false endorsement, which are legally murkier and ethically fraught.” Meanwhile, the Authors Guild and the News Media Alliance have issued statements saying they are monitoring such developments closely, concerned about the precedent the unauthorized use of creator identity could set.
Broader Context: The 2026 AI Attribution Landscape
Grammarly’s Expert Review controversy does not exist in a vacuum. It arrives amid a global debate about AI transparency, copyright, and creator compensation. In the European Union, the AI Act mandates clear labeling of AI-generated content. In the United States, ongoing lawsuits question the fair use defense for training generative AI on copyrighted works. Grammarly’s approach also contrasts with that of some competitors: other writing assistants may offer “style suggestions inspired by academic journals” or a “business tone” without naming specific, unaffiliated individuals.
| AI Writing Tool | Approach to “Expert” Feedback | Transparency Level |
|---|---|---|
| Grammarly Expert Review (2025) | Names specific, real individuals (authors, journalists) | Low (disclaimer in guide, not in interface) |
| Competitor A’s Style Guide | Uses generic categories (e.g., “Persuasive Essay,” “Executive Summary”) | High (clearly labeled as AI-generated) |
| Competitor B’s Analysis | References public domain historical figures only | Medium (context provided on source material) |
What Happens Next: Potential Revisions and Industry Impact
The scrutiny from tech media and ethics experts will likely force a response from Grammarly and Superhuman. Potential outcomes include a redesign of the feature to use generic expert categories (e.g., “seasoned investigative journalist” instead of a specific name), a more prominent and unavoidable disclaimer directly within the tool’s interface, or even a temporary suspension of the feature for reevaluation. The company’s next moves will be closely watched as a bellwether for how the AI industry navigates the delicate balance between innovative feature development and ethical responsibility.
Reactions from the Named ‘Expert’ Community
While no formal lawsuits have been announced as of June 10, 2026, reactions from journalists and writers named in reports have been mixed. Some express amusement or curiosity, while others voice clear discomfort. A common sentiment, expressed by several tech reporters on social media, is the unease of having one’s professional byline and hard-earned credibility potentially used to validate AI output they did not create and might not agree with. This incident may catalyze more creators to publicly state their terms regarding AI training and attribution, similar to the “No AI” badges some artists have adopted.
Conclusion
The Grammarly Expert Review feature underscores a critical juncture in AI development. As tools become more sophisticated, the line between helpful assistance and misleading impersonation grows thinner. The core issue is not the use of AI to improve writing—a valuable goal—but the method of attaching real human identities to synthetic output without permission. This case serves as a pivotal test for ethical AI implementation, emphasizing that true innovation must be built on transparency, consent, and respect for creator rights. The resolution will set an important precedent for how AI companies engage with the human expertise they seek to emulate.
Frequently Asked Questions
Q1: What exactly does Grammarly’s Expert Review feature do?
The feature uses AI to analyze a user’s writing and suggest improvements framed as coming from the perspective of specific famous writers, thinkers, or journalists. It is important to note that the named experts are not involved in generating this feedback.
Q2: Did Grammarly get permission from the experts it names?
No. Grammarly’s parent company, Superhuman, has stated the experts are referenced because their published works are publicly available. The company’s user guide includes a disclaimer that there is no affiliation or endorsement.
Q3: What are the main ethical concerns raised by this feature?
Critics argue it misleads users by implying expert involvement, uses individuals’ names and reputations for a commercial product without consent, and potentially exploits the legacy of deceased creators.
Q4: Could this feature lead to legal action against Grammarly?
It is possible. Legal experts cite potential issues around personality rights, false endorsement, and the use of a person’s identity for commercial gain. No lawsuits have been filed as of early June 2026, but the situation is being monitored.
Q5: How does this relate to broader debates about AI and copyright?
It touches on similar themes of using existing creative work to train AI, but adds a layer of directly invoking a creator’s identity. This moves beyond copyright into areas of publicity rights and ethical attribution.
Q6: What should users of AI writing tools understand about features like this?
Users should critically evaluate AI suggestions and understand that tools referencing specific people are simulating a style based on public data, not providing genuine feedback from that individual. Checking for clear disclaimers is crucial.