Grammarly, the popular writing enhancement tool, is facing a class-action lawsuit over its "Expert Review" AI feature, which allegedly traded on the identities and reputations of numerous writing professionals, including New York Times contributing opinion editor Julia Angwin, without their permission or compensation. The suit, filed in a New York federal court, underscores the growing legal complexity of AI's use of personal branding and intellectual property, and has prompted Grammarly to temporarily halt the feature.
Lawsuit Details: Unconsented AI Usage Sparks Legal Action
Details of the class-action lawsuit against Grammarly emerged on March 20, 2026, with Julia Angwin, a contributing opinion editor at The New York Times, named as lead plaintiff. The suit, filed in the U.S. District Court for the Southern District of New York, accuses Grammarly's "Expert Review" tool of using the names and professional standing of Ms. Angwin and other experts without authorization. The complaint alleges that the tool presented AI-generated feedback to users in a way that implied endorsement or direct input from these professionals, creating a false impression of their involvement, and that this unauthorized use caused substantial economic harm to people who received no compensation for the exploitation of their identities and expertise. Ms. Angwin said she was dismayed to discover her name attached to AI-generated feedback whose quality and content she had no control over.

The plaintiffs seek a judicial declaration that their rights under California and New York law have been violated, an injunction against further violations, class-action certification, statutory and nominal damages, and legal fees. They have also demanded a jury trial, underscoring the seriousness of the allegations. In response, Grammarly swiftly suspended the "Expert Review" tool, though the company has not yet issued a public statement on the matter. The case joins a growing list of legal confrontations over AI technology, including a separate suit in which BMG, a music rights management firm, accuses Anthropic, maker of the Claude chatbot, of using song lyrics without authorization to train its AI.
This case is a pointed reminder of the ethical and legal boundaries that must be drawn as AI technology advances. The unauthorized use of personal and intellectual property, even amid rapid technological progress, demands clear legal frameworks and a genuine commitment from developers to respect individual rights and creative contributions. It highlights the urgent need to balance innovation with protection, so that the benefits of AI do not come at the expense of creators and their established expertise.