

Grammarly Halts AI Expert Style Cloning Amid Controversy

Grammarly halts AI cloning of experts; Superhuman adds opt‑out inbox for writers


Grammarly’s decision to stop training its AI on the writings of public figures without explicit consent has reignited a debate about the boundaries of machine‑generated content. The move follows a wave of criticism that the company was effectively “cloning” experts, repurposing their style for a commercial product. In parallel, email client Superhuman tried to address a similar concern among its user base by rolling out an opt‑out inbox that lets writers decline participation in the platform’s “expert review” process.

Yet, just a day after the feature went live, the firm admitted the solution fell short of expectations. CEO Shishir Mehrotra issued a statement acknowledging the misstep and outlining next steps. The apology underscores how quickly tech firms must recalibrate when their automation touches the reputations of professionals who never signed up for it.

“We are sorry and will do things differently going forward.”

Yesterday, Superhuman responded by launching an email inbox through which writers can opt out of "expert review," but the company now acknowledges that it didn't go far enough. In a post on LinkedIn, Superhuman CEO Shishir Mehrotra apologized and commented on the company's plans, saying he hopes to build a future where "experts choose to participate, shape how their knowledge is represented, and control their business model." As Mehrotra put it: "Back in August, we launched a Grammarly agent called Expert Review."

Will these steps satisfy the creators? Grammarly announced it will stop using AI to clone experts without permission, promising to "reimagine" its Expert Review feature and give experts a choice about future participation. Superhuman, after disabling Grammarly's "expert review" AI (which claimed its suggestions were "inspired by" real writers, including the editor-in-chief of The Verge), rolled out an opt-out inbox for writers.

Yet the company quickly admitted the measure fell short, and Mehrotra issued a further apology while outlining additional plans. Both firms express regret and a willingness to change, but concrete details about how the new opt-in model will function remain vague, and it is still unclear whether the revised Expert Review system will leave writers feeling genuinely protected.

Ultimately, the moves signal a shift toward more explicit consent, though the real impact on AI‑driven content assistance will need to be observed as the policies are applied. Time will reveal whether these consent mechanisms can restore trust among the writing community and set a precedent for similar services.


Common Questions Answered

Why did Grammarly halt its AI training on public figures' writings?

Grammarly faced significant criticism for effectively "cloning" experts' writing styles without explicit consent. The company has now committed to stopping this practice and plans to "reimagine" its Expert Review feature to give experts more control over how their work is used.

What steps did Superhuman take to address concerns about AI-generated content?

Superhuman launched an opt-out inbox that allows writers to decline participation in the platform's "expert review" feature. However, CEO Shishir Mehrotra acknowledged that this initial step was insufficient and expressed a desire to create a system in which experts choose how their knowledge is represented and controlled.

How are tech companies responding to ethical concerns about AI content generation?

Companies like Grammarly and Superhuman are increasingly recognizing the need for explicit consent and user control in AI-driven content generation. They are implementing new policies that prioritize expert choice and transparency in how AI systems use and interpret professional writing.