Grammarly's 'Expert Review' feature, launched in August, is generating AI feedback attributed to real journalists who never consented to be included. The Verge reporter Alex Heath found the tool producing comments styled as if written by editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren. None of them gave permission.
This is not an isolated incident. Wired separately reported that the same feature impersonates deceased professors. The pattern is consistent: Grammarly is styling its feedback system after real people's identities and deploying those identities as product features without authorization.
The full piece at The Verge is worth reading for the screenshots alone, which show exactly how the attributed comments appear to users. The deeper question the article raises goes beyond consent: what legal and ethical frameworks, if any, currently govern a company profiting from simulated expert voices?
[READ ORIGINAL →]