A Max Planck Institute study of 280,000 academic YouTube videos found that spoken use of 'delve' rose 48%, 'realm' 35%, and 'adept' 51% in the 18 months after ChatGPT launched in November 2022. Crucially, roughly half of those appearances showed no sign of scripted reading. People are not copying AI text; they are absorbing its vocabulary and speaking it back naturally. Lead researcher Hiromu Yakura describes this as internalizing a 'virtual vocabulary into daily communication.' That feedback loop (human text trains the AI, the AI redistributes its vocabulary, humans re-absorb it) is the core problem this article documents.
The em dash discourse is a case study in how fast these signals break down. Widely flagged as a 'ChatGPT hyphen,' the em dash predates large language models by centuries: Dickinson used it, and so did Woolf. Early models overused it, the pattern got noticed, and now both writers and models are adjusting away from it. The same decay applies to the growing blacklist of suspect phrases such as 'at its core,' 'furthermore,' and 'tapestry.' Montclair State University went so far as to advise staff to treat strong grammar itself as suspicious. The piece tracks exactly how these heuristics become self-defeating the moment they go public.
The grammar question is where the article earns its read. Surface correctness is measurably improving with AI assistance, especially for non-native speakers. But a 2025 MIT Media Lab EEG study found signs of cognitive atrophy in writers who relied heavily on chatbot assistance (a small, not-yet-peer-reviewed sample, but one consistent with a larger pattern). The distinction the piece draws is precise: AI correction is not building grammatical competence, it is substituting for it. The wound stays open under the bandage. Whether that matters long-term, and what gets lost when the tool goes away, is the question the piece leaves unresolved.
[READ ORIGINAL →]