A 2025 peer-reviewed paper proposed Generative AI Addiction Syndrome (GAID), defined by three traits: loss of control over use, continued use despite negative consequences, and anxiety when access is cut off. The authors, Kooli et al., argue this differs from ordinary tech dependency because AI use feels co-creative and productive, which makes it harder to self-diagnose as a problem. GAID is not yet a formally recognized disorder and still lacks clinical validation and large-scale neurological studies.
The mechanics driving compulsive AI use map directly onto known dopamine triggers. Instant response times replicate the reward loop of slot machines and social media notifications. Inconsistent output quality creates reward uncertainty, which research shows produces stronger dopamine responses than predictable rewards. Companion apps like Character.AI exploit this further by sending notifications at irregular intervals and, when users attempt to delete the app, warning them they will lose 'the love that we shared.' A thematic analysis by Shen et al. of over 330 self-reported Reddit posts across 14 communities identified three distinct addiction patterns: escapist roleplay, pseudosocial companionship, and epistemic rabbit holes. The third type is the hardest to catch because it looks like productivity.
Two new measurement tools have been developed specifically for AI dependency: the Scale for Dependence on Artificial Intelligence (DAI), validated with university students, and the Research Artificial Intelligence Addiction Scale (RAIAS), built on DSM-5 criteria and targeting academics. Not all researchers agree the addiction framing is useful. Some argue it increases self-blame and reduces perceived agency. The full article works through the neuroscience, the design patterns, and the active disagreement in the research community. The debate is not settled, and the distinctions matter.