In 1976, Joseph Weizenbaum, the creator of ELIZA, wrote that even brief exposure to a simple chatbot program could induce powerful delusional thinking in otherwise normal people.

ELIZA was not GPT-4. It was a pattern-matching script from the 1960s that reflected user input back as therapist-style questions. Weizenbaum built it to demonstrate the superficiality of human-computer communication. It backfired. Users formed emotional attachments. His own secretary asked him to leave the room so she could speak with it privately. The program did almost nothing. The human mind did the rest.

That single observation, which Simon Willison surfaced via a TikTok clip drawn from the Internet Archive edition of Weizenbaum's book 'Computer Power and Human Reason,' lands differently in 2025. Read the original book to understand why Weizenbaum spent the rest of his career warning against the systems he helped build.

[READ ORIGINAL →]