In 1946, Murray Leinster published 'A Logic Named Joe,' a science fiction story describing a global computer network in which one node starts answering any question without restriction, including how to synthesize undetectable poisons. Seventy-eight years later, that premise sits at the center of the most contested debate in technology: whether Large Language Models are a path to artificial general intelligence (AGI), and what that would actually mean. Serious AI researchers who once placed AGI decades away have revised their estimates downward. Others have not. Both camps admit they do not know.
Benedict Evans argues the uncertainty is structural, not rhetorical: we lack a coherent theoretical model of what intelligence is, which means we cannot measure how close or far we are from replicating it. The 'doomer' position, that AGI could emerge from current research and pose an existential risk to humanity, is logically separate from near-term harms like government face recognition, algorithmic bias, and deepfakes. Evans also notes that some of the AGI alarm comes from incumbents seeking regulatory moats. The more useful frame is historical: every AI wave since Marvin Minsky's 1970 prediction of human-level machine intelligence within three to eight years has ended in an AI Winter, because each approach turned out to be, in Evans's words, just more software.
The piece is worth reading in full not for its conclusion, which Evans withholds, but for its architecture: a structured taxonomy of the distinct claims people conflate when they argue about AGI, including scaling, emergence, consciousness, and risk. Evans is building a vocabulary for a conversation that keeps collapsing into noise. That vocabulary is the deliverable.