Teams are asking 'Can AI do it?' before they ask 'Why are we doing it?' Designer and researcher Peter Zakrzewski calls this the Decision Flip, and his year of sustained experimental research with AI systems produced a specific diagnosis. Current AI operates with what he terms an Inversion Error: it is a Symbolic Giant built on an Enactive Void. These systems can write about gravity with technical fluency but cannot feel it; they can describe a structure but cannot tell you whether it will stand.
Zakrzewski anchors his argument in two under-cited frameworks. Peter Naur argued in 1985 that the most valuable thing a programmer produces is not code but the theory of the problem, a mental model that cannot be extracted into a document or delegated to a tool. Greg McKeown's Essentialism adds the second half: in a world of infinite AI-generated form, the scarce resource is not execution but the disciplined pursuit of the right problem. An AI will generate thirty options without once asking why; the designer who discards twenty-nine of them and knows exactly why is the one holding irreplaceable structural knowledge.
The piece builds toward a direct challenge to the Silicon Valley framing that AI is now the More Knowledgeable Other, the Vygotskian figure who scaffolds learning for a less capable party. Zakrzewski inverts this. The AI is the learner with a catastrophic gap in its Zone of Proximal Development, and the designer is the MKO the system actually needs. The full argument, including his account of asking an AI what it feels like to know the word 'weight' without ever experiencing gravity, is worth reading for how it reframes who is teaching whom.