The metaphors used to sell AI are not neutral. They are load-bearing fictions that shape how workers, executives, and regulators understand what the technology actually does. The autopilot comparison, now common in pitches for agentic AI systems, fails on three specific grounds: most people do not understand how autopilot works; autopilot operates within a tightly bounded, deterministic system; and AI agents operate in no such system. Selling mathematics as agency, and agency as intelligence, is not a communication shortcut. It is a power move.

The article traces a direct line from sailing colloquialisms to the manufactured language of carbon footprints, Final Solutions, and prediction markets. That lineage matters. When Noam Chomsky writes about the power to name, he means exactly this: whoever controls the vocabulary controls the frame. Agentic AI is being named by the people selling it, and the names they choose obscure the difference between a system that executes rules and one that reasons. That distinction is not academic. It determines accountability, liability, and public trust.

Where this piece earns a full read is not its conclusion but its scaffolding. The author builds a spectrum running from harmless dead metaphors like "run a tight ship" to actively weaponized language like "entitlements" and "right-sizing", and then places AI terminology on that spectrum with precision. The question the piece leaves open is whether the autopilot metaphor is lazy or deliberate. That question has consequences for every org chart currently adding an AI agent layer to its workflow.

[READ ORIGINAL →]