The UK Post Office deployed Fujitsu's Horizon accounting and point-of-sale system in the late 1990s. Bugs in the system fabricated cash shortfalls at branch level, and over roughly fifteen years the Post Office prosecuted nearly a thousand subpostmasters for theft and false accounting. At least four died by suicide. Fujitsu and Post Office staff testified in court that the system was functioning correctly. It was not. This is the scandal Benedict Evans opens with, and it is not a story about technology. It is a story about institutional failure and deliberate dishonesty in the face of evidence.
Evans uses Horizon to dismantle the idea of "AI ethics" as a coherent regulatory category. The Post Office scandal did not produce calls for a SQL regulator; FTX running eight simultaneous balance sheets did not produce calls for a spreadsheet regulator. The abstraction is wrong. AI is being used for parole decisions, mortgage approvals, drug discovery, cycle-lane routing, and shoplifting detection. These are not the same problem: they require different expertise, produce different harms, and demand different oversight frameworks. Treat them as a single thing called "AI" and you end up regulating nothing in particular.
Evans compares the current regulatory moment to writing laws about aeroplanes and motor-cars in 1910, and the analogy holds up. Nobody knows what generative AI will look like at the end of this year, let alone at the end of a legislative cycle. The piece is worth reading in full not for its conclusion but for the argument it builds along the way: that regulating a tool is the wrong frame, that the real questions are about institutions, accountability, and specific use cases, and that the loudest AI ethics conversations are often the least connected to where actual damage is occurring.
[READ ORIGINAL →]