On March 24, 2026, a New Mexico state court fined Meta $375 million for misleading users about platform safety, with the winning argument targeting specific design features and their failure to protect minors. For a trillion-dollar company, it is a speeding ticket. But the ruling names design itself as the mechanism of harm, not just corporate policy, and that distinction matters. Meanwhile, tech CEOs who publicly deny their platforms are dangerous quietly restrict their own children's access to those same products.
The author's argument is not that designers are villains. It is that engagement-heavy success metrics, quarterly growth targets, and platform decay mechanics create systems where harmful outcomes are not just possible but structurally inevitable. The term is 'enshittification,' coined by Cory Doctorow. The pattern is consistent: security flaws in connected children's toys, addictive feed algorithms, locked-in ecosystems. None of it required bad actors. It required broken briefs and people racing toward the wrong finish line. Rutger Bregman's 2020 book 'Humankind: A Hopeful History' supplies the framework: decent people do terrible things when they believe they are doing good.
The piece earns its second half. When those same flawed briefs are handed to AI agents, the problem scales and mutates. AI models are non-deterministic, meaning identical inputs do not guarantee identical outputs. In a multi-agent chain, each handoff introduces drift. Intent dilutes fast. The author invokes Nick Bostrom's paperclip maximiser to illustrate what optimising a broken objective function looks like at scale. With product-building costs collapsing and organisations skipping prototyping to ship live, the window for catching that drift before it reaches users is shrinking. Read the full piece for the agent-chain analysis.
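The drift argument can be made concrete with a toy model (mine, not the author's): treat the original intent as a unit vector and each agent handoff as a small, fixed misinterpretation, modelled here as a rotation by `theta` radians. The cosine similarity between the current objective and the original intent then decays with every handoff, even though no single agent is badly misaligned. The function name and the rotation framing are illustrative assumptions, not anything from the piece.

```python
import math

def drift_similarity(n_handoffs: int, theta: float = 0.1) -> list[float]:
    """Toy model of intent drift in a multi-agent chain.

    The original intent is a unit vector; each handoff is assumed to
    misinterpret it slightly, modelled as a rotation by `theta` radians.
    Returns the cosine similarity to the original intent after each
    handoff (index 0 is the intent itself, similarity 1.0).
    """
    return [math.cos(k * theta) for k in range(n_handoffs + 1)]

# After 10 handoffs of a ~6-degree misreading each, alignment with the
# original intent has already dropped to roughly cos(1.0) ≈ 0.54.
sims = drift_similarity(10)
```

The point of the sketch is that the decay is structural: each agent's error is tiny and locally reasonable, but the errors compound across the chain, which is why the author argues the window for catching drift matters more than any single agent's accuracy.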
[READ ORIGINAL →]