O2's AI system Daisy has kept phone scammers on the line for up to 40 minutes per call. Built by O2 and agency VCCP, Daisy is a fully autonomous conversational AI posing as a chatty elderly woman. She does not block calls. She wastes the caller's time, indefinitely, while real targets stay safe. O2 seeded her dedicated phone number directly into the scammer databases, known as "mugs lists", that fraudsters use to identify vulnerable targets. The scammers profiled their way into a trap.

The reason this works is the same reason scams work in the first place: psychology, not technology. Phone fraud runs on a two-step sequence: create anxiety, then offer relief. The caller who alarmed you is also the one who can fix it. That structure exploits how cognition degrades under stress; urgency suppresses deliberation. Daisy inverts this entirely. She matches the demographic profile fraudsters actively seek (older, tech-uncertain, unhurried) and then refuses to respond to any pressure at all. The scammer's own targeting bias becomes the mechanism of failure. Apple's iOS 26 and Google's Android scam detection apply the same underlying logic at scale: insert a pause between the scammer's approach and the target's anxiety response, and the carefully constructed urgency collapses.

The full article is worth reading not for the conclusion but for the mechanism. It traces exactly how scammer psychology is structured, why intelligent people fall for these calls, and how each AI countermeasure maps onto a specific cognitive vulnerability. The detail on how Daisy's persona was constructed alone reframes what effective fraud defence actually looks like. It is not a smarter wall. It is a better pause.

[READ ORIGINAL →]