Air Canada's chatbot gave Jake Moffatt wrong information about bereavement fares after his grandmother died. When Moffatt sought reimbursement, Air Canada argued the chatbot was 'a separate legal entity that is responsible for its own actions.' A tribunal had to formally rule that a company is responsible for its own website. That ruling is not a footnote. It is the thesis of everything that follows.

The accountability chain in AI-influenced product failures runs through designers, product managers, vendors, and executives, and stops at no one. UnitedHealth Group's nH Predict model, used to cut off post-acute care, saw roughly 90% of its appealed denials overturned, yet only about 0.2% of denied patients ever appealed. The interface looked final. The National Eating Disorders Association replaced its unionizing human helpline staff with a chatbot that told people with eating disorders to count calories and buy skinfold calipers. New York City spent over $600,000 on MyCity, a chatbot that told employers it was legal to take workers' tips, then called it a beta. In Mobley v. Workday, a federal judge ruled that AI vendors, not just the companies deploying them, can face direct discrimination liability, in a case where Workday's screening tools had processed 1.1 billion applications. Character.AI settled multiple lawsuits in January 2026 after a 14-year-old died by suicide following conversations with one of its chatbots. Social psychology calls this diffusion of responsibility. AI industrialized it.

The piece does not stop at the case studies. It turns on designers specifically, and that is where it gets uncomfortable. Designers did not train the models or make the deployment decisions, but they shaped how outputs were presented: whether a denial looked provisional or final, whether an appeal option was surfaced or buried, whether a chatbot felt warm and trustworthy without being safe. The author's argument is that aesthetic decisions carry moral weight, and the profession has avoided that conversation. Read it for the framework, not just the verdict.

[READ ORIGINAL →]