Design awards built for human authorship are colliding with a world where generative AI can produce award-caliber work in hours. The UX Collective piece confronts a structural problem: judging criteria written before diffusion models and LLM-assisted design tools existed cannot fairly evaluate work where the line between human intent and machine output is genuinely blurry.

The argument is not that AI work should be disqualified. It is that the current framework has no coherent answer for how much AI involvement voids a human designer's claim to authorship, craft, or innovation. That gap matters because awards shape hiring, pricing, and what the industry treats as a benchmark.

The full piece is worth reading for its proposed criteria revisions, not just its diagnosis. If you judge, enter, or sponsor design competitions, the specific changes it recommends are the reason to click through.

[READ ORIGINAL →]