AI coding agents can read every line of your codebase and still produce generic output. That is the central argument of this piece from UX Collective, and it is backed by a concrete demonstration: an agent with full codebase access suggested a notification center, an activity feed, and an onboarding wizard for a product whose core interaction is a conversation. Every suggestion violated a design principle the team had already agreed on. The code never recorded those decisions.

The author identifies seven distinct categories of knowledge an agent needs beyond the code itself: architecture, functionality, tech-stack conventions, brand voice, visual identity, interaction principles, and product positioning. Code carries roughly 40% of this. The other 60% lives in Figma files, Slack decisions, and institutional taste. To close that gap, the author built a structured Claude Code skill directory with a router file, a design context document spanning 11 sections, a component-to-Figma index, and a token cheatsheet. First-pass output stopped feeling like it came from a YC batch.
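Based on the article's description, that skill directory might look something like the sketch below. The file names and nesting here are illustrative assumptions; only the four artifact types (router, design context document, component-to-Figma index, token cheatsheet) come from the piece itself. Claude Code skills conventionally live in a folder with a `SKILL.md` entry point, which would serve as the router:

```
.claude/skills/design-context/
├── SKILL.md            # router: describes the skill and points the agent
│                       # to the right supporting doc for a given task
├── design-context.md   # the 11-section design context document
│                       # (brand voice, interaction principles, positioning, …)
├── figma-index.md      # component-to-Figma index: maps code components
│                       # to their source-of-truth design files
└── tokens.md           # token cheatsheet: color, spacing, type tokens
```

The router is what keeps this from blowing up the context window: the agent loads the short `SKILL.md` first and pulls in the heavier documents only when the task calls for them.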

What makes this worth reading in full is not the conclusion but the taxonomy. The seven-part breakdown is a working framework for any team shipping with coding agents, and the directory structure is reproducible. The piece also engages seriously with context engineering as a discipline, citing Martin Fowler's team, Brad Frost on design system documentation, and Katherine Yeh on skill routing. The hard problem it names, deciding what the agent should know before it acts, is the problem most teams have not yet formalized.

[READ ORIGINAL →]