Generative UI has its first real foothold. Sarah Gibbons and her co-author at Nielsen Norman Group, who formally defined genUI in March 2024, now report that simple interactive elements such as buttons, checkboxes, and form fields are appearing contextually inside AI chat interfaces, generated in real time by the AI itself.

The distinction that makes this matter: AI-assisted design helps developers build interfaces faster, but the end user still lands on a traditional, static UI. GenUI skips that step entirely: the AI generates interface components on the fly, shaped to the individual user's context in the moment. The gap between theory and shipping product has closed faster than the original researchers expected.

The full article is worth reading for how these elements are being triggered, what signals the AI uses to decide when a button beats a text prompt, and what the early UX patterns reveal about where this goes next.

[READ ORIGINAL →]