Conversational design is no longer optional. As AI systems like Gemini, ChatGPT, and Claude become primary interfaces, product designers must account for conversational user interfaces (CUIs), adaptive intelligent interfaces, multi-agent handoffs, and voice systems simultaneously. The old screen-and-flow model is gone. The new model requires deciding, turn by turn, when a human stays in the loop and when an agent passes ownership to another agent.

The article's most useful argument is definitional: a conversation is any exchange between two parties seeking mutual understanding, not just a text chat. That reframe matters because it forces designers to treat GUI menus, voice IVR systems, and agentic workflows as variations of the same problem. Dan Saffer's framework, cited here from his Rosenfeld Media course 'Designing for AI: New Techniques,' pins CUIs as the right tool when the desired outcome is fuzzy and follow-up questions can narrow context before a recommendation is made. The travel-agent, weather-agent, booking-agent example concretely illustrates how unclear handoffs between agents destroy user intent.
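One way to make the handoff problem concrete is to sketch it in code. The sketch below is hypothetical and not from the article: `TripContext` and the three agent functions are illustrative placeholders. The design point it demonstrates is that each agent receives and returns the same shared context object, so the user's original intent survives every handoff instead of being lossily re-summarized at each boundary.

```python
from dataclasses import dataclass, field

@dataclass
class TripContext:
    user_intent: str                             # the user's original goal, never dropped
    facts: dict = field(default_factory=dict)    # findings accumulated by each agent
    history: list = field(default_factory=list)  # which agents have owned the task

def travel_agent(ctx: TripContext) -> TripContext:
    ctx.history.append("travel_agent")
    ctx.facts["destination"] = "Lisbon"          # placeholder recommendation
    return ctx

def weather_agent(ctx: TripContext) -> TripContext:
    ctx.history.append("weather_agent")
    ctx.facts["forecast"] = "sunny"              # placeholder lookup
    return ctx

def booking_agent(ctx: TripContext) -> TripContext:
    ctx.history.append("booking_agent")
    ctx.facts["booking"] = f"flight to {ctx.facts['destination']}"
    return ctx

# Thread one context object through every agent. A handoff that instead passed
# only a free-text summary would risk dropping the intent the article warns about.
ctx = TripContext(user_intent="weekend trip somewhere warm")
for agent in (travel_agent, weather_agent, booking_agent):
    ctx = agent(ctx)

print(ctx.user_intent)   # still intact after three handoffs
print(ctx.history)
```

The ownership trail in `history` is what makes a turn-by-turn decision auditable: at any point the system can tell the user which agent currently holds the task.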

Read the full piece for its working philosophy on multimodal adaptation, specifically where it argues that switching between text, visuals, and constrained response choices mid-conversation is not a design novelty but a mirror of how humans already communicate. The author is building a team practice around this, and the in-progress framework on what separates adopted AI interfaces from rejected legacy chatbots is the thread worth following.

[READ ORIGINAL →]