A designer noticed he was reaching for LLMs to generate high-fidelity outputs instead of thinking through problems himself. His response: build Thia, a multimodal AI collaborator running on Google AI Studio that watches you sketch via webcam and engages in real-time Socratic dialogue. The whole thing went from prompt to working prototype in one day, to a published GitHub release in three. Not a concept. Actual software.

The core argument here is worth sitting with. Tools like Google Stitch, Gemini, and AI Playground already offer voice and vision capabilities, but the author rejected each one on specific grounds: Gemini and Playground are not tuned to be critical, and Stitch jumps straight to high-fidelity output. Thia is deliberately calibrated to challenge weak ideas, not validate them. The slow upload-then-chat loop of standard LLM workflows was also disqualified for killing flow state. The design constraint was tight: the feedback loop had to be continuous, the thinking had to stay human, and the AI had to push back.
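The calibration the piece describes, critical rather than validating, ultimately comes down to prompt design. As a purely hypothetical sketch (the prompt text and the `build_turn` helper are illustrations, not Thia's actual code), a Socratic system instruction might look something like this:

```python
# Hypothetical sketch of a Socratic system instruction in the spirit of
# Thia. Nothing here is taken from the project's source.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a design collaborator watching a live sketch. "
    "Do not produce finished designs or high-fidelity output. "
    "Instead, question assumptions: ask who the user is, what problem "
    "the current sketch solves, and what the designer has not yet "
    "considered. Challenge weak ideas directly; never validate an idea "
    "just to be agreeable."
)

def build_turn(sketch_description: str) -> list[dict]:
    """Pair the critical system prompt with a description of the
    current sketch frame, in generic chat-message format."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"Current sketch: {sketch_description}"},
    ]

turn = build_turn("rough wireframe of a checkout flow, three steps")
print(turn[0]["role"])  # system
```

The point of the sketch is the negative constraints: the model is told what *not* to do (render finished output, agree reflexively) as explicitly as what to do.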

The piece links to a BBC Future report on AI degrading critical thinking skills and a McKinsey analysis arguing those same skills will command a premium after AI disruption. That tension is the real subject of this article, not the app. Read it for the build process, the prompt architecture decisions, the honest account of where AI tools currently fall short for early-stage design work, and the detail that the author hit his token spending cap twice in three days.

[READ ORIGINAL →]