AI skepticism is not career suicide; it is a research methodology. A 2025 Pew Research study found 50% of Americans are more concerned than excited about AI in daily life, and UX research leader Ashlee Edwards is betting that number looks different inside tech companies. Her argument is not anti-AI. It is pro-quality. She leads a team of eight researchers and was asked, like every other UX team, to proactively identify AI workflow opportunities. Her response was to build a north star before touching a single tool: AI should support, not replace, research craft, assessment, and revision.

The specific guidelines Edwards created are worth reading in full because they draw a precise line most teams never bother to draw. Researchers on her team are prohibited from using AI to develop or refine research questions, a task she argues requires a depth of business context that LLMs lack, which is why they produce only banal output. They are permitted to use AI for survey data cleaning, with mandatory documentation of the tool and process used. Every AI-generated summary must be footnoted. She also reframes the conversation with leadership around risk versus reward rather than enthusiasm versus resistance, asking whether a given tool produces output that researchers would stake their credibility on.

What makes this piece worth reading beyond the guidelines is the diagnosis underneath them. Edwards names the actual cost of AI-accelerated UX work: rapid prototypes that are two screens connected by a tap, AI-generated research insights that send product teams in the wrong direction, and designs pushed to production that stall in engineering code review. She cites Judd Antin and Jess Holbrook on ResearchSlop as the empirical backbone. The argument is that AI is not changing the design and research process; it is adding review steps while consuming the institutional knowledge that makes those reviews possible in the first place.

[READ ORIGINAL →]