AI tools like ChatGPT and Claude can generate a complete survey draft in seconds when given a clear research objective. Nielsen Norman Group tested these tools against established survey-design best practices and found they handle foundational writing tasks reasonably well.

The problem is not speed. It is judgment. GenAI misses subtle design flaws, the kind that do not look wrong but quietly corrupt your data, and an experienced human reviewer is still needed to catch them. The full article breaks down exactly which tasks AI handles competently and where it falls short, with specific examples from real survey drafts.

If your team is using AI to accelerate UX research, treat this piece as a practical checklist rather than a philosophical debate about AI's limits. Read it before your next survey goes live.

[READ ORIGINAL →]