Episode 155 · 45 min

155: UNDERSTANDING USER RESEARCH: SURVEYS, MYTHS, AND THE ROLE OF AI WITH ELS AERTS

Spotify · Apple
FEATURING
ELS AERTS

AGConsult

User research expert specializing in survey methodology and UX optimization.

The democratization of user research sounds empowering until you realize most people approach it like baking a cake with whatever's in the fridge, no recipe required. Surveys have become the most powerful and most abused tool in the UX toolkit, with AI promising to make everything faster while simultaneously tempting teams toward synthetic shortcuts that produce nothing but generic, useless insights.

Els Aerts, co-founder of AGConsult and a user research veteran since the pre-Google Analytics era, brings three decades of perspective to this chaos. Her verdict on synthetic users is unequivocal: they cannot replicate the frustration, confusion, or genuine reactions of real people interacting with real products. The path forward requires returning to fundamentals, mastering targeted surveys, knowing when AI helps versus when it hallucinates, and accepting that expertise cannot be democratized without consequences.

Evolution of User Research Methods · The Dangers of Synthetic User Research · Survey Design and Common Mistakes · Democratization of User Research · AI as Research Assistant · Career Specialization in UX

KEY TAKEAWAYS

  • Synthetic users produce only generic, universal problems that apply to every product, making them useless for evaluative research that identifies specific improvements.
  • Targeted online surveys that trigger at precise points in the customer journey represent the biggest methodological breakthrough in qualitative research, but only when designed by someone who understands question fundamentals.
  • AI excels at repetitive research tasks and rubber-ducking ideas, but its analysis must be verified because it tends to hallucinate insights that aren't actually supported by data.
  • Expertise cannot be skipped: approaching user research without basic training is like asking an intern to configure enterprise analytics and expecting accurate data.
  • Career success in experimentation comes from going deep in one specialty rather than attempting mastery across analytics, copywriting, and research simultaneously.

SHOW NOTES

The Pre-Analytics Era and What It Taught Us

Before Google Analytics existed, user researchers had a limited but focused toolkit. Moderated user testing, interviews, and basic surveys dominated the field. There was no surgical targeting, no scroll-depth triggers, no behavioral segmentation. What researchers lacked in tools, they compensated for with fundamental skills in asking the right questions and interpreting human behavior.

This constraint bred expertise. When you cannot automate or scale, you learn to extract maximum value from every interaction.

Targeted Surveys Changed Everything

The ability to trigger surveys at precise moments in a customer journey represents the single biggest advancement in qualitative research methodology. A survey appearing after a specific behavior, on a particular page, at a defined scroll depth can yield insights that broad-blast questionnaires never could.
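The mechanics behind such a trigger are simple. As a rough illustration only (not something walked through in the episode), here is a minimal browser-side sketch in TypeScript; the showSurvey stand-in, the "/pricing" page, the survey identifier, and the 60% threshold are all assumptions for the example, not details from any particular survey tool:

    // Hypothetical stand-in for whatever survey tool is in use; a real
    // integration would call that tool's own API here instead.
    function showSurvey(surveyId: string): void {
      console.log(`Survey ${surveyId} would be shown here.`);
    }

    const SURVEY_ID = "checkout-friction";   // assumed survey identifier
    const SCROLL_DEPTH_THRESHOLD = 0.6;      // trigger at 60% of page height
    let surveyShown = false;

    function onScroll(): void {
      if (surveyShown) return;

      const scrolled = window.scrollY + window.innerHeight;
      const total = document.documentElement.scrollHeight;

      // Only trigger on one specific page, past the scroll-depth
      // threshold, and only once per page view.
      if (
        window.location.pathname === "/pricing" &&
        scrolled / total >= SCROLL_DEPTH_THRESHOLD
      ) {
        surveyShown = true;
        showSurvey(SURVEY_ID);
        window.removeEventListener("scroll", onScroll);
      }
    }

    window.addEventListener("scroll", onScroll, { passive: true });

The point of the sketch is only that the targeting logic is trivial; everything that matters happens in the questions the survey then asks.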

But this power created new problems. Survey tools became accessible to everyone, regardless of training. The result is a landscape littered with poorly constructed questions, leading prompts, and data that looks useful but leads teams in the wrong direction. Erika Hall's famous hatred of surveys makes sense when you examine what most organizations actually deploy. The method itself remains powerful. The execution typically fails.

Accessibility without education produces what some call research theater: activities that look like research, feel productive, and generate reports, but deliver no genuine insight.

Why Synthetic Users Fail Completely

AI-generated synthetic users cannot replicate what matters most in evaluative research: genuine human frustration, confusion, and emotional response to specific product problems. When asked to simulate user feedback, AI produces the same generic complaints that apply to literally every digital product. Navigation is confusing. Information is hard to find. The process takes too long.

These observations help no one. Real users reveal specific friction points unique to your product. They articulate problems you never anticipated in language that exposes the gap between your mental model and theirs. Can synthetic users serve market research purposes? Perhaps. For evaluating and improving existing products? The answer is an unequivocal no.

The Democratization Problem

Nobody expects an intern to configure Adobe Analytics correctly on their first day. The organization understands that misconfigured tracking produces bad data, and bad data produces bad decisions. Yet the same organization might hand survey design to someone who has never studied question construction, response bias, or research methodology.

The comparison is apt: both produce data that looks legitimate while systematically misleading decision-makers. The difference is that analytics errors often surface through obvious inconsistencies, while flawed research produces confident-sounding insights that quietly steer strategy in the wrong direction.

AI as Research Partner, Not Replacement

AI functions like having a full-time intern for repetitive tasks. It accelerates transcription, initial coding, and draft analysis. It serves as an excellent rubber duck when no human colleague is available to pressure-test research questions. But what happens when you trust its analytical conclusions without verification?

Hallucination remains a genuine risk. AI will confidently present insights that sound plausible but aren't actually supported by the data. The expertise requirement hasn't disappeared. It has shifted from conducting repetitive tasks to validating AI output and knowing which questions to ask in the first place. The work has become more interesting, not necessarily easier.

Going Deep Beats Going Wide

Early career experimentation professionals often want to master everything: analytics, copywriting, research, statistics, design. The instinct is understandable. Breadth provides context and makes collaboration easier.

But genuine expertise requires depth. Finding someone who excels equally at quantitative analytics, persuasive writing, and qualitative research methodology is nearly impossible. The most valuable career advice for newcomers: explore broadly at first, identify what genuinely interests you, then commit to mastering that specialty. The fundamentals of talking to people, asking good questions, and interpreting responses haven't changed much since the 1990s. The tools evolved. The core skills remain.
