Pitfalls To Avoid In User Research

Part 1 / 2

The Role of User Research in Design

User Research, also known as User Experience (UX) Research, is the driving force of UX and user-centered design. In fact, it's arguably the most distinguishing characteristic of UX: it enables us to make truly informed, data-guided product decisions rooted in empirical feedback from real users, instead of relying solely on the assumptions and intuitions of subject matter experts, or on vague numbers. In a big-data era where more analytics don't necessarily mean more clarity, User Research tells a coherent story behind, and among, the numbers.

Oftentimes, more numbers mean more questions. User research, both qualitative and quantitative, is what makes UX such a wildly successful approach to product development: it takes so much of the guesswork out of delivering fantastic experiences.

Why We Wrote This For You

There's a ton of online content about how to conduct user research, but little of it covers which potential problems to avoid throughout a study. That's why we're publishing this article about the pitfalls of user research, in other words, what not to do, based on the hard-won experience of teams who've been there and done that.

In this piece, we’ll highlight two important pitfalls to avoid in each of the four phases of a typical research project: Planning, Recruiting, Data Collection and Reporting.

Current Focus

This article is broken into two parts. Here you’ll find the first four pitfalls, corresponding to the Planning and Recruiting Phases. If you’re looking for pitfalls to avert in the Data Collection and Reporting Phases, then please click the link at the bottom of this piece.

  • Pitfall #1: Failing to Consolidate Background Information
  • Pitfall #2: Failure To Utilize Mixed-Methods
  • Pitfall #3: Settling For Poor Sample Quality
  • Pitfall #4: Banking On An Insufficient Sample Size
  • Pitfall #5: Employing Suboptimal Moderators
  • Pitfall #6: Not Assigning Multiple Note Takers
  • Pitfall #7: Taking Directional Data As Definitive
  • Pitfall #8: Producing The Wrong Type of Deliverable


The Planning Phase

Planning consists of the non-recruiting activities leading up to data collection, including project scoping, kickoff meeting(s), product demos, and preparing for testing/observation.

Pitfall #1: Failing To Consolidate Background Information

There’s always a background story pushing the need to conduct a research study. Clarifying the reasons for commissioning a study can shed tremendous amounts of light on how best to illuminate the path forward. For instance, if the ask is to evaluate an existing product’s UX, then the project would benefit immensely by first consolidating:

  • Relevant business metrics and analytics
  • Customer support and call center data
  • Customer insights from the sales department
  • Tacit knowledge from the product team and SMEs
  • Social Media data
  • Ratings/Reviews from both internal sources and across the internet
  • Previously completed primary and secondary research

Beginning a research project with these relevant background materials sets teams up for success with a highly informed starting point. Sometimes, simply consolidating all that knowledge trapped in various parts of the organization obviates the need for even conducting a UX research study — ultimately saving time and resources.

An added benefit to this approach is that it’s a great excuse to network throughout your organization and spread the good word of UX. Effective UXers are masters at this, which is just one of the reasons why they command such high salaries.

Pitfall #2: Failure To Utilize Mixed-Methods

The point of all research is to uncover unknown phenomena and/or to validate existing knowledge. When it comes to achieving this in a professional setting, overreliance on a single method can cause serious issues down the line. There are hundreds of methodologies to choose from, yet many teams run the same type of experiment over and over on different stimuli…

“If all you have is a hammer, everything looks like a nail.”

Convincing decision-makers to act upon research findings can be hard to do with a single data set. However, that key outcome becomes much easier to achieve when you’ve leveraged a mixed-methods approach. This is because the triangulation of multiple datasets tells a stronger story (especially if you started by avoiding pitfall #1, above!).

For instance, instead of running a UX Test every time you want to evaluate design decisions, try complementing it with a pre-test or post-test survey. In this manner, the deep insights yielded by UX Testing can be bolstered by the survey's qualitative and quantitative insights from a larger group of people.

Alternatively, if you're assessing the need for particular features, look to analytics instead of interviewing or surveying users about their interest. Begin with Fake Door or A/B Testing to see how many people click on a proposed feature, tallying up the votes to make your next decisions based upon statistical significance. Then, follow up with intercept-based UX Testing to dig further into how the features should be implemented and to assess in-context viability.

Or, when you're approaching a product redesign, consider triaging the existing experience from several angles: perhaps by analyzing key metrics, executing summative research, and following all that up with a series of in-depth interviews to clarify pain points.
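As a sketch of that "tallying up the votes" step, here's a minimal two-proportion z-test you might run on Fake Door or A/B click counts. The click numbers are hypothetical, and in practice you'd plan your sample size and significance threshold before the test, not after:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test comparing two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical fake-door results: variant A got 120/2000 clicks, B got 90/2000
z, p = two_proportion_z(120, 2000, 90, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If p falls below your pre-chosen threshold (commonly 0.05), the difference in interest between the two variants is unlikely to be noise, which is exactly the kind of directional evidence worth following up with intercept-based UX Testing.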

In multi-method approaches such as these — or any similar logical combinations — you can gain higher clarity and tons more confidence in the research findings. Of course, this also means you get to produce far more compelling results that have a higher likelihood of mobilizing decision-makers to act.


The Recruiting Phase

Recruiting consists of identifying, selecting, and scheduling participants for the study. The degree to which participants represent your target users directly affects the validity, generalizability, and overall quality of your research.

Pitfall #3: Settling For Poor Sample Quality

Even today, recruiting continues to be a bottleneck for many kinds of studies. The tradeoff between speed and quality is almost always inevitable. Depending on factors such as how hard your participants are to find and which channels you're recruiting from, it can take anywhere from hours to a month or more to line up the right people.

For example, finding regular consumers is easy thanks to online panels such as UserInterviews.com. However, it can still take several weeks to find and schedule elusive profiles such as highly paid professionals, specialized technicians, or exclusive types of B2B users through the appropriate means. When the clock is ticking on a project timeline, you don't want to be in a position where you have to settle for less-than-ideal participants.

Some tips for staying on-time and on-track:

  • Be realistic about how long it will take to recruit target users.
  • Scope out timelines accordingly, as the validity of your research hinges largely on how representative your participants are of your target user population.
  • Don't settle for less-than-ideal participants because of a fairly arbitrary deadline, inappropriate recruiting channels (online panels vs. traditional research recruiters vs. customer lists, etc.), or lazy screening criteria (for behaviors, demographics, and perspectives).

Pitfall #4: Banking On An Insufficient Sample Size

Nearly everyone in UX has heard the oft-repeated phrase: "5 to 7 users are enough." The truth is, although it's blindly regurgitated in so many discussions of how many participants to include in a study, it's actually wildly misunderstood.

While it's based on research findings Jakob Nielsen published almost 20 years ago, his overarching recommendation is actually intended for very specific situations.

For instance, when identifying “usability” problems for digital products with a single group of users within a small problem scope, 5–7 participants might work as they can sometimes catch most usability issues.
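The math behind that recommendation is a simple probability model: if each participant independently encounters a given usability problem with probability L (Nielsen reported an average of roughly 31%), then the share of problems found by n participants is 1 − (1 − L)^n. A quick sketch, treating the 31% figure as an assumption that varies widely by product and task:

```python
def discovery_rate(n_users, p_detect=0.31):
    """Probability that at least one of n_users encounters a given problem,
    assuming each user hits it independently with probability p_detect
    (0.31 is Nielsen's oft-cited average, not a universal constant)."""
    return 1 - (1 - p_detect) ** n_users

for n in (1, 3, 5, 7, 15):
    print(f"{n:>2} users -> {discovery_rate(n):.0%} of problems found")
```

With L = 0.31, five users surface roughly 85% of problems, which is where the rule of thumb comes from. But if a problem only affects a small subgroup, or L is closer to 0.1, the curve flattens dramatically and far more participants are needed.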

In that article, Nielsen also recommends multiple small-sample tests instead of one large test, which is something else a lot of UXers miss in practice. There are a ton of assumptions, limitations, and caveats in there, right? Right.

How many real-world projects fit those precise specifications? Not many. So when it comes to solving most business and design problems, 5–7 participants simply don't suffice. It's important to think critically about the scope of your problem space, the diversity of your target users, and what it'll take to convince stakeholders to trust the findings and act decisively as a result.
