
# User Testing Guide

User testing is a habit of great design teams. Here we look at how it works, what it's not, and how to go beyond the basics for deeper insights.

Building a site or app where business and customer goals align is tough. High performing teams know this, and use this knowledge to bring real people into their UX design process. One important way to do this is user testing.

One important caveat upfront. There's no one right way to do research. Things will be messy. Things may not go as planned. But things will be learned. Your team will get better at it. There's only one real mistake: to do no research at all because it seems too hard or too expensive.


User tests, also called "usability tests," are a form of UX research. Whatever name you prefer, they're all pretty much the same thing.

# What You Can Test

Nearly any UI can be tested in a user test. Live websites, website prototypes, touch apps and touch app prototypes—all are fair game. But hold on a sec: it's a "user test," right? Aren't we testing users rather than UIs?

The answer is you're testing both. You're testing the UI, but you're also testing the extent to which that UI can be used by the user, the tester, or research subject. We think it helps to break the things you can test into two groups: one group for things related to the UI, and one group for things related to the person.

  • UIs - Items related to the thing you're making such as a site or an app.
  • People - Items related to the person, the user, or research subject.

# The UI Level

You can also refer to the UI as your stimulus in the context of user testing. These are the things that you want to put in front of the person who will provide feedback in the user test. And just as there are many elements of a UI, there are many things you can test. A short but not at all exhaustive list might include:

  • Prototypes - Mockups made using tools like InVision, Figma, XD or Axure.
  • Flows - Stepped processes like registering or purchasing.
  • Content - Help pages, videos.
  • Websites - Live websites, including competitors'.
  • Visual design - Design comps or flat pages showing a visual approach.
  • Animations - Parts of prototypes or live sites that convey ideas.
  • Messaging - Value propositions, headings, labels.
  • Layouts - Information architectures, element positioning, responsiveness, progressive disclosure.
  • Navigation - Systems for getting around.

Not every test will encompass every item listed here. A test that tried to would have far too large a scope. It all depends on your goal, which should be tightly focused.

# People

User testing is about people and their experiences. It's a practice designed to help you get into other people's headspace and see if what you've made matches up with their way of thinking and doing. Let's enumerate some areas that can come into play.

  • Interactions - What physical actions do people take as they work through the task? Are these actions what you expect or do they conflict with what you've made?
  • Understanding - Do people get what they're looking at right away? What questions do they have?
  • Mental models - What frame of reference are people bringing to their interactions? Is it what you expect them to have, or something different?
  • Expectations - What sorts of social, personal, or cultural expectations are people bringing to their experience?
  • Feelings - How do people say they feel as they interact with your artifact? Is it something you expect?
  • Blind spots - What do people neglect to see or do as they have their experience?
  • Future actions - What future actions (like converting) are they likely to take after their experience?

Wrapping your head around all of the areas that come into play when someone has an experience can help you formulate your research goals and help refine your observation skills when you're making sense of your results.


To create a user test on SoundingBox, see our step-by-step how-to guide.

# What You Can Learn

People often think that user testing is only about finding usability problems. Were people successful? At what rate did they fail? While you certainly will identify usability problems, this view can be a bit narrow. Part of the power and appeal of user testing is that all sorts of unexpected and sometimes powerful insights emerge which should not be ignored. Top teams engaged in creating breakthrough products are fully invested in this idea.

Let's go through a short but not comprehensive list of the types of things you can take away from user testing.

  1. Get design direction - Find out what's working and what's not, from layouts to visual treatments, branding, navigation, and nomenclature.
  2. Set priorities - Determine what to fix now (is it a showstopper?) and what can be safely delayed.
  3. Build consensus - Bring the team together around a shared understanding of the challenges people face, building empathy. Get buy-in for making changes.
  4. Triangulate - Does your data confirm or conflict with other data you're collecting using methods like web analytics or surveys?
  5. Innovate - As your tests take shape you're going to get glimpses of things you and your team simply hadn't thought of. These are the seeds of true innovation.

# How User Testing Works

Given all the nuances of the things you can test and learn from user testing, you might think that user testing is complicated. But if you look at the fundamentals, how it works is simple. All user tests boil down to the following steps:

  1. Give someone things to try.
  2. Watch what they do.
  3. Listen to what they say.
  4. See if they succeed, hesitate, or have questions.
  5. Try to understand what's behind their issues.

That's the gist, and, of course, there are many ways to vary this, but at its core user testing is about watching, listening, and learning. It is about observing people.

# What User Testing is Not

As we've mentioned, user testing goes by other names like usability testing, UX testing, or user research. All of these processes are more or less the same: they help you see your app or site through the eyes of someone else.

User testing differs from other forms of research and data collection. Understanding the differences among these techniques is a critical step on your journey to becoming a research black belt.

# Passive Data Collection

There are a handful of passive data collection techniques. They're passive in that people aren't actively providing their feedback. You're attempting to get at this data through passive logging, usually with a script that you add to your pages or embed in your app.

# A/B Testing

Tools like Optimizely do a great job letting you compare designs with a large sample taken from your actual site visitors. Which version performed better and got people down the funnel more often? A/B testing can give you that answer definitively. It's a fine technique if you've got the site visitors and don't mind using them this way. But A/B testing doesn't offer you much of a path for comparing early-stage prototypes that may have different approaches to design. (SoundingBox has a system for this which we call prototype A/B testing.)
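Under the hood, that "which version performed better" question comes down to comparing two conversion rates. As a rough illustration only (the visitor counts are made up, and the choice of a two-proportion z-test is our assumption for the sketch, not how any particular tool computes its results):

```python
# Sketch: two-proportion z-test comparing conversion rates of variants A and B.
# The counts below are illustrative, not real data.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value suggests the difference in funnel performance is unlikely to be chance, which is the "definitive" answer A/B testing trades on; note it still tells you nothing about *why* one version won.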

# Web Analytics

Web analytics tools like Google Analytics or Kissmetrics have been around a long time. Want to know which pages get the most traffic? Want to see where people bounce? Web analytics is your tool. Running a site without web analytics tracking is like driving at night without headlights. But while web analytics is great at telling you what happened (people bailed on this page), it won't tell you much about why they bailed—something that user testing can.

# Opinion Research

While user testing certainly can capture opinion-oriented data (we embrace this through questions), sometimes people think they can get by with opinions alone. You can't. You also need to capture behavior.

# Surveys

Survey tools like Survey Monkey or Typeform can be useful. They give a broad read on how you're doing or what features people might want. But surveys alone can't capture behaviors (what people do and don't do)—they simply can't capture the kind of rich information you get with user tests.

# Satisfaction Research

Tools like OpinionLab, Hotjar, and Usabilla are a hybrid of surveys and web analytics. These are the widgets you see on sites asking you to rate a page. They can sometimes highlight problem pages, and when combined with the ability to capture a screenshot of something that went wrong, they may help discover things that are broken. But you're a little out of luck if you want to get at the kinds of things user testing does.


Don't confuse user testing with QA testing. QA testing is something you're going to want to do on a finished product, whereas user testing is something you do as early and as often as possible while you design and develop a product.

# Adopting The Researcher Mindset

Sometimes people want to substitute some of the research methods mentioned above for user testing. Hey, we're already getting this info from Hotjar and Optimizely, aren't we? It's tempting to think you are. That's the beauty of passive data collection methods. You sit back and have the feeling that you have all the information at your fingertips.

User testing requires a different mindset. This mindset says humbly: "By being so close to the thing we're making, we're likely to miss how others will experience it—what others will think about it."

In a perfect world, this mindset is shared by all members of a design team engaged in a design process. When viewed in this light, user testing is more of a practice or discipline than a tool. It's something that a team is doing together to bring real people into the process.

# When to Do a User Test

In light of the above—user testing is a practice—teams new to it may ask, when is the best time to test? The answer is anytime you're ready to absorb feedback, which ideally would be often. But because getting feedback can sometimes be a little painful, teams new to testing are inclined to wait until the last possible moment. Waiting too long is a bad idea since by then it may be too late to make changes inexpensively.

So when to test? Early and often.

# Prototyping

Prototyping should be at the center of your UX design process.

The thing to know though is that nobody ever has a perfect prototype. In fact, this is that rare case when it's better not to strive for perfection. Instead, the best approach is to get what you have in front of people sooner rather than later, doing your best to shave off the rough edges so that testers can absorb what you're asking them to do.

# Authentic User Testing

Since user testing should happen earlier rather than later, usually on prototypes, having your test feel as authentic as possible will help you get the most out of it. Here are some tips for keeping it real.

# Build Out the Big Interactions

Build out your interactions as much as you can without going overboard. If an animation is key to understanding something, prototype that. If it's multiple steps, prototype them. When setting up your task prompts, it's fine (and sound) to let participants know that what they're going to be interacting with is a prototype and that some things may not be functional.

# Use Real Devices

If user testing is a form of role-playing, the more authentic the game, the better. This means you're likely to get more meaningful results if you can test your prototype in the same context where it will be used. A fundamental requirement should be that if it's a mobile experience, test it on a touch device. If it's an iOS app prototype, give it to testers who are iPhone users who can interact with it on their actual device. Same goes for Android.

# Thinking Out Loud

Another critical ingredient is asking the participant to think out loud as they work. SoundingBox tasks do this by default, and the think-aloud process fulfills a key promise of user testing and user-centered design in general: we need to try as much as we can to get into the heads of the people we're building our product for.

# Focus Your Goal

It's important to define your research question. What are we trying to learn here? Is it that we want to know whether people can complete this process without getting stuck? Or is it that we want to know what people think of the new home page? Don't try to combine research goals into the same task (sometimes this will happen naturally). Instead, follow the rule: one goal, one task. In SoundingBox you can have multiple tasks in the same study, but only combine tasks into a flow that will feel natural to the participant. Remember that research is something that we design, too.

# Finding Testers

Seems pretty straightforward: give someone something to try (we usually call these "tasks") and see how they do. But where do you find these people? How many should you test? SoundingBox can help with screening participants, but you can also recruit from your own database, use a third party, or just grab some people from down the hall.
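On the "how many people" question, one widely cited rule of thumb comes from Nielsen and Landauer's problem-discovery model: with n testers, you can expect to uncover roughly 1 - (1 - p)^n of the usability problems, where p is the chance that a single tester hits a given problem (their often-quoted average is p ≈ 0.31). Treat this as a heuristic sketch, not a guarantee for your study:

```python
# Sketch of the Nielsen/Landauer problem-discovery heuristic:
# expected share of usability problems found by n testers is 1 - (1 - p)^n.
# p_hit = 0.31 is the commonly cited average per-tester discovery rate.
def share_of_problems_found(n_testers, p_hit=0.31):
    return 1 - (1 - p_hit) ** n_testers

for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} testers -> ~{share_of_problems_found(n):.0%} of problems found")
```

This is where the familiar "five users find about 85% of problems" claim comes from; the flip side is that running several small rounds of testing, fixing issues between rounds, usually beats one big study.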

# Further Reading