Everyone wants to do user testing, but getting started can be hard if you're not sure how to actually plan and conduct a test. In this post, we'll walk through three steps to running a successful user test.
1. Define your goal
The first step is to determine what you want to get out of user testing. Discovering usability problems is a common goal, but you can also learn which features are most engaging, assess the appeal of visual design alternatives, or benchmark the extent to which users identify with a brand. Here are some examples of common UX research questions:
- Do prospects understand what we offer?
- Are there any problems with our prototype of the new checkout process?
- Can people see how to get started doing X?
- Can people find the helpful content we have on the site?
- Is the content we have engaging?
- Are our competitors doing something better than we are which we can learn from?
- Is this proposed solution to a prior problem we’ve found actually better?
- Of our three visual design mockups, which one is most appealing, and why?
SoundingBox provides several templates with task prompts and questions to help you get started. Task prompts generally can take two approaches: goal-oriented and open-ended.
Goal-oriented tasks prompt the participant with a specific goal to accomplish. For example, if you were testing an e-commerce website, you might ask them to find a pair of pants under $100 and add it to their cart. This approach is great for gauging usability, for testing specific processes such as completing an application, or for making sure people can find important functions on your site, such as how to register.
Open-ended tasks prompt the participant to do something without a specific end goal in mind. A basic example is simply asking participants to explore the site and try things that interest them. You can also give them a little direction while still keeping the task open-ended; for example, you could have people explore freely but ask them to stay on a specific topic. Open-ended approaches are good for getting a sense of what people find engaging or interesting. They can also be a good way to start a test, letting people get comfortable with your site or prototype before you ask them to try goal-oriented tasks.
Part of defining your goal is determining who you want to participate in your test. Generally, you would want to recruit participants who fit the profile of typical users of your website or product. SoundingBox provides deep demographic options for recruiting participants, as well as the option to recruit a general population mix. You can also use your own participants, which is often the best choice if you are looking for a very rare or niche profile.
Another important consideration is how many participants to test. This depends on your goals. For example, if you want to compare multiple prototypes or sites, you would want to test at least thirty people per group to have decent statistical confidence. If your goal is to discover the most significant usability problems on a single site, a smaller sample will likely suffice, because most usability problems are revealed after testing five to eight people. In practice, considerations such as budget often determine sample size. We've worked hard to reduce participant recruiting costs for SoundingBox so that it's actually reasonable to run studies with larger sample sizes.
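The "five to eight people" figure comes from a simple probability model: if a given usability problem affects a proportion p of users, the chance of seeing it at least once in n sessions is 1 - (1 - p)^n. A minimal sketch of that arithmetic, assuming p = 0.31 (the average problem-detection rate reported in Nielsen and Landauer's classic studies; your own rate may differ):

```python
def discovery_rate(n, p=0.31):
    """Probability of observing a usability problem at least once in n sessions,
    assuming each participant independently encounters it with probability p."""
    return 1 - (1 - p) ** n

for n in (1, 5, 8, 15):
    print(f"{n:2d} participants -> {discovery_rate(n):.0%} of such problems seen")
```

Under these assumptions, five sessions surface roughly 84% of problems of that severity and eight sessions about 95%, which is why small samples work well for problem discovery but not for comparing designs.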
2. Choose what to measure
Once you’ve defined your goal, you need to choose what to measure. You might wonder why you should measure anything at all. It’s true that the insights from user testing often come from simply observing what people do. But measuring is also valuable for a few reasons:
Measurements can help accelerate analysis
It can be quite time-consuming to watch many test recordings. Measurements give you a more objective way to sift through them. You might start by watching the sessions where people gave the prototype the poorest ratings and try to find out why, or look at the sessions of those who gave it the best ratings to see what’s working well.
Measurements can be more persuasive
If you’re presenting the findings of a test, having some measurements to support your claims can be more compelling than qualitative data alone. Even if you’re working with a small sample size, measuring provides a more objective way of evaluating how an experience performed.
Measurements let you compare and track performance
Let’s say you run a user test on a prototype, glean insights from the qualitative data, redesign the prototype, and then test it again. How do you determine whether the design improved? Without measurements from both tests, you can’t objectively say. Or what if you have two alternative prototypes and want to find out which one is better? Again, measurements are necessary to truly answer this question.
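One lightweight way to make such a comparison concrete is to summarize each round's ratings with a mean and a rough confidence interval; if the intervals clearly don't overlap, the improvement is likely real. A sketch with hypothetical 1-to-7 ease-of-use ratings (the data and round names are illustrative, not SoundingBox output; small samples shown for brevity):

```python
import math
import statistics as st

# Hypothetical ease-of-use ratings (1 = very hard, 7 = very easy)
# from two rounds of testing the same prototype.
round_1 = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
round_2 = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6]

def summarize(ratings):
    mean = st.mean(ratings)
    # Standard error of the mean: sample std dev / sqrt(n)
    sem = st.stdev(ratings) / math.sqrt(len(ratings))
    return mean, sem

for name, data in (("round 1", round_1), ("round 2", round_2)):
    mean, sem = summarize(data)
    # 1.96 * SEM gives an approximate 95% interval (normal approximation,
    # which is rough at small n but fine for a first read).
    print(f"{name}: mean {mean:.1f} +/- {1.96 * sem:.1f}")
```

With real studies you would want larger samples per round (the thirty-per-group guideline above) and, ideally, a proper significance test, but even this simple summary turns "it feels better" into a number you can defend.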
What to measure
The type of goal you’ve chosen can help determine what to measure. For example, if you want to get a read on the usability of your site, you will want to ask questions that measure how easy people felt the site was to use, how successful they felt attempting the tasks, and what kinds of problems they encountered. Again, if you’re unsure about how to ask these questions, SoundingBox provides templates with questions designed to measure different aspects of experience like usability and conversion likelihood.
When choosing what to measure, it’s often a good idea to ask a mix of quantitative and qualitative questions. Quantitative questions help you get a read on how well your design is performing and qualitative questions help you understand why. The SoundingBox dashboard was designed to help connect your quantitative data with the qualitative through our smart tiles.
Another benefit of quantitative measures is that you can track your performance over time and compare multiple designs or competitor sites in an objective way. The SoundingBox dashboard provides an intuitive view to compare sites within the same test and compare multiple tests side-by-side.
3. Analyze and Iterate
Analyzing the results of a user test can be a slow and tedious process if you follow the standard approach: watch all of the recordings, take notes, and then try to tease out patterns. But this process can be streamlined if you’ve designed good task prompts and taken measurements. SoundingBox helps you get to the insights faster by providing built-in interpretation so that you can start to form impressions right away.
It also provides sorting tools so that you can jump right to the recordings where people had the best experience, or worst experience. SoundingBox also makes it faster to get insights from comparative data by providing side-by-side views of how each site or prototype performed.
Once you’ve analyzed the results, you’ll come away with insights about how to improve your prototype or site. After you implement those changes, you’ll want to test again to see whether they led to measurable improvements. SoundingBox makes it easy to re-run your test by letting you save any test definition as a template. With each iteration, you can learn new things about how to improve the experience on your site or prototype.