We're aiming to make it simple to create excellent research that moves your team forward. There is no one right way to do UX research, so we've reduced the process to its essentials without sacrificing the power to answer tough questions.
Define your goal
The first step is to determine what you want to get out of your research. Discovering usability problems is a common goal, but you can learn many other things too: which features are most engaging, how appealing alternative visual designs are, or how strongly users identify with a brand. Here are some examples of common research questions:
- Do prospects understand what we offer?
- Are there any problems with our prototype of the new checkout process?
- Can people see how to get started doing X?
- Can people find the helpful content we have on the site?
- Is the content we have engaging?
- Are our competitors doing something better than we are that we can learn from?
- Is this proposed solution to a prior problem we’ve found actually better?
- Of our three visual design mockups, which is the most appealing, and why?
To get started, create an account if you haven't already, click Create a Study, give your study a name, and work through the steps.
What to test
You can test three kinds of things in SoundingBox (we also refer to tests as studies):
- Live websites - Sites that are already built out. Any publicly available URL will work.
- Website prototypes - A prototype version of a website. It can be static or interactive and built with any prototyping platform (Invision, Axure, etc.).
- Mobile app prototypes - Prototypes of apps that run on a touch device (iOS or Android), usually created with Invision or a similar prototyping tool.
Types of tests
There are three types of tests you can run on SoundingBox:
- Basic usability test - See how people experience one version of a prototype or live site.
- Experience2 - Compare how people experience two or more sites or prototype versions to determine which one they prefer.
- Journeys - See how people make decisions as they shop for a particular product or service across websites.
Asking participants to think out loud as they work is an option with any test type. It adds $5 per participant to your test, so choose it only if you'll have time to review the responses in depth. You can enable thinking aloud on the first step of the study creation process.
SoundingBox can provide participants who fit the profile of your website's or product's typical users, through a process we call screening. You can also supply your own participants; this is rarely necessary, but it can help if you're looking for a very rare or niche profile.
How many people to test?
Your sample size (how many participants to test) depends on your goals. For example, if you want to compare multiple prototypes or sites in an experience2 test, you may want at least thirty people per group to have decent statistical confidence. If your goal is to discover the most significant usability problems on a single site or prototype, a smaller sample will likely suffice. In practice, practical considerations such as budget often determine sample size. We've worked hard to reduce participant screening costs in SoundingBox so that running studies with larger sample sizes stays reasonable.
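To see why roughly thirty people per group is a common floor for comparisons, consider the 95% margin of error on a simple task-success rate. This is a back-of-the-envelope sketch using the textbook normal approximation; the numbers are illustrative and not a SoundingBox feature:

```python
import math

def margin_of_error(p, n):
    """95% margin of error for an observed proportion p with n participants,
    using the normal approximation: 1.96 * sqrt(p * (1 - p) / n)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): five participants can only pin a success rate down
# to about +/- 44 percentage points, while thirty narrow it to about +/- 18.
print(round(margin_of_error(0.5, 5), 2))   # 0.44
print(round(margin_of_error(0.5, 30), 2))  # 0.18
```

The takeaway: with very small groups, even large-looking differences between designs can be noise, which is why comparison studies benefit from larger samples than single-site problem discovery.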
Getting to know tasks
Tasks are the activities you ask people to try, and they're essential to any test. Task prompts generally take two forms: goal-oriented and open-ended.
With a goal-oriented task, you prompt the participant with a specific goal to attempt. For example, if you were testing an e-commerce website, you might ask them to find a garment under $100 and add it to their cart. This approach is excellent for assessing usability, for testing specific processes such as completing an application, or for making sure people can navigate to essential flows on your site, such as registration.
With an open-ended task, you prompt the participant to do something without a specific end goal in mind. A basic example is asking participants to explore the site and try whatever interests them. You can also give a little direction while still keeping the task open-ended; for example, ask people to explore but stay within a specific topic. Open-ended tasks are useful for getting a sense of what people find engaging or interesting. They can also be a good way to start a test, letting people get comfortable with your site or prototype before you ask them to try goal-oriented tasks.
Asking questions after tasks
Asking a participant questions after a task is an essential study building block. Questions, such as rating scales, let you measure things. You might wonder why you should measure anything at all. It's true that the insights from basic usability testing often come from simply observing what people do and listening to what they say when they think out loud. But measuring is valuable for a few reasons.
Measurements can help accelerate analysis
Watching many test recordings can be quite time-consuming. Measurements give you a more objective way to sift through them. You might start by watching the sessions where people gave the prototype the poorest ratings and try to find out why, or look at the sessions with the best scores to see what's working well.
Measurements can be more persuasive
If you're presenting the findings of a test, measurements that support your claims can be more compelling than qualitative data alone. Even with a small sample size, measuring provides a more objective way to evaluate how an experience performed.
Measurements let you compare and track performance
Let's say you run a user test on a prototype, glean insights from the qualitative data, redesign the prototype, and then test it again. How do you determine whether the design improved? Without measurements from both tests, it's hard to say objectively.
What to measure
The type of goal you've chosen helps determine what to measure. For example, to get a read on the usability of your site, ask questions that measure how easy people felt the site was to use, how successful they felt attempting the tasks, and what kinds of problems they encountered. If you're unsure how to phrase these questions, SoundingBox provides question templates designed to measure different aspects of experience, such as usability and conversion likelihood.
Combine qualitative and quantitative
When choosing what to measure, it’s often a good idea to ask a mix of quantitative and qualitative questions. Quantitative questions (like scales) help you learn how well your design is performing. Qualitative questions (like open-ended text questions) help you understand why. The SoundingBox dashboard was designed to help connect your quantitative data with the qualitative through our smart tiles.