# Study Design
We aim to simplify the process of creating excellent UX research that helps move your team forward.
There is no one right way to do UX research. Our approach is to reduce everything to the essentials without sacrificing the power to answer tough questions.
# Define a Goal
The first step is to determine what you want to get out of your research. Discovering usability problems is a common goal, but you can learn many other things too: which features are most engaging, how appealing different visual design alternatives are, or how strongly users identify with a brand. Here are some examples of common research questions:
- Do prospects understand what we offer?
- Are there any problems with our prototype of the new checkout process?
- Can people see how to get started doing X?
- Can people find the helpful content we have on the site?
- Is the content we have engaging?
- Are our competitors doing something better that we can learn from?
- Does this proposed solution improve on a problem we found earlier?
- We have three visual design mockups; which one is most appealing, and why?
# Define a Scope
You're likely building something large and complicated with many moving parts. It's tempting to test everything. While we applaud the ambition, it's better to limit your scope. There are a few ways to do this.
- Use existing flows. As part of your design process, you're likely coming up with user flows: a series of UI states or steps that people will need to work through. A flow can roughly map to a SoundingBox task.
- Think about what people can handle. In a remote unmoderated test (something SoundingBox excels at), people will spend roughly 10 or 15 minutes on your test. Try to make whatever you want to test (your scope) fit into that window.
SoundingBox studies consist of a handful of tasks and questions. Make sure that, taken together, your tasks and questions stay within a reasonable scope. Remember, you can always run multiple studies to answer different questions, each with its own scope. Defining a good scope also simplifies analyzing and communicating results.
# Getting Started
To get started, create an account if you haven't already, click Create a Study, give your study a name, and work through the steps.
Designing a Study in SoundingBox
Get creative while leveraging best practices with our simple yet powerful study creation process.
# Estimating Your Cost
Our pricing is straightforward. You pay for two things:
- A monthly subscription that covers data storage and support
- A per-participant charge that is provided when you create your study
The per-participant charge is computed in the following way:
- $5 per screening question that you ask if you choose to screen participants
- $5 per task you ask participants to do
- An additional $5 if you choose to do a think-aloud study
The gist of our pricing model is that studies which require less targeting and work should cost less than studies with more targeting and more work.
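The per-participant formula above can be sketched as a small calculation. This is an illustrative estimate only, assuming the flat $5 rates quoted; the function name and structure are ours, not part of any SoundingBox API:

```python
def per_participant_charge(screening_questions: int, tasks: int, think_aloud: bool) -> int:
    """Estimate the per-participant charge in dollars, assuming the
    rates quoted above: $5 per screening question, $5 per task,
    plus an extra $5 for a think-aloud study."""
    charge = 5 * screening_questions + 5 * tasks
    if think_aloud:
        charge += 5
    return charge

# Example: 2 screening questions, 4 tasks, think-aloud enabled
print(per_participant_charge(2, 4, True))  # → 35
```

Multiply the result by your sample size to estimate the total participant cost of a study.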
# Turnaround Time
Sometimes teams try to pack everything they can think of into a single study. Don't do it! Technically we allow up to 10 tasks in a study, but that doesn't mean you should use them all. Most studies benefit from brevity and a constrained focus. The more narrowly you define your research goal, the more quickly your research will complete: you'll answer your research question and move on to the next project, usually within hours, or a few days for larger studies.
# What to Test
There are three types of tests (what we also call studies) you can run in SoundingBox:
- Live websites - Websites that are currently built out. Can include any publicly available URL.
- Website prototypes - A prototype version of a website. Can be static or interactive, built using any prototyping platform (InVision, Axure, etc.).
- Mobile app prototypes - Prototypes of apps that will run on an Apple iOS or Android touch device, usually created using InVision or another prototyping tool.
Learn more about the kinds of things you can test in our user testing guide.
SoundingBox can provide participants who fit the profile of typical users of your website or product; we call this process screening. You can also supply your own participants. This is rarely necessary, but it can help if you're looking for a very rare or niche profile.
# How Many People to Test?
Your sample size (how many participants to test) depends on your goals. For example, if you want to compare multiple prototypes or sites in a split test, you may want at least thirty people per group for decent statistical confidence. If your goal is to discover the most significant usability problems on a single site or prototype, a smaller sample will likely suffice. In practice, sample size is often determined by practical considerations such as budget. We've worked hard to reduce participant screening costs in SoundingBox so that it's reasonable to run studies with larger sample sizes.
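If you want a rough starting point for split-test group sizes, a standard two-proportion power calculation can help. The sketch below is general statistics, not a SoundingBox feature, and the success rates in the example are assumptions; the default z-values correspond to a two-sided alpha of 0.05 and 80% power:

```python
import math

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate participants needed per group to detect a difference
    between two success rates p1 and p2, using the normal approximation
    for a two-proportion test."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting an improvement in task success from 50% to 70%:
print(n_per_group(0.5, 0.7))  # → 91 per group
```

Note how quickly the required sample grows as the difference you want to detect shrinks, which is why discovery-oriented studies with no comparison can get away with far fewer participants.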
Tasks are the activities you ask people to try and are essential to any test. Task prompts generally can take two forms: goal-oriented and open-ended.
Each prototyping platform has a slightly different mechanism for making it possible to share your prototype with the outside world. Here's a tutorial on working with four of the most popular.
# Goal-Oriented Tasks
Goal-oriented tasks are when you prompt the participant with a specific goal that you want them to try. For example, if you were testing an e-commerce website, you might ask them to find a garment under $100 and add it to their cart. This approach is excellent for assessing usability or if you have specific processes that you want to test like completing an application, or you want to make sure that people can navigate to essential processes on your site like how to register.
# Open-Ended Tasks
Open-ended tasks are when you prompt the participant to do something but without a specific end-goal in mind. A basic example is asking participants to explore the site and try things that interest them. You could also give them a little direction while still keeping things open-ended; for example, you could have people explore, but ask them to stay on a specific topic. Open-ended approaches are useful for getting a sense of what’s engaging or interesting to people. They can also be a good way to start a test, letting people get comfortable with your site or prototype before you ask them to try some goal-oriented tasks.
# Don't Lead Participants
A good task prompt is one that doesn't lead the participant. Take some time to absorb this idea. Not leading means you're not putting words into people's mouths. Let them tell you what they think. Don't tell them what they should think. It's a baseline research skill. This is also one of the most powerful things about open-ended tasks: you're willing to let the chips fall where they may, and in so doing, you're likely to discover something new.
Goal-oriented tasks should follow the same principle. Even though you're giving someone a goal ("find something that appeals to you and buy it"), don't lead participants by telling them how to do it ("click the add-to-cart button"). This way you're letting the UI do its work all by itself.
# Post-Task Questions
Asking a participant questions after their task is an essential study building block. Questions, like scales, let you measure things. You might wonder why you should measure anything at all. It’s true that often the insights from basic usability testing come from simply observing what people do and listening to what they say if they think out loud. But measuring is valuable for a few reasons.
# Accelerate Analysis
It can be quite time-consuming to watch many test recordings. Measurements give you a more objective way to sift through the recordings. You might choose to start by watching the sessions where people gave the poorest ratings to the prototype and try to find out why. Or you might look at the sessions of those who gave it the best scores to see what’s working well. Data blocks are the SoundingBox feature that makes this possible.
# Persuade with Numbers
If you’re presenting the findings of a test, having some measurements to support your claims can be more compelling than qualitative data alone. Even if you’re working with a small sample size, measuring still provides a more objective way of evaluating how an experience performed.
# Compare and Track Performance
Let’s say you run a user test on a prototype, glean insights from the qualitative data, re-design the prototype and then test it again. How do you determine if the design improved? Without taking measurements in both tests, it can be hard to say objectively if a design improved.
# What to Measure
The type of goal you’ve chosen can help determine what to measure. For example, if you want to get a read on the usability of your site, you will want to ask questions that measure how easy people felt the site was to use, how successful they felt attempting the tasks, and what kinds of problems they encountered. If you’re unsure about how to ask these questions, SoundingBox provides question templates designed to measure different aspects of experience like usability and conversion likelihood.
# Combine Qualitative and Quantitative
When choosing what to measure, it’s often a good idea to ask a mix of quantitative and qualitative questions. Quantitative questions (like scales) help you learn how well your design is performing. Qualitative questions (like open-ended text questions) help you understand why. The SoundingBox dashboard is designed to help connect your quantitative data with the qualitative through data blocks.
# Go Deeper
If you haven't already, check out some of our other how-tos. They provide additional detail on how you can create studies to address different research goals.
- Find usability problems
- Test prototypes
- Compare design versions
- Learn from competitors
- Learn about customer journey map research
Comparative and competitive tests are based on the split test study architecture.